Hello,
A couple of things I’ve found with RealityScan: processing takes forever in “Detail” mode yet doesn’t seem to yield a higher-quality model. More triangles and vertices, yes, but there are still some glaring issues with the scan itself. First, the LiDAR points seem to be either not working as intended or non-existent. When scanning flat surfaces (either on an object or something like the table the object sits on) there is always an expected amount of what I’ll call “wibble,” but this issue seems amplified in RealityScan. Basically, anything flat tends to come out wavy or spiky, even when I hit the 200-photo limit. I’ve gone into the app’s settings, but there is essentially nothing there to change. Is there a way to make the LiDAR more precise, or to get the scans to come through in greater detail? I have an iPhone 14 Pro and am sorely disappointed with the app’s performance, especially given that there are videos on this forum with unbelievably detailed scans that I can’t seem to replicate no matter how hard I try. Is this something I’m doing wrong, or is it an issue with the app?
Another, smaller issue I’ve found is that the app refuses to use the macro lens on the iPhone 14 Pro, meaning I can’t get in close to capture details because the telephoto lens is the only one being accessed. Getting in too close to an object to try to capture more detail causes some strange distortion. Can this be changed somewhere to allow the app to use the macro lens when applicable? If not, is this in the works for a future update?
TL;DR - I can’t get my scans to be good quality like the ones in the tutorials and the bread scan.
Thanks in advance for any help.
EDIT: Adding Pictures to help explain.