How long do we have to wait for really good "picture to 3D model"?

We know it’s hard and complicated to take 30 pictures of an object and then clean up the resulting polygons and so on. Honestly, it’s often easier to just model it by hand.
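For what it’s worth, the photo-to-mesh step itself can already be scripted end to end with open-source photogrammetry tools; it’s the cleanup afterwards that eats the time. Here’s a minimal sketch in Python, assuming COLMAP is installed and on the PATH, and using placeholder folder names for the photos and output:

```python
# Rough sketch: drive COLMAP's automatic reconstructor from Python.
# Assumes COLMAP is installed and on PATH; folder names are placeholders.
import subprocess
from pathlib import Path

photos = Path("scan_photos")        # the ~30 photos of the object
workspace = Path("scan_workspace")  # COLMAP writes its results here
workspace.mkdir(exist_ok=True)

# One command runs feature extraction, matching, and reconstruction.
subprocess.run(
    [
        "colmap", "automatic_reconstructor",
        "--workspace_path", str(workspace),
        "--image_path", str(photos),
    ],
    check=True,
)
print("Reconstruction written to", workspace)
```

Even so, what comes out is a raw, heavy mesh, which is exactly the part people complain about below.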

When will we be able to get a beautiful model in one minute, without doing the work ourselves? We’ve already seen this with salespeople: online shopping is slowly destroying that job. When will the same happen to 3D artists?

It will probably never happen. 99% of what is done with 3D packages is unrealistic (think of how many artists work on movies creating things like buildings blowing up, aliens, monsters, and matte paintings for environments that don’t exist). There is a huge market for arch viz and other things, but the general idea is to create things you can’t photograph. There are large-scale 3D scanners that Hollywood uses for actors and the like that will do what you’re talking about, but that’s just like mocap: it creates a starting point for the real work. I’ve yet to see any point in creating a digital actor running down the street when you can film it or photograph it; it’s a waste of time. I think the artistry is more about interpreting things, not capturing them exactly. That’s why Pixar’s films do much better than a film like Final Fantasy (that and the uncanny valley).

Also, no 3D scanning system can work with metal or transparent objects, since the reflection either changes based on the viewing angle or you can’t actually see the surface in the first place. When doing 3D scanning you’d be surprised how many things are too reflective.

Not to mention the unusable geometry 3D scanners output, which is fine for a statue or a building (if you have the poly budget), but for animated characters it’s just worthless.

Well, realistically, you’d want to paint the object a completely neutral gray and then re-texture it. I believe this is being done for The Vanishing of Ethan Carter.

3D scanners are getting better, and more importantly the software behind them is getting better. Right now a lot of the geometry from scanning is pretty ugly, but there is a lot of talk about improving it.

The latest version of Meshfusion for Modo is an absolute marvel in how it can do really complicated mesh booleans involving several pieces of organic-looking meshes and then somehow, with some kind of voodoo black magic, produce clean subdivision-ready geometry. It even puts in perfect edge loops, chamfers the edges, etc. ZRemesher (the retopology tool in ZBrush) is pretty impressive too… we usually end up fixing some issues with the low-poly mesh in Maya, but overall I’d say it produces pretty good geometry. How hard would it be to import a scanned mesh into ZBrush, touch up the bad areas, then use ZRemesher to make a game-ready mesh? I think it would be pretty doable.
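Just to make that concrete (and to be clear, this isn’t ZBrush’s or ZRemesher’s actual API): a hypothetical automated pass over a raw scan could be sketched in Python with the open-source Open3D library, using placeholder file names and an arbitrary triangle budget. It only does cleanup and quadric decimation, not the clean quad retopology ZRemesher gives you, but it shows the kind of scripted step the workflow would involve.

```python
# Rough sketch of a scan-cleanup pass with Open3D (not ZBrush/ZRemesher).
# File names and the triangle budget are placeholder assumptions.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("raw_scan.obj")

# Strip the usual scanner junk before decimating.
mesh.remove_duplicated_vertices()
mesh.remove_degenerate_triangles()
mesh.remove_non_manifold_edges()
mesh.remove_unreferenced_vertices()

# Quadric decimation down to a game-friendly triangle count.
low_poly = mesh.simplify_quadric_decimation(target_number_of_triangles=10_000)
low_poly.compute_vertex_normals()

o3d.io.write_triangle_mesh("game_ready_scan.obj", low_poly)
print(f"{len(mesh.triangles)} -> {len(low_poly.triangles)} triangles")
```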

I’m not arguing that 3D scanning is replacing anyone’s job yet, but I wouldn’t be too quick to say that 3D scanning won’t become a major tool for game makers in the future. I’d say maybe 5-10 years from now.

If there’s anything real in a game, then many developers probably use scanning or photogrammetry already. If you’ve got a character that’s based on a real person, it wouldn’t be surprising if they got a scan of the actor.

Have you guys tried the 123D scanning app from Autodesk for Android and iOS? It’s pretty neat, still a lot of bugs, but fun anyway.