Hi,
I assume everyone has seen the UE4 Kite demo.
For the last few months I've been experimenting with photogrammetry, and after watching the "GDC 2015: Creating the Open World Kite Real-Time Demo in Unreal Engine 4" talk, I found that Epic was using photogrammetry too. Of course, Epic's results were better than mine, and I want to discuss why.
I also want to discuss the tools, techniques, equipment, software, and plugins they used.
So here's what I did (I'll use a rock as an example):
I took images of a rock with my Canon 450D.
I shot JPEGs and processed them with Autodesk ReCap 360 (educational license, since I'm a student).
ReCap generated high-res 3D models; some were usable, some were not.
Once the models were generated, there were a few gaps I needed to patch, but I couldn't because of the high vertex density. (I'm using 3ds Max, by the way.)
The models were nowhere near easy to manipulate. I had to crunch (decimate) a few models, retopologize the crunched meshes, then bake the original high-res detail onto the retopology. The results were only somewhat usable.
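To be concrete about what I mean by "crunching", here's a minimal sketch of that decimation pass using the open-source pymeshlab library (the filter name is from recent pymeshlab releases and may differ in older versions; file names and the target face count are just placeholders):

```python
import pymeshlab  # pip install pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh('rock_highres.obj')  # placeholder: the dense ReCap export

# Quadric edge-collapse decimation: crunch the dense scan down to a
# face count that 3ds Max can handle, before retopology and baking
ms.meshing_decimation_quadric_edge_collapse(
    targetfacenum=50_000,
    preservenormal=True,
)

ms.save_current_mesh('rock_crunched.obj')
```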
Does anyone know of software that can cap the holes in a 3D scan, maybe with something similar to Photoshop's clone tool but in 3D? Also, software that can combine multiple 3D scans?
What Epic did:
I would like to discuss the equipment they used and how we can minimize the cost without a major impact on quality.
First, why the 5D Mark III? I'm aware it's a top-end camera, but it's also one of the most expensive. Can anyone suggest another that can deliver the same results? (If it can shoot video too, that would be great.)
They also mentioned in the presentation that they used uncompressed RAW / 16-bit TIFF, so the camera needs to support that.
Why did they need three cameras? Can we pull it off with just one? Also, what lens is recommended?
What are the recommended FOV, camera distance, shutter speed, and other settings for capturing the photos?
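To show where I'm starting from: FOV and coverage follow from the pinhole camera model, so at least that part can be sanity-checked. A quick sketch (the 22.2 mm sensor width is my 450D's; the 50 mm lens and 1 m distance are just example values):

```python
import math

def ground_coverage_m(focal_mm: float, sensor_width_mm: float, distance_m: float) -> float:
    """Width of the area a single frame covers at a given distance (pinhole model)."""
    fov_rad = 2 * math.atan(sensor_width_mm / (2 * focal_mm))  # horizontal FOV
    return 2 * distance_m * math.tan(fov_rad / 2)              # simplifies to d * w / f

# Example: Canon 450D (22.2 mm sensor width) with a 50 mm lens, 1 m from the rock
print(ground_coverage_m(50, 22.2, 1.0))  # ~0.44 m covered per frame
```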
For the HDR capture, did they photograph the chrome ball from different angles simultaneously using the three cameras?
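My understanding (not from the talk, just how HDR probes are usually built) is that you merge bracketed exposures of the ball into a radiance map; something like this with OpenCV, where the file names and shutter times are made up:

```python
import cv2
import numpy as np

# Bracketed exposures of the chrome ball (placeholder file names;
# the shutter times must match how each frame was actually shot)
files = ['ball_1_250.jpg', 'ball_1_60.jpg', 'ball_1_15.jpg']
times = np.array([1 / 250, 1 / 60, 1 / 15], dtype=np.float32)
imgs = [cv2.imread(f) for f in files]

# Recover the camera response curve, then merge brackets into a radiance map
response = cv2.createCalibrateDebevec().process(imgs, times)
hdr = cv2.createMergeDebevec().process(imgs, times, response)

cv2.imwrite('ball_probe.hdr', hdr)  # 32-bit Radiance HDR output
```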
How exactly do you use the color chart and the other (gray) ball in Photoshop to color correct, especially on a 16-bit TIFF?
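The simplest version I can think of is scaling each channel so a known neutral patch of the chart reads gray; here's a sketch with numpy and the tifffile library (the patch pixel coordinates are made up and would need to be picked per shot):

```python
import numpy as np
import tifffile  # pip install tifffile

img = tifffile.imread('rock_frame.tif').astype(np.float64)  # 16-bit RGB TIFF

# Mean RGB of a neutral-gray patch of the chart, sampled from the frame
# (these pixel coordinates are placeholders)
patch = img[100:120, 200:220].reshape(-1, 3).mean(axis=0)

# Per-channel gains that make the gray patch neutral
gain = patch.mean() / patch
corrected = np.clip(img * gain, 0, 65535).astype(np.uint16)

tifffile.imwrite('rock_frame_corrected.tif', corrected)
```

But is that enough, or did they do a full per-patch calibration against the chart's reference values?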
Will they release the plugin they used to edit the 16-bit images in Photoshop (or another application)?
Was the calibration rig (chrome ball, color chart, etc.) used in the 3D-scan photos, or just in the photos used for the HDR capture and such?
How do we remove the shading? Could we just put the asset inside a sphere with the HDR lighting applied to it in 3ds Max, render to texture using mental ray, and then subtract the result in Photoshop?
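Or should it be a division rather than a subtraction, since lighting is multiplicative? That's what I'd try first; a sketch with numpy/tifffile, assuming the lit texture and the baked lighting share the same UV layout (file names are placeholders):

```python
import numpy as np
import tifffile

# Both textures must share the same UV layout (placeholder file names)
lit = tifffile.imread('rock_texture_lit.tif').astype(np.float64) / 65535.0
baked = tifffile.imread('rock_baked_lighting.tif').astype(np.float64) / 65535.0

# Strictly, this should be done in linear color space, not gamma-encoded
eps = 1e-4  # guard against division by zero in fully shadowed texels
albedo = np.clip(lit / np.maximum(baked, eps), 0.0, 1.0)

tifffile.imwrite('rock_albedo.tif', (albedo * 65535).astype(np.uint16))
```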
I struggled a lot working with 3D scans, and I was hoping we could put together some sort of 1-2-3 step process covering everything from taking the photos to getting the assets into the engine. I managed to get a lot of data from my images; I just didn't know how to manage it all.
Also, in the presentation they mentioned that they used Agisoft as their photogrammetry solution.
I am trying to create a detailed budget to figure out how much money I need to raise in order to use these techniques in my project.
Thank you in advance.
P.S.: If we could also cover drones and capturing DEMs (digital elevation models), that would be nice.
Please write everything you know, and I will gather it all at the end into a detailed PDF guide.