Using Photogrammetry


I am assuming everyone saw the UE4 Kite demo.
For the last few months I've been using photogrammetry, and after watching the "GDC 2015: Creating the Open World Kite Real-Time Demo in Unreal Engine 4" talk, I found that Epic was using photogrammetry too. Of course Epic's results were better than mine, and I want to discuss why.
I also want to discuss the tools and techniques: the equipment, the software, and the plugins they used.

So here's what I did (I'm going to use a rock as an example):

I took images of a rock with my Canon 450D.
I used JPEG and Autodesk ReCap 360 (educational license, since I'm a student).
ReCap generated some high-res 3D models; some of them were usable, some were not.

Once the models were generated, there were a few gaps I needed to edit, but I wasn't able to due to the high vertex density. (I'm using 3ds Max, by the way.)
The models were in no way easy to manipulate. I had to crunch a few models down, then retopologize the crunched models, then bake the original high-res onto the retopology. The results were kind of usable.
Does anyone know software that can cap the holes in a 3D scan (maybe something similar to the clone tool in Photoshop, but in 3D)? Also, software that can combine multiple 3D scans?

What Epic did:

I would like to discuss the equipment they used and how can we minimize the cost without making a major impact on the quality.

First, why the 5D Mark III? I'm aware it's an excellent camera, but it's also one of the most expensive. Can anyone suggest another one that can deliver the same results? (If it can shoot video too, that would be great.)
They also mentioned in the presentation that they used uncompressed RAW / 16-bit TIFF, so the camera needs to support that.
Why did they need 3 cameras? Can we pull it off with just one? Also, what lens is recommended?
What are the recommended FOV, camera distance, shutter speed, and other settings for capturing the photos?

For the HDR capture, did they photograph the chrome ball from different angles simultaneously using the 3 cameras?
How exactly do you use the color palette and the other ball in Photoshop for color correction, especially with a 16-bit TIFF?
Will they release the plugin they used to edit the 16-bit images in Photoshop (or another application)?
Was the calibration rig (chrome ball, palette, etc.) used in the 3D scan photos, or just in the photos used for the HDR capture?

How do we remove the shading? Could we just put the asset inside a sphere with the HDR lighting applied to it in 3ds Max, render to texture using mental ray, then subtract the result in Photoshop?
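For what it's worth, here is a rough sketch of that idea in Python (my own experiment, not Epic's pipeline). Since shading is roughly multiplicative, dividing the photographed texture by the baked lighting render tends to work better than subtracting it. The file paths are placeholders:

```python
# Rough albedo extraction: divide the baked texture by a render of the
# HDR lighting applied to a white version of the model.
# Assumes both images are the same size; paths are placeholders.
import numpy as np
from PIL import Image

def delight(baked_path, lighting_path, out_path, eps=1e-3):
    baked = np.asarray(Image.open(baked_path)).astype(np.float32) / 255.0
    light = np.asarray(Image.open(lighting_path)).astype(np.float32) / 255.0
    # Dividing out the lighting undoes multiplicative shading;
    # eps avoids blow-ups in fully dark (unlit) areas.
    albedo = np.clip(baked / np.maximum(light, eps), 0.0, 1.0)
    Image.fromarray((albedo * 255).astype(np.uint8)).save(out_path)
```

This is only an approximation (it ignores bounced light and specular), but it is the same basic idea as the "divide by lighting" layer trick in Photoshop.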

I struggled a lot working with 3D scans, and I was hoping we could create some sort of step-by-step process, from taking the photos all the way to getting them into the engine. I managed to get a lot of data from my images; I just didn't know how to manage it all.

Also, in the presentation they mentioned that they used Agisoft as their photogrammetry solution.

I am trying to create a detailed file to figure out how much money I need to raise in order to use these techniques in my project.

Thank you in advance.

P.S.: If we can add drones to the mix and capture DEM data, that would be nice.

Please write down everything you know, and at the end I will gather it all into a detailed PDF guide.

There’s a number of things

The gray/chrome spheres are standard for capturing HDR. The chrome ball is for capturing the actual image, and the gray one is for checking light levels; there's information online on how to do that type of thing.
As for the actual geometry, the best thing is to take the photos on an overcast day so you don't get sharp lighting. There's also better software for that, like Agisoft PhotoScan.
I don't think that specific camera is required; really, any DSLR is going to give you good results.

Thank you for the reply,
I didn't know about the gray ball, but I kind of guessed why it was used.
I'm aware that the chrome ball is for the HDR image capture; I was just listing the steps. What I was wondering is why they needed 3 cameras. Did they use the 3 cameras to take simultaneous pictures, or were there 3 people, one camera each?

Yes, my 450D gave good results, but I am aiming for great. The project I'm working on is more of a graphics demonstration, which is why I need to get the best 3D scans possible. What camera would you expect to give the best results without going overboard on price? And what lens would be best? Also note that I might attach this camera to a drone to scan large structures or extract DEM data.

Anyway, what I'd really like to know is how they were able to manipulate the scans, because 3ds Max couldn't handle them. How can I cap a rock, for example? Or join scans?

And how can I get my hands on the 16-bit TIFF editor (the one that replicates the work from the brighter image onto the darker one)? I doubt they will release it.

Agisoft is designed to work with scans; there's also MeshLab, which is a tool for cleaning up scans.

I doubt they used 3 cameras for the HDR captures. They either had multiple people taking photos so that the 3D scanning would go faster, or maybe they made a rig that allowed them to take 3 photos of an area at once. Many 3D scan studios use a setup with around 20 DSLRs so that all photos can be taken at once, for things like scanning people, where you don't want the subject moving between shots.

I don't know what would be considered the best camera for it. Like I said, pretty much any DSLR will give you good results; you'll probably want one with the least noise at high ISO levels, so that you can take pictures quickly and sharply. You'll also want to avoid shallow depth of field as much as possible.

Okay, thanks. I'll do some research on what we've talked about so far, and I'll post it here when I'm done.
And if you can think of anything else, please let me know in this thread.
Thanks again.

While I can't answer most of your questions, I can tell you that the color palette (it's probably the X-Rite ColorChecker) is used when you import the RAW photos into Photoshop to do automatic color correction.
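As a rough illustration of what that kind of correction does under the hood (this is my own sketch, not how Photoshop actually implements it): you sample a patch you know should be neutral gray and scale each channel so it lands on a target value. The patch coordinates here are hypothetical; you'd pick them from your own photo:

```python
# White-balance sketch from a neutral reference patch
# (e.g. a gray patch on an X-Rite ColorChecker).
import numpy as np

def correct_from_gray_patch(img, patch_box, target=0.5):
    """img: float32 HxWx3 in [0, 1]; patch_box: (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = patch_box
    # Average color of the region that should be neutral gray.
    patch_mean = img[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
    # Per-channel gains that map the patch onto the target gray.
    gains = target / np.maximum(patch_mean, 1e-6)
    return np.clip(img * gains, 0.0, 1.0)
```

A real ColorChecker workflow fits a full color matrix from all 24 patches rather than a single gray gain, but the single-patch version already fixes most color casts.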

Edited to add: I also just noticed one of the lenses they used: an 8mm f/3.5. Super-wide-angle lenses will always be more accurate on a full-frame sensor than on an APS-C one. You can use that lens on your 450D, but it won't behave like 8mm on your camera; it'll be a ~13mm equivalent. That's not to say you need a 5D Mk III, you just need a full frame. So for Canon cameras, I believe that leaves you with only 2 choices: the 5D or the 1D, which is even more expensive.
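A quick way to see the crop-factor math behind that ~13mm figure (1.6 is Canon's APS-C crop factor):

```python
# Full-frame-equivalent focal length on a crop sensor.
# 1.6 is Canon's APS-C crop factor; Nikon/Sony APS-C is ~1.5.
def equivalent_focal_length(focal_mm, crop_factor=1.6):
    return focal_mm * crop_factor

# The 8mm lens from the talk frames like a ~13mm lens on a 450D:
print(equivalent_focal_length(8))  # 12.8
```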

Well, I found the 6D to be much cheaper than the 5D.
Both cameras are full frame, and there are only minor differences between them.
The 6D also has GPS, which might help with the DEM capturing (or not).
On the Agisoft website there are a bunch of video tutorials and PDF instructions that you can follow.
Also, for information about anything related to HDR, I found a very useful article on 3ds Max's feature for using unedited HDRs.
As for editing the 3D scans, MeshLab does seem to do the job.
If I get funded, I'll post a much more detailed update in a few weeks/months, once I've tested everything, and I'll document it.

Not sure what a Roundshot VR Drive is. Any idea?

It’s probably this: Home – Roundshot – fast 360 degree panoramic equipment

It was likely used to generate a skybox so the lighting could be matched.

Just an update: I did not forget this thread, but I'm still doing research, getting equipment, and experimenting. This won't be done soon. Meanwhile, I'd recommend joining the 3D Scanning Users group on Facebook; they will help you there.

Sorry it took me a while to get back to this thread.
I've finished my research, or at least a big part of it.
If anyone has questions about this topic, please post here so I can answer them.

Step 1:
Get a camera. Any camera will do for a beginner, but the sharper the image, the less compressed it is, and the higher the pixel count, the better. At this stage, your phone camera will give you decent results.
Step 2:
Get a free trial of Agisoft from their website and experiment.
Start with a simple target; rocks scan really well. Stay away from reflective objects for now.
Try to take the smallest number of pictures you can, for practice. Some scans need lots of pictures, but sometimes that isn't possible, or you may not have time, which is why it's good to practice taking the fewest pictures that still get you usable results. Also, a larger number of pictures doesn't always help; if the pictures are really close to each other, they can introduce errors.
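A rough rule of thumb for planning shot counts (my own heuristic, not an Agisoft guideline): if each photo covers some arc of the object and you want a given overlap between neighbouring shots, the number of shots for one ring around the object is:

```python
# Shot count for one ring around an object, assuming each photo covers
# `arc_deg` degrees of the object's surface and you want `overlap`
# (e.g. 0.6 = 60%) between neighbouring shots. Numbers are illustrative.
import math

def shots_per_ring(arc_deg, overlap=0.6):
    step = arc_deg * (1.0 - overlap)  # new coverage contributed per shot
    return math.ceil(360.0 / step)

# e.g. photos that each cover ~40 degrees, with 60% overlap:
print(shots_per_ring(40))  # 23
```

You'd typically repeat the ring at two or three heights, so multiply accordingly.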

Step 3:
Get more professional
Get a better camera. A DSLR is recommended; a Canon 5D Mark III, a Nikon D810, or a Sony A7R are among the best.
A tripod or monopod is always a good idea. You need to reduce motion blur and noise in the picture, so keep the ISO low and the shutter speed fast; as for the DOF, well, that depends on the target. You need to compromise between these three values.
And as the scans get larger (more pixels, more images), you will need a faster computer with lots of storage and RAM.
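To help with that compromise, here's a quick depth-of-field estimate using the standard thin-lens formulas. The circle-of-confusion value c = 0.019 mm is a commonly quoted figure for Canon APS-C sensors (use ~0.030 mm for full frame); the 50mm/f/8 example is just illustrative:

```python
# Depth of field from the standard hyperfocal-distance formulas.
# All distances in millimetres.
def depth_of_field(focal_mm, f_number, subject_mm, c_mm=0.019):
    # Hyperfocal distance: everything from H/2 to infinity is acceptably sharp.
    H = focal_mm ** 2 / (f_number * c_mm) + focal_mm
    near = subject_mm * (H - focal_mm) / (H + subject_mm - 2 * focal_mm)
    far = (subject_mm * (H - focal_mm) / (H - subject_mm)
           if subject_mm < H else float("inf"))
    return near, far

# e.g. 50mm lens at f/8, subject 2 m away:
near, far = depth_of_field(50, 8, 2000)
print(near, far)  # roughly 1.8 m to 2.3 m in focus
```

Stopping down (larger f-number) widens the in-focus band but costs you light, which pushes ISO or shutter speed; that's the three-way trade-off described above.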

Step 4:
Scan the same items multiple times: once on a cloudy day while also capturing a panoramic HDR, and once with a CPL filter, so you can extract the specular and remove the shading.
A color checker will help you correct the colors.
I haven't gotten to the shading-removal phase yet, but what you need to do is bake the lighting onto the model's texture in Max or Maya using the panoramic image. You should take the images with a chrome ball in them, so you can adjust the panoramic HDR's rotation in Max or Maya.

Hope it helps

I gotta say one thing: be picky about your overcast lighting. Even in overcast conditions I've ended up with a couple of assets whose highlights/shadows are too prominent for my liking, and that's after removing as much shadow/highlight as I could in Lightroom (also, use Lightroom!). If it's still sunny fairly often where you live, you can actually cheat a bit and use shade.

Either way, you NEED a tripod and a remote shutter release if you want decent results. A full-frame camera would be nice, but I don't have the money for that myself, so I use a Canon 600D, which works very nicely. You might be able to get a 550D from eBay (you don't need a 600D over a 550D), but it's still an investment. Another thing: when you capture, take one shot with a tape measure in it so you have a sense of scale. I made the mistake of not doing this, so I had to use guesswork; don't do that if you can help it. XD

Here's a scan and the resulting game model I did the other day. I couldn't avoid the shadow here without some sort of stand for the log, which I, er, didn't have on me.

The scan looks really good! I don't like the game model as much, but I'm sure the process used to create it could be improved.
What I'd like to know is: how much time did you spend in total to finish this log model?
If you make another scanned model of similar complexity, it would also be interesting to see how much faster you get with experience. :wink:

Here is one of my early experiments, I still don’t have a good process for removing shadow data so the albedo on this is pretty crappy:

3D scan:

In game:

As others in the thread have mentioned, though, the camera doesn't matter as much as the lighting. I've gotten good captures with an iPhone 5s on a cloudy day. The trick is good lighting; also make sure to have lots of overlap in your shots and coverage from every angle of the object.

I think 3D scans, LiDAR, etc. are great, but for photoreal texturing these tools seem hard to beat (you can use their database or generate your own photo-scanned textures). Bitmap2Material itself can generate some very impressive materials out of a single photo by letting you split, and then customize (in Substance Designer), the different diffuse, roughness, normal, etc. maps:

Curious as to how this was achieved; apparently it's quite involved:

Then you have point-cloud photogrammetry:

The camera is important, just not critical. With a better camera, you need fewer pictures (if the resolution is higher), the colors can be more vivid, and you might get less noise; but any DSLR, used right, will get you there.

Removing shading and retopology are really hard things to figure out. I've tried many workflows, but when you're working with millions of polygons the process can be super slow, and many programs crash.

Capturing images of an object you want to scan can take minutes (unless you're scanning a building or something large, or something that requires high precision). Processing the images in Agisoft on the highest settings can take hours, depending on your PC's speed, and you need RAM, lots of it. Practicing your scanning technique might not make processing much quicker, but it will help you get it right the first time.
I scanned a tree once with a Nikon D5300 and took 180 RAW pictures of the trunk. I got amazing results: 5 minutes capturing the images, 11 hours processing in Agisoft. Unfortunately, for a split second the cloud cover moved a little and let some light in, which made the pictures I took during that second unusable in Agisoft, and it ruined the scan.

I am still working on photogrammetry, and if there is something that you want to see me work on, please let me know and I will post it here.

Also, if you're really into 3D scanning, check out the Facebook group "3D Scanning Users Group"; there are a lot of experienced people there who might be able to help you.

As was mentioned, the most important thing is lighting. Recently I've invested in studio lights and a background; it gives stunning results :slight_smile:
Now I'll need to buy a better camera, because currently I'm using an old 16MP digital compact camera :stuck_out_tongue:
I'm using Agisoft Standard to make the scans :slight_smile:
Here are examples of my scan and game model; you can see more on my Sketchfab profile:
Deer skull 01 by Mikołaj Spychał on Sketchfab

Croatia Rock 01 by Mikołaj Spychał on Sketchfab

Since this is a thread about 3D photogrammetry, I'd like to share an experiment we did with a large 3D city model generated from aerial photogrammetry and integrated into UE 4.9.

Just for your information, the cathedral dataset was generated with a ground laser scanner, not photogrammetry techniques.