Draft vs Normal vs High Detail

Hello, I use Capturing Reality to reproduce motocross and superbike tracks.
We take at least 1,000 images or more, and I have some doubts about which reconstruction mode to use.
First of all, I should say that we use the mesh to calculate a heightmap, so we don’t need a very large polycount, 5-6 MLN max.
But we need accuracy, and here’s my dilemma.
If I reconstruct in Normal, I get a 100 MLN model to simplify.
If I reconstruct in High Detail, I get a 300 MLN model to simplify.
If I simplify both to 5 MLN, do I get similar results?
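To put numbers on the dilemma, here is a quick back-of-the-envelope check in plain Python (triangle counts taken from the question above):

```python
# Decimation ratios implied by the question: both reconstruction modes
# must be reduced to the same ~5 MLN budget for the heightmap.
target = 5_000_000           # max polycount needed for the heightmap

normal_tris = 100_000_000    # Normal reconstruction output
high_tris = 300_000_000      # High Detail reconstruction output

ratio_normal = normal_tris // target   # 20:1 reduction
ratio_high = high_tris // target       # 60:1 reduction

print(ratio_normal, ratio_high)  # 20 60
```

Either way the simplifier discards at least 95% of the source triangles, which is why several replies below argue the starting mode matters little at this budget.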

I don’t know if RealityCapture detects planes and linear surfaces so that it can simplify them more and retain detail in the areas that need it most.

My colleague thinks we can achieve the same result by processing in Normal and simplifying the model to 5 MLN tris.
I think it would be better to calculate in High Detail.

Can you help me understand the process used by your software?

Hi Christian Perasso,

In this case you are using a pretty strong SIMPLIFICATION, so I can say it’s not worth it for you to use HIGH reconstruction mode; NORMAL mode should be OK for your use case. HIGH reconstruction mode recovers more detail in general (sharper results).

To add my 2 cents:
Even going from 100 to 5 mln is such a tremendous simplification that I don’t think it would matter much if you started from 300… :slight_smile:
Why not try it out and post the results?

Thank you for the clarification. I understand that the simplification made by RC applies to all the assets, without any distinction between planes, surfaces, and objects.
For our needs it would be great if RC’s simplification worked in a different mode, because on a straight piece of tarmac we don’t need much detail, while on the curbs and various other objects we need as much detail as possible.
I know it would be a very heavy calculation, but I think a selection tool in RC would be perfect for selecting zones and applying more or less simplification as desired.

Thanks a lot for the comments.
For now I’ll go with Normal detail; maybe in the future I’ll run more experiments and share some info about this topic.


Even if you are going to simplify all the models down to 5 mil,

the High Detail version will be better.

But depending on what you’re doing, it may not be that much better than Normal, and it will take a lot longer.

So it’s worth trying with the Normal version first.

What you are looking for is ZBrush. I looked into it myself a while ago and was quite impressed. It’s not too expensive and one can get results within a day or two. There is a tool that does exactly what you are looking for - you can color the high-detail mesh in areas where you want less simplification. I can’t imagine RC implementing something like that - you can’t have everything in one program, and RC is doing what it is supposed to do well enough. There are also many core features that need improving… :slight_smile:

I personally decided that the in-program simplification is actually not bad at all - it doesn’t just smother everything. Edges are preserved and plain faces are reduced more, within a certain range of course. It also depends on the object. In your case I wouldn’t worry too much. You will have to try a bit for yourself; simplification is not a precise science… :wink: If you have access to Agisoft, I find their simplification tool is not bad either and differs a bit in the resulting details.

Just out of curiosity: what accuracy exactly are you looking for? Are we talking about centimeters or half meters? Since it is - I presume - a dirt track and will be shifted slightly with each race, I can’t imagine you want to be on the cm side. How big is your area and how much is covered by one image, meaning what measurement does one pixel have approximately? Did you use drones?


Why do you think it matters with such a rate of simplification? Are you talking about sub-pixel accuracy of individual vertices or overall geometry? Do you have examples?
Doesn’t it depend on what you are trying to do and how you want to achieve it? I guess that if you have a fixed set of images that you have to work with (aerial images, for example) and need to squeeze the last bit of detail out of your material, then High Detail is probably the only way to go. If, however, you want to scan something close range where you can go back and take more images if you need to, I think that is also a possibility and doesn’t necessarily require High Detail calculation. I have tried that on occasion, and the result in my cases was that High Detail spews out WAY too many points for my needs and also takes WAY too long. Normal detail is also pushing it sometimes, in my opinion. My examples are architectural features of medieval buildings, e.g. a base of 50 cm diameter, and I don’t even have to fill the whole image (10 MP) with it to get results that are good enough to measure the moulding accurately.
But I am keen to learn more! :slight_smile:

Thanks Götz, we use ZBrush, Max, Marmoset etc., but exporting a 300 MLN model is very difficult for space and time reasons, not to mention the time required to open the OBJ.
I think it’s better if I explain a little more of what we do with RC:
-we use a couple of drones to take aerial photos
-we reproduce motocross and superbike tracks from them
-for motocross we simplify the Normal detail model to 4-5 MLN and calculate a heightmap
-we put the heightmap in Unreal and model the props, buildings etc. in substitution of the photogrammetry ones (not exactly, but I think you can imagine the process)
-for the superbike tracks we want to do the same
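For anyone curious what the "calculate a heightmap" step looks like in principle, here is a minimal sketch in plain Python (the grid size and sample vertices are made up; a production pipeline would rasterize the full simplified mesh into the 16-bit grayscale format Unreal expects):

```python
# Minimal heightmap rasterization sketch: bin mesh vertices (x, y, z)
# into a regular grid and keep the highest z per cell. Illustrative only;
# the sample vertices and 2x2 grid below are made up.

def heightmap(vertices, grid_w, grid_h, min_x, min_y, max_x, max_y):
    cell_w = (max_x - min_x) / grid_w
    cell_h = (max_y - min_y) / grid_h
    grid = [[None] * grid_w for _ in range(grid_h)]
    for x, y, z in vertices:
        col = min(int((x - min_x) / cell_w), grid_w - 1)
        row = min(int((y - min_y) / cell_h), grid_h - 1)
        if grid[row][col] is None or z > grid[row][col]:
            grid[row][col] = z
    return grid

# Tiny made-up example: 3 vertices on a 2x2 grid covering a 10x10 m area.
verts = [(1.0, 1.0, 0.5), (2.0, 2.0, 0.8), (7.0, 7.0, 1.2)]
hm = heightmap(verts, 2, 2, 0.0, 0.0, 10.0, 10.0)
print(hm)  # [[0.8, None], [None, 1.2]]
```

A real exporter would also interpolate the `None` cells and quantize the heights to the target bit depth.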

Photogrammetry is very, very good for motocross and we don’t need much more, but for the tarmac surfaces we have problems.
Laser scan data is better on planes, but it retains so much detail that, for us, it’s a waste of data, even though we only need precision of 4-5 cm (laser data goes down to a few mm).

Being able to reproduce plane surfaces with much less detail than other parts of the tracks would be perfect for us.

Also, I keep learning, and your comments help me a lot.


Hi Christian,

so you seem to be better equipped than I am! And probably more experienced as well… :smiley:
But I am glad if my rambling helps!

So do you have difficulties with the tarmac due to its uniform color, and that is why you need the higher-detail calculation?
Then you might try following the track at a lower altitude next time, so you get distinguishable texture?

As I understand it, you are not primarily interested in the polygon or vertex model but in the heightmap, which is nothing more than a raster image, right? So your problem is not the high polygon count per se, but rather that it is one of the steps to achieving the heightmap?
So what you really need is software that can produce a heightmap from a large point cloud, right? I never used CloudCompare, but as I understand it, that is one of its features. Have you tried that yet? I looked real quick and it says it can handle 2 billion points. No idea about the performance though…

Good luck!

Hi Götz,
I’m really a newbie with RC and the other software; I’ve been working with it for a couple of months.
But I learn quickly and I fly drones a lot…
Yesterday I made the track of Imola in Italy. I worked all day to stitch all the photos (taken with DroneDeploy and a small part with Altizure), but I have some banana problems :-))
Sadly the profile of the Phantom 4 Pro I use is not supported by RC, and instead of getting a linear model I get a broken one.
Maybe I have to set some parameters; tomorrow I’ll make a new post with results and a request for more info.
For now I can tell you that PS worked better with this kind of asset; Imola is 5 km long.
We use almost everything - PS, Zefyr, CloudCompare etc. - and I think that different assets can give better results in one software or another.
For example, a motocross track with a total area of less than 1 square km comes out very well in RC, not much different from PS.
Maybe I made some mistakes, and tomorrow you’ll read about it, but I’m here to learn and find some help.
P.S. I use 24 CPU cores at 3.4 GHz, two Quadro 4000s and 128 GB RAM, but I have some issues with the pagefile.sys: it grows way too large (yesterday 330 GB). Tomorrow I’ll make a post to learn more about that too…

Hi Christian,

that kind of problem (not properly aligned, in Imola) is usually Wishgranter’s specialty to help with. I wonder why he hasn’t posted anything for a couple of days. That’s not like him at all - maybe he’s just taking a break for once… :slight_smile:
Yes, there seem to be specific setups where one software is better than another. I don’t think it is possible to predict it, though. What I came across recently is that somebody suggested flying at different altitudes and also taking angled shots as opposed to only straight down:

chris wrote:

are you shooting straight down?

I find shooting down at 45 degrees with a Phantom 3, doing orbits, works pretty well, rather than shooting in the classic grid pattern.

I did one today that worked well, only around 250 photos though; got a high detail model in an hour or so.

Now I’m just trying to add another 10,000 ground-level photos, and then I’ll have another 2,000 or so heli shots from a previous shoot to add after that. I’ll see how I go with all of these, but I’ll probably run into issues.

and the following posts…

Also the pagefile has been discussed in this forum to some extent - if you haven’t looked already…

Wow, your hardware is quite something - like a new Ferrari compared to my old Fiat… :wink:

Yep, we are using two kinds of photos, the zenithal and the angled ones.
We took 2,500 images at different heights, but RC gave us the banana model.
PS worked perfectly with them; today I’ll take some measurements in the program and compare them with the ones Imola gave to us.
I’ll let you know the accuracy.

I’ll read the previous posts on the pagefile.sys, thanks for letting me know.

Hmm, weird.
Is this error marginal or really obvious?
Did you try using a calibration group for identical lenses?

Btw, you should probably open a new topic, because we are way off the title of this one… :slight_smile:

About the simplification levels and detail settings:

I did quite a bit of testing, but each case will be different.

I was shooting with a DSLR from a heli, so I was quite far away and needed all the resolution I could get.

I found I was getting better building edges and cleaner models after simplifying from a High Detail model vs Normal.

But I was also getting really long processing times, in the 1-3 week range.

It’s best to make sure all your alignment is working well before you try running it at High Detail; otherwise you can just end up wasting your time.

Chris, do you mainly do aerial stuff?
Seems like the scenario I described where you just don’t have many options to take additional images.
Because in the end it is a question of resolution. If you (can) make sure it is high enough, then the inaccuracies of Normal processing can be kept below your intended threshold.

With my projects - often highly irregular historic buildings - it is quite hard for me to predict alignment quality, or rather to find out where there might be something amiss. Of course, the inspection tool is incredibly helpful in that respect, but it does not show actual errors in alignment, which sometimes happen in some obscure corner. Do you have any idea how to pinpoint possible troublemakers easily?

My understanding is that RC uses adaptive subsampling during Simplify, as does ZBrush with Decimation Master, as well as with ZRemesher when retopologizing to transfer detail from high to low. My work is focused on extensive interiors in rock (caves), but I wanted to compare the results of a workflow through ZBrush vs. simplifying in RC. The differences weren’t night and day, and I question whether it’s worth the trouble of round-tripping through ZB. I’m clear this isn’t one-size-fits-all; I agree there’s a key difference between shooting from some distance and gleaning all the high-frequency detail out of the imagery, versus working close to your subject, as I am, capturing 52 MP images from 2-6 meters.

For my comparison test I captured a chair with highly detailed wood carvings, contrasted by broad smooth sections in the upholstery, making it easy to see what adaptive subsampling was actually doing to preserve detail in the carved parts while minimizing polycount across the domed cushions. The snip of the scene in UE4 doesn’t convey what’s needed, but flying around the chairs up close, from one to the other, I can say it’s really hard to see much difference. Reconstruction in High produced 300 M tris; the ZB chair on the right is 700,000 tris (from 350,000 quads), the simplified (uncleaned) RC chair on the left is 500,000 tris. Still a heavy asset, but that wasn’t the point of the test.

As for how smart the adaptive subsampling is: I’ve not tweaked ZB, but out of the gate I’d say it threw less detail into those cushions than RC. The ability to tweak would seem critical to optimizing a mesh, as the world isn’t one-size-fits-all in what it dishes up in 3D capture.
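To illustrate what "adaptive" means here, a toy sketch in plain Python (my own illustration, not RC’s or ZBrush’s actual algorithm): rank samples of a 1D height profile by a crude curvature measure and keep only the bumpy ones, the way a decimator keeps triangles on carvings but collapses them on flat cushions:

```python
# Toy adaptive subsampling on a 1D height profile: flat runs are thinned
# aggressively, bumpy runs keep their samples. A crude sketch of the idea
# behind adaptive decimation, not any real package's algorithm.

def adaptive_keep(profile, threshold):
    keep = [0]  # always keep the endpoints
    for i in range(1, len(profile) - 1):
        # Second difference as a cheap local curvature estimate.
        curvature = abs(profile[i - 1] - 2 * profile[i] + profile[i + 1])
        if curvature > threshold:
            keep.append(i)
    keep.append(len(profile) - 1)
    return keep

flat = [0.0] * 5                      # a flat stretch (cushion/tarmac)
bumpy = [0.0, 0.4, 0.1, 0.5, 0.0]     # a carved/curbed stretch

print(adaptive_keep(flat, 0.1))   # [0, 4] -> interior samples dropped
print(adaptive_keep(bumpy, 0.1))  # [0, 1, 2, 3, 4] -> bumps survive
```

The tweakability Benjamin mentions corresponds to exposing `threshold` (and a per-region weighting) to the user.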


Reading the above posts and weighing them against my own setup, it does seem that working with high-resolution images from close range and then using High quality in Reconstruction is overkill.

Hi Benjamin,

nice work!
Also thank you for the test!
Could you upload an untextured screenshot, so that the polygons are visible?

I also have a feeling that using Zbrush can be really good for extreme situations, but might rarely be worth the effort. Of course, the internal simplifier in RC (or others) won’t do as good a job as a carefully prepared model in Zbrush, but I think good enough for many cases.

And I say it again, from 300 mil to about 600,000 is a tremendous step. On average, that means 500 polygons will be merged into one! Imagine how much of the small detail will be lost.
The whole thing might be different if one tries to stay within a certain limit. From what I read about the subject, I thought that a factor of 10 is already pushing it in terms of losing too much detail. And especially if you cannot reduce the polycount very much, the quality of the algorithms should play a larger role than with extreme examples, because every polygon counts. And if the algorithms leave too much detail on even surfaces, those polygons will be missing from the details.

Hello Götz,

I lied: the RC mesh isn’t 500,000 tris, it’s 1 M, but that wouldn’t explain the lack of much difference in the outcome. Of course, I’ve not pushed this down to a really light asset, like 50,000, which in a big set is where things really have to perform. I’m attaching a couple of screenshots of wireframes; here’s the 700,000-face mesh from ZB:


And here’s the 1 M tris mesh from RC:


Clearly, way more can be done to reduce polys on the cushions, not to mention elsewhere in the carved parts.

Hi Benjamin,

thank you!
Would be interesting to do another one with RC down to 700k as well, from the original of course.
Because, to be honest, right now I like the RC one better - the detail seems to be crisper.
And ZB doesn’t seem to have that many fewer polygons on the cushions either.
I wonder if the difference is the structured vs chaotic distribution.
If you look, for example, at the edge of the seat, the difference between flat and carved (tiny knobs) areas is really pronounced, whereas ZB has more or less the same density…

Nevertheless, they are both really good - how many images if I may ask?

And honestly, if you need to downscale them even further to 1/10, I really don’t think we need to talk about Normal or High anymore, but rather Preview or Normal… :lol:
Because in the end the main difference between the three is the image resolution: 1:1 in High, 1:2 in Normal and 1:4 (I believe) in Preview. So as long as there aren’t any other corners being cut, that is it. And if, like in your case, you start with an incredible resolution, you can very well deal with a little bit less… :slight_smile:
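Assuming those 1:1 / 1:2 / 1:4 factors apply per image axis (my reading of the post; the 5472x3648 source size is a made-up example typical of a 20 MP drone camera):

```python
# Effective pixel counts per RC detail mode, if the stated downscale
# factor is applied to each image axis. Source size is hypothetical.
w, h = 5472, 3648   # ~20 MP source image (made-up example)

for mode, factor in [("High", 1), ("Normal", 2), ("Preview", 4)]:
    pixels = (w // factor) * (h // factor)
    print(f"{mode}: {pixels:,} px")
```

Under that reading, Normal works with a quarter of the pixels and Preview with a sixteenth, which is why starting from very high-resolution imagery leaves headroom.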

Could you add a preview shot to the selection?


167 images on the chair. I’m not sure what you mean by “add a preview shot to the selection”.

There’s something else here which I’m told by someone much farther down the road in UE4 needs to be considered. According to him, “One of the issues with the tiny UV islands isn’t an issue for the texture but with the lightmap and baked lighting. When setting the object to static and using static lighting, those UVs can create incorrect or missing lighting if used in the second UV channel.”

I’ve not seen for myself what issues this causes in “the second UV channel”, much less why one needs a second UV channel, but not to jump into that just now. Would like to settle these issues first to simplify the conversation. I’m tasked with some work before I can run your proposed test, but can do.