Machine with more than 128GB memory?

Is there anybody with access to a machine that runs with more than the typical 128GB RAM limit?

I am mostly looking for Core i9 machines that have 512GB running.

In my experience RC gets really slow once RAM usage spills over into paging…

If it does, something is wrong. Under normal circumstances it will not use virtual memory, I mean.

This bug will occur at any amount of RAM (afaik). It might even use more than it would under normal circumstances, which might make upgrading futile.
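If you want to check whether RC is actually spilling into the pagefile rather than just filling RAM, you can watch its commit size and page-fault counter while alignment runs. Here is a rough sketch in Python with psutil; the process name "RealityCapture.exe" is my assumption, and the pagefile/num_page_faults fields are Windows-only:

```python
# Rough sketch: poll RealityCapture's memory counters during alignment to see
# whether it is really hitting the pagefile. Requires psutil and Windows,
# since the pagefile and num_page_faults fields are Windows-only.
# The process name "RealityCapture.exe" is an assumption - adjust if needed.
import time
import psutil

def find_rc():
    for p in psutil.process_iter(["name"]):
        if p.info["name"] == "RealityCapture.exe":
            return p
    return None

rc = find_rc()
while rc is not None and rc.is_running():
    m = rc.memory_info()  # rss = working set; pagefile = committed bytes
    print(f"working set: {m.rss / 2**30:6.1f} GB | "
          f"commit: {m.pagefile / 2**30:6.1f} GB | "
          f"page faults: {m.num_page_faults} | "
          f"system RAM used: {psutil.virtual_memory().percent}%")
    time.sleep(5)
```

If the commit size keeps climbing far past physical RAM while the page-fault counter explodes, that is the paging slowdown; if not, the bottleneck is somewhere else.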

I experience this every time we go above 2200 photos. It surely depends on the feature detection settings etc…, but during alignment I am sure it's the memory limitation that slows things down tremendously.

The question I have, though, is whether more RAM is possible at all. How are the AWS cloud nodes performing, with 480GB RAM I think?

Or is anyone successfully using an i9 CPU with more than 128GB RAM on some X299 board?

In https://80.lv/articles/full-photogrammetry-guide-for-3d-artists/, Vlad says:

“All the calculations except the camera alignment doing in RealityCapture out of the core. Imagine a 3 billion polygon mesh on just 16Gb of RAM!”

Is your problem during Alignment?

Yes, during alignment. Last alignment took 13 days… 2000 photos, 50MP…

Hi Daniel,

although 50MP is quite something, I am pretty certain that 13 days is completely off the scale. There is something wrong.

Is the object very complex? Have you tried preview? What happens if you reduce the resolution (either in RC or manually beforehand)?
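If you want to test the reduced-resolution idea without touching your originals, a batch downscale would do it. A rough sketch assuming Pillow and a folder of JPEGs; the folder names are placeholders, and it keeps the EXIF so RC still gets the focal length:

```python
# Rough sketch: downscale a folder of photos to half resolution into a new
# folder, preserving EXIF (RC reads focal length etc. from it). Requires
# Pillow; the folder names and the *.jpg filter are placeholders.
from pathlib import Path
from PIL import Image

src = Path("photos_full")   # hypothetical input folder
dst = Path("photos_half")   # hypothetical output folder
dst.mkdir(exist_ok=True)

for f in src.glob("*.jpg"):
    with Image.open(f) as im:
        half = im.resize((im.width // 2, im.height // 2), Image.LANCZOS)
        params = {"quality": 95}
        if "exif" in im.info:
            params["exif"] = im.info["exif"]  # keep camera metadata intact
        half.save(dst / f.name, **params)
```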

If nothing works, I think that would be worth a bug-report…

I am very certain this is caused by the 128GB memory limitation, which is constantly maxed out and hence paged to disk. Thus I am wondering if anybody has a more capable machine running.

This is only the alignment part, so there is no preview option there… I could for sure decrease the feature detection amount, but I think the quality I am aiming for justifies the sensitivity/initial feature count settings.

What about the idea, which I’ve only just recently seen suggested, that RC doesn’t use those carefully-maximised Features, and/or those Tie-points, and/or the sparse point cloud, in Reconstruction. RC uses those only for the purpose of Aligning the photos, then makes no further use of them, but starts afresh (with the hopefully well-aligned photos) for Reconstruction.

If that’s so, then sensitivity/feature detection/feature count need to be only high enough to achieve good photo Alignment, but apart from that don’t contribute to final result quality.

I have done a 2500-image reconstruction at 21MP and it took about 12 hours in normal resolution. The resulting model had 360M polygons, and I simplified it to a 5M polygon object with 45 8K texture maps. It runs nicely as a VR model.

However… when I tried to reconstruct from the same images, I had to try at least 4 times before success… So I guess I was lucky the first time.

The machine I used for reconstruction is a Ryzen 1800X + 64GB RAM + 1080 Ti + 500GB M.2 NVMe SSD.

Hi Daniel,

my suggestion wasn’t supposed to be a mutilating workaround but a way to pin down the problem!  :slight_smile:

If the alignment succeeds with a lower resolution, it might be a clue.

Since RC advertises 7000 images and much more, I cannot imagine that your image set will push it over the limits…

It's fine, I am almost certain it's the alignment settings I use that cause the intense calculation. Yet lowering the feature definition ultimately lowers the accuracy as well.

I'll go down the machine route with 512GB RAM, so if anybody here has such a machine running, please share your impressions…

Hi Daniel,

We have a machine here with dual Xeon E5's, 256GB RAM and Titan X SLI. We are currently testing it as an RC machine and it would be great to get some comparison/benchmark info. I have very little experience with the software but could run some tests for you?

Hey George, are you running Windows 10 Pro then? I would be curious to see RC actually using more than 128GB RAM.

If you happen to have a decent asset, like 2000+ images at 40MP each, I can pass you some alignment settings that should surely use up your memory.

Maybe you guys are interested in this benchmark?

https://support.capturingreality.com/hc/en-us/community/posts/115001227911-Hardware-Optimisation-Benchmarking-Shenanigans-?page=4#community_comment_115001292091

Yes indeed - what happened to that initiative?

Hi Daniel, 

It's currently running Windows 7 Pro, which can only use 192GB of the installed RAM. I am currently running the benchmark that Ivan made with an asset of approx. 1200 images at 17MP. I will post the benchmarking data when it's complete.

I will have a chat with a colleague and see if he has a more taxing asset!

Oh, and Daniel, could you not just post the settings here?

Did you raise the Max Features and/or Preselector to a high factor?

I have just posted the result in the benchmarking thread :slight_smile:

Hello,

I have the following issue: when calculating depth maps, 98% of RAM is used.

Also, when I check the GPU, the CUDA load jumps between 20% and 90%; it is not stable like the CPU calculation, which runs at 100%.

project (4105 pics, 9,997,872 point count)

MB X570, R9 3900, 2080 Ti, 64GB RAM
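The CUDA load bouncing between 20% and 90% usually means the GPU is waiting for data, so it is worth logging GPU utilisation next to system RAM to see whether the dips line up with memory pressure. A rough sketch that shells out to nvidia-smi (assumed to be on the PATH) and uses psutil for the RAM side; a single GPU is assumed:

```python
# Rough sketch: log GPU utilisation and memory next to system RAM every two
# seconds, to see if GPU dips coincide with RAM pressure. Assumes nvidia-smi
# is on the PATH and a single GPU; psutil covers the RAM side.
import subprocess
import time
import psutil

while True:
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    gpu_util, gpu_mem = [s.strip() for s in out.splitlines()[0].split(",")]
    print(f"GPU: {gpu_util}% util, {gpu_mem} MiB used | "
          f"system RAM used: {psutil.virtual_memory().percent}%")
    time.sleep(2)
```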