About RAM usage and recommendations

Hi all!

I have been doing some testing regarding RAM usage and have also been reading up on suggested RAM recommendations for RealityCapture. At the moment my system is an AMD Ryzen 5600X, 32 GB RAM and an Asus TUF 3070.

So I have tested RAM usage with 100-photo and 400-photo datasets (45 MP photos). With the smaller dataset I'm seeing a maximum usage of about 23 GB of RAM, which is fine, but with the bigger dataset RC seems to grab as much RAM as possible, to the point that the system becomes unresponsive during processing. It also seems to take hours upon hours.

I tried calculating with this formula: “RAM = features x images x 200 bytes”, but it hasn’t gotten me to any conclusions about the amount of RAM “needed”.
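
For reference, here is roughly how I plugged my numbers in (a quick Python sketch; the 40,000 features per image is just an assumed default, not something I have confirmed from my own settings):

```python
# Alignment RAM estimate: RAM = features x images x 200 bytes.
# NOTE: 40,000 features per image is an assumed default, not a value
# I have confirmed from my own project settings.
FEATURES_PER_IMAGE = 40_000
BYTES_PER_FEATURE = 200

def alignment_ram_gb(images: int) -> float:
    """Estimated alignment RAM usage in decimal gigabytes."""
    return images * FEATURES_PER_IMAGE * BYTES_PER_FEATURE / 1e9

for n in (100, 400):
    print(f"{n} images -> ~{alignment_ram_gb(n):.1f} GB")
# 100 images -> ~0.8 GB
# 400 images -> ~3.2 GB
```

Both results are far below the ~23 GB I actually see, which is why the formula hasn't gotten me anywhere.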

I'm trying to push for high detail while also improving processing times.

So my question is: would a 64 GB RAM and CPU (AMD Ryzen 5950X) upgrade get the job done, preferably with some headroom left? Or should I start thinking of a new system altogether with 128 GB of RAM? The most beneficial upgrade would be a new CPU and 128 GB of RAM, but alas, my current motherboard only supports up to 64 GB of RAM.

Any thoughts on RAM usage? How far would 64 GB of RAM carry me?

Hi Ee-Pe,
it really depends on which step your RAM usage peaked in. The mentioned formula applies to the alignment. For the default setting of 40,000 features per image (an Alignment setting), you can expect the following boundaries:

  • 2,000 images - 16 GB RAM
  • 4,000 images - 32 GB RAM
  • 8,000 images - 64 GB RAM
  • 16,000 images - 128 GB RAM
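
If it helps, these boundaries are just the formula evaluated at 40,000 features per image (a minimal check in Python, using decimal gigabytes):

```python
# Check the table above against RAM = features x images x 200 bytes,
# using the default 40,000 features per image (decimal gigabytes).
for images in (2_000, 4_000, 8_000, 16_000):
    ram_gb = images * 40_000 * 200 / 1e9
    print(f"{images:>6} images -> {ram_gb:.0f} GB RAM")
# 2,000 -> 16 GB; 4,000 -> 32 GB; 8,000 -> 64 GB; 16,000 -> 128 GB
```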

As you can see, it also depends on the settings used. With such HW you should be able to process 400 images easily.

Here you can see in which processes RAM is used:
Alignment:
Feature detection (CPU, RAM, Storage only)
Matching (CPU, RAM, Storage only)
Aligning Images (CPU, RAM, Storage only)
Processing (CPU, RAM, Storage only)

Reconstruction:
Depth map calculation (Storage, RAM, GPU - CUDA, VRAM only)
Decompose for Large scene reconstruction, Depth map grouping (CPU, RAM, Storage only)
Model creation / Meshing (CPU, RAM, Storage only)
Meshing (CPU, RAM, Storage only)

Texturing:
UV unwrap (CPU, Storage, RAM only)
Texture reprojection (CPU, Storage, RAM only)
Model preprocessing (Storage, RAM, light CPU, GPU - CUDA, VRAM only)
Model texturing (heavy CPU, Storage, RAM only)

Regarding the HW, you need to follow these basic rules:
The processor's clock speed is more important than the number of cores. So a 12-core processor overclocked to 5 GHz will give you better results than a 32-core processor at 3 GHz.

For the GPU it's the same: clock speed is more important, as is VRAM. RealityCapture uses the GPU only in a part of the reconstruction (depth-map calculation) and in the texturing processes; other than that, RealityCapture mainly uses the CPU. That's why I would suggest focusing on the highest core-clock CPU (ideally more than 5 GHz) with a minimum of 12 and ideally 16+ cores per socket.

Hi Ondrej,

I am currently working on processing a large mapping project in RealityCapture, for which I've collected around 30,000 images. I divided the area into two halves, each containing about 15,000 images, and processed the image alignment. Initially it didn't work with the whole area of 30,000 images, as the program reported an "Out of Memory" error. After splitting the images into halves, they aligned correctly, and I assigned GCPs and realigned them. However, when I now try to create a model from these 15,000 images, it almost immediately reports an "Out of Memory" error again. If I limit the model generation area to about 1/5, then it starts processing the model. I am using the default settings in Normal mode. What factors play a role in allowing the model to be created from the entire area of 15,000 images at once, without having to divide it into smaller parts?

I forgot to mention that my PC is an 8-core Ryzen 7, 128 GB RAM and an NVIDIA GeForce RTX 3070.
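
Plugging my numbers into the alignment formula mentioned earlier in the thread (a rough sketch, assuming the default 40,000 features per image) at least seems to match what I saw during alignment; the meshing error is presumably a separate issue:

```python
# Alignment RAM estimate (RAM = features x images x 200 bytes),
# assuming the default 40,000 features per image.
for images in (30_000, 15_000):
    ram_gb = images * 40_000 * 200 / 1e9
    print(f"{images} images -> ~{ram_gb:.0f} GB (vs. my 128 GB)")
# 30,000 images -> ~240 GB  (would explain the alignment Out of Memory)
# 15,000 images -> ~120 GB  (just about fits, matching what I saw)
```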

Hi @premyslplch,
as you can see in my previous post, meshing uses this HW:
Reconstruction:
Depth map calculation (Storage, RAM, GPU - CUDA, VRAM only)
Decompose for Large scene reconstruction, Depth map grouping (CPU, RAM, Storage only)
Model creation / Meshing (CPU, RAM, Storage only)
Meshing (CPU, RAM, Storage only)
As it is showing the error for you shortly after meshing starts, I suppose it is connected with Storage, RAM, GPU - CUDA, VRAM. But storage and RAM will probably have the biggest influence.
How much free space do you have on your disks? For such a big model it is better to use the divider workflow: Processing of the Habitat 67 Scan Data with the 3D Divider Script | Tutorial