Distributed computing, how?

Hi,

Can anyone tell me how to compute one scene on multiple computers at the same time, please?

Thank you

Hi Damien

How big a project are we talking about here? How many images?
What is your PC config? CPU + GPU?

hi,

I’m Damien’s coworker; we are currently testing your software on a shoe.
We took 190 pictures with a Canon 7D.
The computer we are using is an i7-5930K with 64 GB RAM and a Titan X.
By the end, the software was using all the RAM, and it took approximately 4 hours to generate a high-detail mesh.
So the question is: can we distribute the computing across multiple computers?

Thank You

Hi Benjamin

For NOW it is not possible to use distributed computation over a network. But stay tuned…

How long do the depth map computation and the meshing take?
Depth map computation can be sped up with an additional GPU; adding, say, a GTX 970 next to the Titan can cut the depth map times roughly in half.

Hi,

I have started the “reconstruction in High Detail”; it has been computing for 4 hours now, using 63 GB of the 64 GB.
I don’t know what it is doing, because the CPU is at 0% (?), with occasional peaks at 100%.
The estimated finish time was 3:46 pm, but it is currently 4:33 pm…
I will let it keep computing to see the result.

Yeah, distributed computing would be great for us because we have a render farm here, and I can code a plugin to integrate the software into Deadline.
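(To give a rough idea of what such a plugin would do: each Deadline task could simply launch RealityCapture headless from the command line on a node. The Python sketch below is untested; the CLI switch names (-headless, -addFolder, -align, -calculateHighModel, -save, -quit) and the paths are my assumptions and would need to be checked against the RealityCapture CLI reference.)

```python
import subprocess

# Rough sketch of what a Deadline task could run on a render node: a headless
# RealityCapture session that ingests a folder of photos, aligns them, meshes,
# and saves the project. The CLI switch names below are assumed - verify them
# against the RealityCapture command-line documentation for your version.
RC_EXE = r"C:\Program Files\Capturing Reality\RealityCapture\RealityCapture.exe"

def build_rc_command(image_folder, project_path):
    return [
        RC_EXE,
        "-headless",                 # no GUI on the farm node
        "-addFolder", image_folder,  # photos assigned to this job
        "-align",                    # registration step
        "-calculateHighModel",       # the "High Detail" reconstruction
        "-save", project_path,       # keep the project so results can be collected
        "-quit",
    ]

if __name__ == "__main__":
    # Hypothetical paths, just for illustration
    cmd = build_rc_command(r"D:\shoots\shoe\images", r"D:\shoots\shoe\shoe.rcproj")
    subprocess.run(cmd, check=True)  # the farm would monitor this process per task
```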

Hi Damien

Can you contact me at muzeumhb@gmail.com so we can arrange a remote access meeting? Then I can take a look and suggest what can be improved.

Do you plan on providing a list of resource demands for different scenarios, like the one available for PhotoScan?
That would be quite helpful for planning…

Hi Götz

You mean RAM allocations and so on, or something else?

Hi Wishgranter,

yes, exactly!
I never thought it might be an issue till Damien mentioned it.
Guess I was confused by your incredible speed and transferred it to RAM-resources… :smiley:

Actually, in RealityCapture almost all algorithms are out-of-core, which means you can calculate scenes that you cannot fit into memory.

Algorithm-wise -

  1. Image (laser scan) registration - 3000x 24Mpx images (scans) / 16GB RAM. It scales linearly, so you need twice as much memory (32GB RAM) for 6000 images. It depends on the total feature count, of course. These numbers are for 40K features/image. I can register 3000x images from a 4Mpx drone on 8GB RAM. When using components, you can reduce memory usage to 10%. (A rough back-of-the-envelope estimate is sketched right after this list.)

  2. Model computation – out-of-core - 8-16GB RAM – unlimited detail, unlimited count of images/laser scans.

  3. Coloring/Texturing - next version is out-of-core, i.e., 8-16GB RAM – unlimited detail, unlimited count of images/laser scans.
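To make the scaling in point 1 concrete, here is a tiny helper (Python, purely illustrative; it only extrapolates linearly from the reference numbers quoted above and is not a guarantee):

```python
# Rough registration-RAM estimate, extrapolating linearly from the reference
# point above: 3000 images at ~40K features each ~ 16 GB. Illustrative only.
REF_IMAGES = 3000
REF_FEATURES_PER_IMAGE = 40_000
REF_RAM_GB = 16.0

def estimate_registration_ram_gb(num_images, features_per_image=40_000,
                                 use_components=False):
    total_features = num_images * features_per_image
    ref_total_features = REF_IMAGES * REF_FEATURES_PER_IMAGE
    ram_gb = REF_RAM_GB * total_features / ref_total_features
    if use_components:
        ram_gb *= 0.10  # "you can reduce memory usage to 10%"
    return ram_gb

print(estimate_registration_ram_gb(6000))                       # ~32 GB, as above
print(estimate_registration_ram_gb(3000, use_components=True))  # ~1.6 GB
```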

We will add a topic on this to the FAQ. Thanks for pointing it out.

I’d be interested in testing out distributed computing.

I seem to be taking a bit longer. I’m getting days to weeks of processing.

Granted, I am trying to make a city model. I’ve got 1600 36MP photos and another 400 at 24MP, spread between 3 cameras from a heli shoot.

800 photos from one camera seem to take 1.5 to 2 days on Normal, but adding more seems to throw it out. I’m getting stuck without enough RAM.

What’s the better use for an SSD cache: the RC temp folder or the pagefile?

It seems to crash if I run out of pagefile.

Hi Chris

What is the PC that you are reconstructing on?
CPU + RAM + GPU?

What’s the better use for an SSD cache: the RC temp folder or the pagefile?
For the temp (cache) location it is best to use the biggest, fastest drive; SSDs are preferred…

Hi,

I just tested reconstructing a stadium with more than 1200 pictures from a 5D/7D.
The computer was at 80% after more than 1 day and… it failed with an “out of memory” error.
But I didn’t understand where the out of memory came from, because:

  • only 30% of the RAM was used
  • I had 1.8 TB free on the external HDD where the cache is

:frowning:

It looks like I need the distributed rendering, because I don’t want to restart a computation of more than 1 day. :frowning:

I’m running this on a 5930K, 64GB RAM, K4200, but I have a few more render nodes of similar specs.

Does the distributed processing split up the job so that it uses less RAM, or does each node need to fit the whole thing?

Hi Chris

I’m running this on a 5930K, 64GB RAM, K4200, but I have a few more render nodes of similar specs.
You can at least speed up the reconstruction process significantly with a more powerful GPU like a GTX 970, as it is quite a lot of images for your card.

For the meshing issues, I have sent you instructions, so let’s see…

As for the distributed version, as Martin says it is only for PREMIUM licenses, so it will not be available in a “trial” version.
And yes, it can process by parts, so the job can be split very efficiently.
But in short, with a better GPU you would see a better speedup on the same hardware. So you can imagine that with 1x 970 + 1x 980 you could reconstruct a model from 9000 (4500x80 Mpix) images in 5 days…
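Just to illustrate the “processing by parts” idea (this is only a sketch of the general approach, not how RealityCapture necessarily splits the work internally): a large scene like a city can be tiled into sub-regions, each small enough to be reconstructed as an independent job on a farm node and merged afterwards. A minimal Python sketch with hypothetical names:

```python
from dataclasses import dataclass
from typing import List

# Sketch of splitting a reconstruction region's footprint into tiles, each of
# which could be meshed as a separate farm job and merged later. Illustrative
# only - not RealityCapture's actual distributed mechanism.
@dataclass
class Box:
    xmin: float
    ymin: float
    xmax: float
    ymax: float

def tile_region(region: Box, nx: int, ny: int, overlap: float = 0.0) -> List[Box]:
    """Split the region into nx * ny tiles, with a small overlap to avoid seams."""
    w = (region.xmax - region.xmin) / nx
    h = (region.ymax - region.ymin) / ny
    tiles = []
    for i in range(nx):
        for j in range(ny):
            tiles.append(Box(
                region.xmin + i * w - overlap,
                region.ymin + j * h - overlap,
                region.xmin + (i + 1) * w + overlap,
                region.ymin + (j + 1) * h + overlap,
            ))
    return tiles

# e.g. a 1 km x 1 km city footprint split into 16 independent chunks
jobs = tile_region(Box(0, 0, 1000, 1000), nx=4, ny=4, overlap=5.0)
print(len(jobs))  # 16
```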

Building a new machine dedicated to this may make sense, but I will have to wait and see on pricing, since we already have a render farm. And especially if I can get each chunk to fit into less than 32GB, that would bring a lot more PCs into it.

Also, with the latest version I’ve had a few CUDA issues, running out of the 4GB of GPU memory while doing the texturing with the visibility-based method. After I changed it to the photo-consistency-based one, I stopped getting errors.

But up until that point I hadn’t seen more than about 1GB being used on the GPU, which would have meant 970s would be fine to get. But if we start needing more GPU RAM, then we would have to move to Titans.

Anyway, it has been very impressive.