Cloud computing support (distributed rendering)?

Hello,

Reality Capture is certainly one of the fastest photogrammetry tools out there, but I have the option of using cloud computing - so I could dramatically increase the number of GPUs available.

Is this supported by Reality Capture?

This is something my company would be really interested in.

Breravin,

I researched this and right now it’s not worth it.

First thing: RC's limit is 32 CPU cores and 3 GPUs - workable for cloud computing, but it won't take advantage of the monster instances either. You can contact them about multiple licenses to raise that if you're keen; I don't know how common that is, though, and I think part of the restriction is that the gains past those numbers are probably marginal (judging by competing products and multi-CPU/GPU benchmarks).

I did a 32+ CPU and 3+ GPU test on AWS after RC went to Steam (since the license is now more portable) - it ran well! But not as well as my 6700K and single GTX 980 Ti at the time. That's because virtual CPUs and *especially* virtual GPUs don't deliver the same gains as a non-virtualized machine (and virtualization is essentially what the cloud is built on).

You can get services where the GPUs are bound directly to the instance, but the returns are, again, less than desired. CUDA core count is one of the things we look at, and with that 3-GPU limit, cloud GPUs will always trail the CUDA count of the cards underneath your desk.

This brings up a question that was discussed a year ago - does CR have any plans to release benchmarks? Or a series of images that the community can benchmark against? If I end up buying another rig for RC rendering down the line, that data would come in very handy. Puget Systems did a full write-up for Agisoft PhotoScan; I think something half as detailed for RC would help users and clearly underline the speed of the product.

Thanks for the reply - good info. It would be really interesting to see that limit increased.

I can kind of see the diminishing returns that happen with GPUs.

But the CPU limit should probably disappear.

Already you can't run low/mid Xeons like the E5-2630 v4 without disabling Hyper-Threading (a dual-socket 2630 v4 box is 20 cores / 40 threads, well over the 32 cap), and this is only going to get much worse.

I also think the CPU meshing part could do with distributed processing too.

chris wrote:

I can kind of see the diminishing returns that happen with GPUs.

But the CPU limit should probably disappear.

Already you can't run low/mid Xeons like the E5-2630 v4 without disabling Hyper-Threading (a dual-socket 2630 v4 box is 20 cores / 40 threads, well over the 32 cap), and this is only going to get much worse.

I also think the CPU meshing part could do with distributed processing too.

I'm no developer and photogrammetry is still magic to me, but I wonder, in my layman's understanding, how difficult it would be for CR to let RC become a render farm. I know it's a lot more complicated than this, but it's what I've been curious about lately: RC splits models into components, and the larger the model, the more components there are. I wonder how hard it would be to have one computer attack components 1-10 and another attack 11-20 (knowing there'll be some processing overlap and the image sets need to be identical on both, etc.), then join the results on the primary render machine when done - something like the toy sketch below.
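Purely to make that idea concrete, here's a toy sketch of the split-and-merge flow. Nothing in it is a real RealityCapture API - the worker names, the meshing stub, and the merge step are all invented for illustration:

```
# Toy sketch of the split-and-merge idea above. None of this is a real
# RealityCapture API; hosts, the meshing stub, and the merge are invented.
from concurrent.futures import ThreadPoolExecutor

WORKERS = ["render-01", "render-02"]      # hypothetical worker machines
COMPONENTS = list(range(1, 21))           # components 1-20 of the model

def chunks(items, n):
    """Split items into n roughly equal slices."""
    size = -(-len(items) // n)  # ceiling division
    return [items[i:i + size] for i in range(0, len(items), size)]

def mesh_on(worker, component_ids):
    # Stand-in for pushing the (identical) image set plus these
    # component IDs to `worker` and meshing them there.
    print(f"{worker}: meshing components {component_ids}")
    return {c: f"component_{c:02d}.obj" for c in component_ids}

with ThreadPoolExecutor(max_workers=len(WORKERS)) as pool:
    partial_meshes = pool.map(mesh_on, WORKERS, chunks(COMPONENTS, len(WORKERS)))

# Join everything back on the primary machine - the genuinely hard part,
# given the processing overlap mentioned above.
merged = {}
for part in partial_meshes:
    merged.update(part)
print(f"primary machine collected {len(merged)} component meshes")
```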

I picked RC over the competitors for speed - it makes possible a whole set of projects that the competition ruled out due to total render time. Still, if I get business, I want the fastest machine I can throw at it, but I'd much rather have 2 x $5k computers than 1 x $10k computer…

This would be a nice feature but would be crazy expensive.

I use both RC and the competition to see which one gives better results for what I need. I mostly model several large buildings on a large site. I had to stop RC after a few days because it appeared to be frozen, making no progress. I am trying again with a much smaller region to see if it can handle that.

At the same time, I have the competition processing a dense cloud using 20 nodes on a university HPC (several with K20 cards). It is progressing very quickly. I have 20 floating licenses for the competition, and each HPC node is equivalent to two or three of my desktops.

For those who have access to high performance computing nodes, using a single desktop to do all the work is the old way of doing things. I’m now looking into using Blender on the HPC nodes to render model animations for the same reason.

Has anyone tried running the GPU part locally and the CPU part on AWS (or similar)?

I'm mostly looking for a high amount of RAM: 256 GB to 1 TB+.

Looks like you can get that on AWS with 64-128 cores, but that's a bit of a waste.

Does anyone have suggestions other than AWS?

Hi all
RC is prepared for network processing and so on, but it is not ready for deployment yet, as one missing part is not finished. It is a hotly requested feature. We have no ETA for this right now, but it is sure to be there in 2017…

Wishgranter wrote:

Hi all

RC is prepared for network processing and so on, but it is not ready for deployment yet, as one missing part is not finished. It is a hotly requested feature. We have no ETA for this right now, but it is sure to be there in 2017…

Woot!

chris wrote:

Has anyone tried running the GPU part locally and the CPU part on AWS (or similar)?

I'm mostly looking for a high amount of RAM: 256 GB to 1 TB+.

Looks like you can get that on AWS with 64-128 cores, but that's a bit of a waste.

Does anyone have suggestions other than AWS?

Well, remember RC only scales to 32 cores and 3 GPUs; I think the networking bit is the most exciting part for scalability and performance potential.

I’ve got RC running on this in AWS:
g2.8xlarge: 4 GPUs, 32 vCPUs, 60 GiB RAM, 2 x 120 GB SSD

It ran well, but not faster than my machine at the time (4 cores at 4 GHz [6700K], 32 GB of RAM, GTX 980 Ti). I think AWS binds those GPUs to the instance, but they are older cards, and as Wishgranter noted a while ago, you lose a lot of GPU and CPU performance in the cloud generally, especially on AWS.

I helped design and manage some small Kepler-accelerated GPU cloud networks for VDI, and that jibes with the experiments I ran on those systems.

Between this sort of stuff and VR, the systems are SO taxed that I think you'd have to build your own render farm. There are parts of a revenue model I'm looking into that would render for other entities, but I'd be wrapping that service with 3D modeling and other workflow commitments way before I'd consider opening it up to someone who just wanted access to a render farm. Margins are really, really small in cloud computing, and generally the one with the most investment or capital (AWS) wins out. Right now there isn't enough demand for this sort of system.

I do have something I've been thinking about, however… NVIDIA will be launching a new game-streaming service - if I read it right, it's less their old "rent a game and you can play it" model and more "upload your Steam game to a dedicated machine that you rent per hour". Right now a machine with a 1080 (CPU and RAM aren't listed yet) looks like it's going to be USD $2.50-$3.00 an hour. Again, it wouldn't beat the machine under my desk, but it might pull me out of a fire in a pinch or two…
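Rough break-even math on that price, for what it's worth - the hourly rate is from the post above, but the workstation cost is a made-up comparison figure:

```
# Back-of-envelope: hours of GPU rental before buying a box wins.
# $2.75/hr is the midpoint of the quoted $2.50-$3.00; the $5,000
# workstation price is a hypothetical comparison, not a quote.
hourly_rate = 2.75
workstation_cost = 5000.0
print(f"break-even at ~{workstation_cost / hourly_rate:.0f} rental hours")
# -> roughly 1,800 hours, i.e. close to a year of full-time use
```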

Thanks for that info on AWS.

And network processing sounds good, as I already have a render farm for 3D rendering. But I'm limited to 128 GB of RAM locally, so I'm looking into AWS for bigger jobs.

Network processing that splits the job into smaller chunks needing less RAM would be ideal, though.

Sorry to zombie this thread, but is the general consensus still that a local machine is better than AWS?

We have an i7 in the office with a GeForce 1070 and it keeps throwing memory errors and such. We’re trying to use AWS with a Tesla. Is it a waste of time?

@Scott Beeson, was it a waste of time? :smiley:

Any updates regarding the distributed computing feature? I heard it was supposed to be available around this time.


Yes. It was.

It is possible to integrate RC into a render manager.

I've had it working with Backburner in the past, and it would probably work better with Deadline.

It's not the most practical thing to do, and I'm not doing it at the moment, as each node needs a CLI license.

It requires a fair bit of scripting to break the jobs up and submit them (see the sketch below), then more work at the end to bring them back together.

But it does help the processing run with fewer PC resources and makes dealing with crashes a bit easier.

It's only worth the effort if the job is going to take more than a few days to process.

It would be much better to have this all built in, even just for the CPU part of the processing.
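To make the render-manager approach a bit more concrete, here's a minimal sketch of chunked headless submission. The flag names follow RC's documented CLI as best I recall (-headless, -load, -setReconstructionRegion, -calculateNormalModel, -save, -quit) - verify them against the CLI help before relying on this. The project path, the .rcbox chunk files, and the serial loop are placeholders; a real setup would hand each command to Backburner/Deadline as a separate task, and every node still needs its own CLI license:

```
# Minimal sketch: one headless RC run per reconstruction-region chunk.
# Flag names are from memory of RC's CLI docs - check before use; the
# project path and .rcbox chunk files are hypothetical.
import subprocess

RC = r"C:\Program Files\Capturing Reality\RealityCapture\RealityCapture.exe"

def rc_command(project, region_file, out_suffix):
    """Build a headless RC invocation that meshes one region chunk."""
    return [
        RC, "-headless",
        "-load", project,                         # pre-aligned shared project
        "-setReconstructionRegion", region_file,  # this node's chunk
        "-calculateNormalModel",
        "-save", project.replace(".rcproj", f"_{out_suffix}.rcproj"),
        "-quit",
    ]

# Serial loop just to show the command shape; a farm manager would run
# these as parallel tasks on separate nodes.
for i, region in enumerate(["chunk01.rcbox", "chunk02.rcbox"], start=1):
    subprocess.run(rc_command("site.rcproj", region, f"part{i}"), check=True)
```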