Hardware required

So this:
“RealityCapture runs on 64-bit machines with at least 8GB of RAM, 64-bit Windows 7 / 8 / 8.1 / 10, and a graphics card with an NVIDIA CUDA 2.0+ GPU and at least 1GB of RAM.*”

is a very light specification compared to e.g. Bentley ContextCapture.

Can it be true that:
“Can you make a complete 3D model from 3,000 photos in less than an hour?
Yes, you can! With our software on a computer you already have.”

“Super-fast draft mode which aligns 500x12 megapixel images in less than 10 minutes on a notebook, right in the field”

“Simply take pictures, even 10,000 and press a button, it will not take days, just a few hours on a single $1,000 computer”

How come? If so, that's massively unique.

Hi Tom,

what exactly is your question? :lol:
Yes, it is much faster than its competitors, and yes, its hardware demands are much lower (there are limits, though).

OK, Q1 - how would my machine do? It is fine for 5MB 3D CAD building models - W7-64 on an i3-2120 3.3GHz, 4GB RAM and a Quadro FX580.

I understand that I can’t even get started with the FX580, but I can easily put in a used CUDA 2.1 or 3.0 card from eBay - anything from a £25 96-core 1GB Quadro 600 to a £304 1536-core 4GB K5000.

So how high should I go with the graphics card, and will the rest of the machine keep up?

Q2 - what is RealityCapture’s speed secret, compared to ‘all the others’?

For example, a suggestion that quality is sacrificed for speed, from
https://forum.sketchfab.com/t/new-fast- … re/5042/30

“AFAIK RC’s speed is partly due to optimising a lot of the processing by restricting the range of some of the algorithms. These have been tuned to optimal datasets, so the photography is very important. The order of images in a cave can play a role. For some “linear” scenes that I’ve done I found that reordering the images so that they were more in order of location helped reduce the number of components.”
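That reordering tip is easy to approximate. As a purely illustrative sketch (my own, not from the quoted post): if the shoot was one continuous walk, capture time is a rough proxy for location, so renaming the files into timestamp order hands the aligner images roughly in order of position. It assumes the Pillow package is installed and the JPEGs carry EXIF DateTimeOriginal tags; folder name and renaming scheme are hypothetical:

```python
# Rough sketch: rename JPEGs into capture-time order so that, for a
# continuous walk, file order roughly follows camera position.
# Assumes Pillow (pip install Pillow) and EXIF DateTimeOriginal tags.
import os
from PIL import Image

SRC = "photos"  # hypothetical folder of survey images

def capture_time(path):
    exif = Image.open(path)._getexif() or {}
    return exif.get(36867, "")  # EXIF tag 36867 = DateTimeOriginal

names = [n for n in os.listdir(SRC) if n.lower().endswith((".jpg", ".jpeg"))]
names.sort(key=lambda n: capture_time(os.path.join(SRC, n)))

for i, n in enumerate(names):
    os.rename(os.path.join(SRC, n),
              os.path.join(SRC, f"{i:04d}_{n}"))  # prefix keeps names unique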

Q3 - an invitation to comment on:
My interest, as a small-projects architect, is to DIY my own digital surveys of existing buildings, interiors and surrounding land; to place my 3D building model into a photorealistic ‘landscape’ within a .dwg environment - AutoCAD/Bricscad; to examine views out from the building and views into the building; and to 3D-model the building’s horizon (nearby buildings, trees, the far horizon), which affects how much winter solar power hits the building.

All that for €99 per 3 months sounds good, without a massive hardware investment on top.

Oooph, long questions, even longer answers? :lol:

There is enough said about hardware here in the forum - no need to go into it again.
So just some general remarks (based mostly on my gut feeling).
4GB is not too great; you might think about upgrading.
Most people seem to prefer GeForce over Quadro - you get much more power for the buck.

No idea.
I guess there are a lot of software developers trying to figure that out, too! :slight_smile:
One significant difference is that you have no manual influence on the alignment process, or rather, you cannot optimize it.
In most cases that is OK, but in others it would be a benefit.
I have found that there are so many different factors involved (camera, object, photographer, to name but a few) that there is no general answer as to which software is better. You just have to try out what suits you best.
In most cases I personally get better results with RC, also because its hardware requirements are much lower - with other software I would never be able to do anything as complex as I can with RC. Only a few cases have made it necessary to use other software.

Wow, that seems like a lot of effort for your purpose - I am a trained architect myself.
It does take quite a bit of commitment to get really good results, but that is true for most software.
In contrast to the marketing, it isn’t just about taking a few images and - bling - you have a beautiful, flawless model.
You need to learn to navigate the difficulties and, most importantly, how to take proper images.
Believe me, I have been working with RC on and off for 1.5 years and I am not even close to having a routine… :shock:

Thank you, Götz, for a brilliant answer. I will search the forum for the hardware truth.

Unfortunately the .dwg format (AutoCAD/Bricscad) needs a full-OpenGL Quadro, whereas Bentley MicroStation uses DirectX and so is more OK with GeForce. Sometimes I wonder if a simple desktop machine can have dual (alternative) cards.

It all seems like a lot of effort as a DIY accessory to practicing architecture - but as an architect yourself, do you do something similar, or have you moved on to some other use of photogrammetry?

I do have the thought to also (or mainly) provide the service to other architects, as an ‘insider’ who really knows what might be needed, useful or potentially revolutionary.

For example, many are still on 2D, so as well as offering 2D CAD survey drawings, I could also provide a photorealistic underlay to the same - sections, elevations, roof plan - and a near-scene background to ‘as proposed’ elevation drawings.

So Q4 - after loading into AutoCAD, maybe via ReCap, I hope a scan can be viewed orthographically (without perspective), and sectioned away on a section line.

Hey Tom,

no problem! :slight_smile:

I also build quite extensive 3D models in an ACAD clone and I am happy with my GTX.
But you might have special uses for it…

Two cards should not be a problem, since you can tell RC which one to use (at least for the reconstruction process).
But I think you will find enough info about that here in the forum.
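For completeness, one general way to steer any CUDA application to a particular card - this is a CUDA-level mechanism, not anything from RC's documentation - is the CUDA_VISIBLE_DEVICES environment variable. A minimal sketch, with a hypothetical executable path:

```python
# General CUDA mechanism, not an RC-documented setting: a CUDA app only
# sees the devices listed in CUDA_VISIBLE_DEVICES.
import os
import subprocess

env = os.environ.copy()
env["CUDA_VISIBLE_DEVICES"] = "1"  # expose only the second GPU (device 1)

# Hypothetical install path - adjust to your machine.
subprocess.run([r"C:\Program Files\RealityCapture\RealityCapture.exe"], env=env)
```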

Yes, I think it is too much to do it just once or twice per year and still expect professional results.
As I said, I am also still learning.
But if you consider doing it as, say, one third of your projects or so, then I think it would be manageable.
I would recommend at least a setup like mine though, and even mine is still quite basic.
For about 1000 Euros or the equivalent you should get something slightly better by now.
An ideal machine can easily go up to 5 or 10 grand - I am dreaming of something like that at night… :lol:

Hi Gotz

It’s fascinating, a game changer, that you say
Götz Echtenacher wrote:

Two cards should not be a problem, since you can tell RC which one to use (at least for the reconstruction process).

when everywhere online I see nothing but doubts and little clear method suggested, except e.g.
https://forum.solidworks.com/thread/171915
http://www.tomshardware.co.uk/forum/309 … hics-cards

You say
Götz Echtenacher wrote:

I think you will find enough info about that here in the forum.

but I can find nothing.

You have been very helpful with advice - can you suggest a few more sources?

Hi Tom Foster

RC uses GPUs for calculating depth maps and the like, and it needs CUDA 2.0, so there is no need for specialised OpenGL extensions etc. (which are always enabled anyway). The “gaming” GPUs are clocked much higher than the Quadro cards (a speed difference of about 20+%) with the same GPU chip. Quadro and gaming GPUs are absolutely the same silicon, just with different naming, and Quadros have some additional functionality enabled. So from a hardware perspective it is the same hardware.

Many people use a dual or triple GPU setup to get the best price-performance ratio; a 4th GPU gives at most a 10-15% speedup, as there is a PCIe latency issue.
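If you want to see what CUDA actually reports before buying or swapping cards, here is a quick sketch (assuming the pycuda package is installed alongside an NVIDIA driver) that lists each visible device with its compute capability - RC's stated minimum being 2.0:

```python
# List visible CUDA devices with compute capability and memory.
# Assumes pycuda is installed (pip install pycuda).
import pycuda.driver as cuda

cuda.init()
for i in range(cuda.Device.count()):
    dev = cuda.Device(i)
    major, minor = dev.compute_capability()
    mem_mb = dev.total_memory() // (1024 * 1024)
    print(f"GPU {i}: {dev.name()}, compute {major}.{minor}, {mem_mb} MB")
```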

Hi Tom,

well, the quotes you give are about different software.
It depends on the application whether and how it can use different GPUs.
But since I have only one GPU myself, I cannot give you watertight advice on that.
I only remember reading about it a couple of times and that there is an option for it.
With Photoscan for example, you can define the use of GPUs very precisely and also deactivate CPU-Cores accordingly.
Also Wishgranter told me once that the cards don’t even need to be able to communicate by SLI or anything like that.
Maybe he will drop by eventually and confirm or correct me.
Hehe, I just noticed he did it already while I was still typing… :lol:
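On the PhotoScan point above: as far as I know, its Python console exposes exactly those switches - a GPU bitmask and a CPU toggle (names are from the PhotoScan 1.x scripting API; verify against your version's reference):

```python
# Run inside PhotoScan's Python console (1.x scripting API, as far as
# I know - check your version's documentation).
import PhotoScan

PhotoScan.app.gpu_mask = 0b11     # bitmask: bit 0 = first GPU, bit 1 = second
PhotoScan.app.cpu_enable = False  # free the CPU cores during GPU stages
```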

One example from a different post:
2x GeForce GTX 1080