Choosing a GPU

The P5000 is a bit different from a GTX 1080, mostly in how the drivers and firmware split up the hardware.
I think part of your confusion is thinking of the GPU as something you connect to a monitor and draw pictures with.
You can do that, but what RC and other programs are doing is actually using it as a little super fast compute appliance.
Dump in data and a program, hit go, and it dumps out results! The work runs on the CUDA cores.
Generally a CUDA program running on one GPU can’t access the data/memory of any other GPU in the system. Generally.
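To make the “compute appliance” idea concrete, here’s a toy CUDA sketch (nothing to do with RC’s actual code, just the general pattern): copy data onto the card, launch a kernel, copy the results back. Note the buffer lives in that one GPU’s memory only.

```cpp
// Toy sketch of the "compute appliance" pattern (not RC's real code):
// copy data in, launch a kernel on the CUDA cores, copy results back out.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;   // each thread handles one element
}

int main() {
    const int n = 1 << 20;
    float *host = new float[n];
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float *dev;
    cudaMalloc(&dev, n * sizeof(float));                              // memory on THIS GPU only
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice); // dump in data
    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);                    // hit go
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost); // dump out results
    cudaFree(dev);

    printf("first result: %f\n", host[0]);
    delete[] host;
    return 0;
}
```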
SLI (originally 3dfx’s Scan-Line Interleave, later NVIDIA’s Scalable Link Interface) was about loading the scene into several GPUs and splitting up the rendering work. Still no data sharing, just both doing pretty much the same thing with the same data (the scene).
For CUDA processing, you just have to think about the cores (how many, what speed) and how much memory they have for the data and programs that feed them. If you have a fast and a slow card in the system, they will just do parts of the job at whatever rate they can handle. If you have a new and an old card, it may be trickier, since each might support different CUDA feature levels.
(NVIDIA calls this “compute capability”, and the Pascal chips like the P5000 and GTX 1080 are compute capability 6.1 - see https://developer.nvidia.com/cuda-gpus for comparisons.)
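If you want to see what’s actually in your box, the CUDA runtime can report each card’s compute capability and memory. A minimal query sketch using the standard device-property calls:

```cpp
// List each GPU with its compute capability, memory, and multiprocessor count.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("GPU %d: %s, compute capability %d.%d, %zu MB, %d SMs\n",
               d, prop.name, prop.major, prop.minor,
               prop.totalGlobalMem / (1024 * 1024), prop.multiProcessorCount);
    }
    return 0;
}
```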
So no, having a second GPU doesn’t act like a single larger pool of memory, but you can run multiple instances of the accelerated code, each with more space to play in.
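Roughly what that looks like in code (again, just the general CUDA pattern, not what RC does internally): every allocation lands on whichever card is currently selected, so a multi-GPU job is really several independent jobs running side by side.

```cpp
// Sketch of why two cards don't pool into one big memory space:
// each allocation belongs to the GPU selected with cudaSetDevice().
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int d = 0; d < count; ++d) {
        cudaSetDevice(d);                  // select one GPU
        float *buf;
        size_t bytes = 256 * 1024 * 1024;  // 256 MB on this card only
        if (cudaMalloc(&buf, bytes) == cudaSuccess) {
            printf("GPU %d: allocated its own separate 256 MB buffer\n", d);
            cudaFree(buf);
        }
    }
    return 0;
}
```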
As for CPU cores - that comes down to compromises. More cores at a given clock rate mean more heat and more expensive chips. So you can drop cores and go fast, or add cores and go slow, or add more fast cores and go hot, or add a second CPU - and then go back and think about how fast, how hot, and how expensive you want to go. There is always someone willing to sell you faster :smiley:
Same goes for things like the disks - more disks, faster disks, more faster disks, trading off speed, size, connections, reliability, and cost.

As I mentioned before - best to check all the steps in the process and see where you could get the largest benefit for the available investment. GPUs got a huge boost driven by the gaming community, and now again by the machine-learning crowd, so you can get a LOT of compute per dollar. Disks got a great boost with NVMe drives… but a lot of systems can’t realize the full potential of those drives yet (bottlenecked by PCH bandwidth). x86 CPUs unfortunately haven’t changed much, which is why the serious processing is being done on the GPUs now.
Hope that helps…
Jen
Speed queen… :smiley: