Using GPU power for CPU tasks!?

People nowadays have graphics cards in the teraflops range; that's the computing power only supercomputers had 20 years ago.
So I wonder why that computing power is not used for tasks that still run only on the "slow" CPU, like the light build.
For 3ds Max there is a CUDA implementation. Why do we have to wait for hours or longer for a light build when it could be done much faster using the GPU's computing power instead, or in addition?


GPU and CPU architectures differ greatly.
One thing that is used heavily throughout tasks like lighting builds is floating-point arithmetic.
While x86 CPUs can do float operations with up to 80-bit (extended) precision, GPUs use 32 bits for single and 64 bits for double precision.
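The gap is easy to see even on the CPU, by rounding a 64-bit Python float through the 32-bit format that GPUs commonly use for single precision. A minimal sketch using only the standard library:

```python
import struct

def to_float32(x: float) -> float:
    """Round a 64-bit Python float to the nearest 32-bit float and back."""
    return struct.unpack('f', struct.pack('f', x))[0]

# In 64-bit precision, adding 1e-10 to 1.0 is still visible...
print(1.0 + 1e-10 > 1.0)              # True
# ...but in 32-bit precision the increment is rounded away entirely.
print(to_float32(1.0 + 1e-10) > 1.0)  # False
```

An 80-bit x87 result preserves even smaller increments, so a light build written against 80- or 64-bit math cannot simply be moved onto 32-bit GPU units without re-checking the accumulated error.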

A bit more insight into how GPUs handle floats is described here:

Apart from that, GPUs use many simple stream processors that have limited individual capabilities but can handle a lot of data in parallel, all doing the same task.
CPUs can only handle a few instruction streams in parallel, but can perform complex and different operations on them.

So any sufficiently complex program, or one that cannot be sufficiently parallelized, will not perform well on a GPU.
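Amdahl's law puts a number on that. A quick sketch (the 10% serial fraction below is an invented illustration, not a measured figure for any light build):

```python
def amdahl_speedup(serial_fraction: float, workers: int) -> float:
    """Overall speedup when only the parallel part scales with worker count."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

# Even with 1920 parallel units (a GTX 1070's core count), a job that is
# 10% inherently serial speeds up by less than 10x overall.
print(round(amdahl_speedup(0.10, 1920), 2))  # 9.95
```

The serial part dominates no matter how many stream processors you throw at the rest.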

GPU rendering isn’t necessarily faster, and it has the limitation of GPU memory, which is a big problem for games, where you have to load a huge amount of content into memory.

Well, I am sorry to both of you, but neither of those is a convincing argument:

a) floating-point precision and non-parallelized programs

I am sure you can have the necessary floating-point operations in e.g. CUDA and get 1000x better performance compared to a CPU, even if the program is serial and not optimised for parallel execution. So that would be a huge gain even if it's not optimised for the GPU.

b) memory
Nowadays systems have half or a quarter as much memory on the graphics card as in RAM. For example, I have 16 GB of RAM and a cheap 8 GB GTX 1070. I would love to have my smaller GPU memory take on the tasks that 8 GB of CPU memory could do, but with a performance gain of… what… 1000x?

In the end it all boils down to the teraflops, and my GTX 1070 has 6 of them. What do CPUs have?
I know that I am not going into details and optimisation here, but if you do a rough comparison… I am sure my claim is right.

Nope, they don't.

You can't view it that way. You get those teraflops only with very specific operations.
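Those headline teraflops come from a simple peak-rate product, cores × clock × FLOPs per cycle, and they assume every unit completes a fused multiply-add every cycle. Running the same product for a typical CPU (the clocks and vector widths below are approximate public figures, used only for a rough comparison) shows the gap is large, but nowhere near 1000x:

```python
def peak_gflops(cores: int, clock_ghz: float, flops_per_cycle: int) -> float:
    """Theoretical single-precision peak: every unit busy every cycle."""
    return cores * clock_ghz * flops_per_cycle

# GTX 1070: 1920 CUDA cores, ~1.7 GHz boost, 2 FLOPs/cycle (one FMA).
gpu = peak_gflops(1920, 1.7, 2)   # ~6528 GFLOPS, the advertised ~6.5 TFLOPS
# Quad-core CPU at 3.5 GHz with AVX2: 2 FMA units x 8 floats x 2 = 32 FLOPs/cycle.
cpu = peak_gflops(4, 3.5, 32)     # ~448 GFLOPS
print(round(gpu / cpu, 1))        # 14.6
```

And that ratio is the best case: it only holds for code that keeps all 1920 units busy with identical, independent operations.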

But it's a different type of RAM. They don't put the same DDR RAM on graphics cards that you stick on your mainboard; cards use GDDR, which is optimised for bandwidth over latency.

Where do you get that number from?

On what grounds?

Again, there is a fundamental difference between CPUs and GPUs.
Some single instructions on a CPU require multiple instructions on a GPU.
And parallelization is the only place a GPU's magic comes from.
And not all processes can be parallelized. You can't make a baby in one month by getting nine women pregnant. It just doesn't work that way. :smiley:
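Here is the nine-women problem in code: when every step needs the previous step's result, extra workers are useless. A toy sketch (the update rule is arbitrary, chosen only to create a dependency chain):

```python
# Parallelizable: every element is independent, a perfect GPU-style "map".
squares = [v * v for v in range(8)]

def serial_chain(x0: float, steps: int) -> float:
    """Each iteration consumes the previous result, so the steps must run
    strictly in order no matter how many processors are available."""
    x = x0
    for _ in range(steps):
        x = (x * x + 1.0) % 97.0
    return x

print(serial_chain(1.0, 4))  # 95.0, reachable only by doing steps 1-3 first
```

The map can be split across 1920 cores; the chain cannot be split at all.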

For the very same reasons, Tesla and FirePro cards are not that well suited to gaming.

But don't take my word for it:

A GPU is a completely different architecture; it does not work the same way a CPU does. You can't even compare different types of CPUs directly: for example, a 3 GHz AMD CPU can be slower than a 3 GHz Intel CPU.
And 8 GB is not much compared to how much memory you can easily use to build lighting as it is, and most people don't have an 8 GB card. Remember, it would have to load the entire scene into GPU memory: all geometry and textures for the entire level.
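That memory point is easy to sanity-check with uncompressed texture sizes alone (a rough sketch; the texture count below is invented purely for illustration):

```python
def texture_mb(width: int, height: int, bytes_per_pixel: int = 4,
               mips: bool = True) -> float:
    """Uncompressed RGBA size in MB; a full mip chain adds about a third."""
    base = width * height * bytes_per_pixel
    return (base * 4 / 3 if mips else base) / (1024 ** 2)

print(round(texture_mb(4096, 4096)))  # 85 -> one 4K texture is ~85 MB
# A hypothetical level with 200 2K textures, before any geometry or lightmaps:
print(round(200 * texture_mb(2048, 2048) / 1024, 1))  # 4.2 (GB)
```

Compression helps in practice, but the point stands: a whole level's data competes with an 8 GB card long before you get to 1000x anything.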