I just swapped out graphics cards and installed a 3090. I created an animation sequence for a scene I'm working on and set it up in the MRQ to render it out.
When I started rendering the sequence, just for giggles, I opened the Task Manager to see how heavily the RAM and GPU were being used, and much to my surprise my 3090 was only being utilized at about 20%. I was expecting it to be pounding the GPU for max rendering speed…
Is this normal? Or do I need to change a setting to get the max rendering performance out of my 3090?
Offline rendering can be limited by the CPU, which has to evaluate the animation and physics simulation of the scene and feed work to the graphics card, rather than by the graphics card itself. Especially when you have a fast card!
So look at the CPU load: if one or more cores are pegged, you're CPU limited, which is what tends to happen when you upgrade the GPU without upgrading the CPU.
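If you want something more granular than the default Task Manager view, here's a rough sketch (nothing Unreal-specific; it assumes psutil is installed and nvidia-smi is on your PATH) that samples per-core CPU load alongside GPU utilization while the render runs. If a core or two sit near 100% while the GPU idles, that's your bottleneck.

```python
# Sketch: sample per-core CPU load and GPU utilization while a render runs.
# Assumes psutil is installed and nvidia-smi is available (NVIDIA cards only).
import subprocess
import time

import psutil

def sample(interval_s=2.0, samples=30):
    for _ in range(samples):
        # Blocks for interval_s and returns one utilization figure per core.
        per_core = psutil.cpu_percent(interval=interval_s, percpu=True)
        try:
            gpu = subprocess.check_output(
                ["nvidia-smi", "--query-gpu=utilization.gpu",
                 "--format=csv,noheader,nounits"],
                text=True,
            ).strip()
        except (OSError, subprocess.CalledProcessError):
            gpu = "n/a"
        busiest = max(per_core)
        print(f"GPU {gpu}%  busiest core {busiest:.0f}%  all cores {per_core}")

if __name__ == "__main__":
    sample()
```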
To verify this theory, try rendering at a much higher resolution. If the CPU is the bottleneck, the load on the GPU should go up a lot while the overall rendering time stays about the same, because resolution changes the GPU load but not the CPU load.
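If you'd rather script that test render than click through the MRQ UI, something along these lines should work from the editor's Python console. This is only a sketch: the map and sequence paths are placeholders, and the class names assume the stock Movie Render Queue Python bindings (MoviePipelineQueueSubsystem, MoviePipelineOutputSetting, etc.).

```python
# Sketch: queue the same sequence at 8K through the MRQ Python API,
# purely to compare render time and GPU load against the 4K run.
import unreal

subsystem = unreal.get_editor_subsystem(unreal.MoviePipelineQueueSubsystem)
queue = subsystem.get_queue()

job = queue.allocate_new_job(unreal.MoviePipelineExecutorJob)
job.map = unreal.SoftObjectPath("/Game/Maps/MyMap.MyMap")              # placeholder
job.sequence = unreal.SoftObjectPath("/Game/Cinematics/MySeq.MySeq")   # placeholder

config = job.get_configuration()
out = config.find_or_add_setting_by_class(unreal.MoviePipelineOutputSetting)
out.output_resolution = unreal.IntPoint(7680, 4320)  # 8K instead of 3840x2160
config.find_or_add_setting_by_class(unreal.MoviePipelineImageSequenceOutput_PNG)

# Kick off the render in PIE; time it and watch the GPU load while it runs.
executor = subsystem.render_queue_with_executor(unreal.MoviePipelinePIEExecutor)
```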
It’s quite possible that the Unreal offline renderer can’t scale to more than 3 cores?
Also, is UHD the maximum size you can render at? 8K is a thing these days!
(I’m not saying you’d use the data at that size, just that it would be a good measurement comparison.)
I get errors when I try to render at anything higher than my desktop resolution (which is 4K), unless I switch to the "High Resolution" mode in the MRQ - but that comes with caveats: certain PPV settings aren't supported. So I don't generally use it.
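For what it's worth, when I do use it, enabling it from a script looks roughly like this. This is just a sketch, assuming the stock MoviePipelineHighResSetting Python binding and that a job is already in the queue; the tile and overlap values are illustrative.

```python
# Sketch: add the High Resolution (tiled) setting so a large frame is rendered
# in tiles. As noted above, some post-process volume settings don't survive
# tiling, which is why I don't generally use it.
import unreal

subsystem = unreal.get_editor_subsystem(unreal.MoviePipelineQueueSubsystem)
job = subsystem.get_queue().get_jobs()[0]  # assumes a job is already queued

config = job.get_configuration()
high_res = config.find_or_add_setting_by_class(unreal.MoviePipelineHighResSetting)
high_res.tile_count = 2        # render each frame as a 2x2 grid of tiles
high_res.overlap_ratio = 0.1   # overlap between tiles to hide seams
```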