Does Path Tracer in Unreal Engine 5.3 use both CPU and GPU?
To my knowledge the Path Tracer only uses the GPU. If that is correct, will it be able to use both the CPU and GPU in the future?
The PT depends on hardware-accelerated ray tracing, so there is no CPU path and no plans for one. Now that essentially all GPU hardware of the past three generations, plus the consoles, supports hardware RT, there isn't really any incentive to support CPU solutions.
ehh… yeah… there's no point implementing cpu path tracing. it's just too slow.
the cpu is used for creating the structures the gpu needs to do its work, but it'll never keep up with actually rendering. end of topic.
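to illustrate what "creating the structures for the gpu" means: the classic CPU-side job is building an acceleration structure (a BVH) that the GPU then traverses when tracing rays. this is a minimal, illustrative sketch in python — not Unreal code, and all names here are made up for the example:

```python
# Toy median-split BVH builder: the kind of CPU-side setup work that
# produces a structure a GPU ray tracer would later traverse.
from dataclasses import dataclass, field


@dataclass
class AABB:
    lo: tuple  # (x, y, z) min corner
    hi: tuple  # (x, y, z) max corner


def union(a: AABB, b: AABB) -> AABB:
    """Smallest box enclosing both a and b."""
    return AABB(tuple(min(p, q) for p, q in zip(a.lo, b.lo)),
                tuple(max(p, q) for p, q in zip(a.hi, b.hi)))


@dataclass
class BVHNode:
    bounds: AABB
    prim_ids: list = field(default_factory=list)  # filled only for leaves
    left: "BVHNode" = None
    right: "BVHNode" = None


def build_bvh(prims, max_leaf=2):
    """prims: list of (prim_id, AABB). Median split on the longest axis."""
    bounds = prims[0][1]
    for _, box in prims[1:]:
        bounds = union(bounds, box)
    if len(prims) <= max_leaf:
        return BVHNode(bounds, prim_ids=[pid for pid, _ in prims])
    extent = [h - l for l, h in zip(bounds.lo, bounds.hi)]
    axis = extent.index(max(extent))
    prims = sorted(prims, key=lambda p: (p[1].lo[axis] + p[1].hi[axis]) / 2)
    mid = len(prims) // 2
    return BVHNode(bounds,
                   left=build_bvh(prims[:mid], max_leaf),
                   right=build_bvh(prims[mid:], max_leaf))
```

in practice engines hand this kind of work to the driver/API (DXR, Vulkan RT), but the point stands: the cpu prepares and updates structures, the gpu does the actual tracing.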
I understand it's too slow, but would it help when there is not enough memory on the GPU to path trace a scene?
Also, if both are used, wouldn't that help speed things up? Even if the speed-up is not as profound as adding another GPU.
Also, I saw in a public roadmap that there would be a hybrid path tracer using both CPU and GPU.
To build off of that, recent GitHub updates point towards more and more RT/PT behavior being offloaded onto the GPU, and the architecture is moving towards removing CPU bottlenecks as much as possible.
Sort of; in a technical sense, that would be useful for certain very intensive cases. But between the massive PT payload optimizations that have happened recently, Nanite cluster streaming for the PT, World Partition behavior, and just general efficiency gains, the cases where you couldn't optimize/stream a scene to fit into a reasonable amount of VRAM are vanishingly small. Perhaps if you had tons of volumetric effects like fluid sims, skinned geometry, and massive vistas — but even then, you could easily optimize your scene to cut that down.
That's a somewhat complex question to answer. I'm no graphics programmer, but I could see that posing an incredible challenge in a real-time architecture like Unreal. Since you'd still have to get the correct buffers accumulated on the GPU to display to the screen, you'd likely have to architect separate GPU and CPU tracing kernels, have the CPU constantly stream GPUScene data into its memory to trace against while also performing its normal BVH construction and other work, and efficiently move the ray payload data from CPU to GPU and back several times as the tracing itself is happening — plus other hurdles I am probably forgetting.
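To make the cost of those cross-device round trips concrete, here's a toy back-of-the-envelope model (purely illustrative — the timings are invented, not measured, and none of this is Unreal code). The point is that even if the devices trace in parallel, the payload copies happen every bounce and add up:

```python
# Toy per-frame cost model for a hypothetical hybrid CPU/GPU tracer.
# Assumption: CPU and GPU trace their ray shares concurrently, but the
# payload must cross the PCIe bus (upload + readback) on every bounce.

def hybrid_frame_ms(bounces, cpu_trace_ms=4.0, gpu_trace_ms=0.5,
                    transfer_ms_per_bounce=0.5):
    trace = max(cpu_trace_ms, gpu_trace_ms) * bounces  # devices overlap
    transfers = 2 * transfer_ms_per_bounce * bounces   # copy there and back
    return trace + transfers


def gpu_only_frame_ms(bounces, gpu_trace_ms=0.5):
    return gpu_trace_ms * bounces  # no cross-device copies at all
```

With made-up but directionally plausible numbers like these, the hybrid path loses badly: the slower device sets the pace, and the transfers are pure overhead the GPU-only path never pays.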
I know Blender did this a while ago (hybrid CPU/GPU rendering), but given how fundamentally different the architectures are between an offline (ish) renderer and a real-time game engine with high-end rendering abilities, I’m just not sure it would be worth it.
That said, if they did decide it was worth it, I would love to see a link to that roadmap post, as I am unable to find it in their current Productboard.
well… in real-time rendering the cpu is busy enough handling assets and game code, and the speed gain in rendering would be minimal anyway. you'd also have to do load balancing, resource duplication, tile management and all that stuff. the cpu would render a tile of maybe 128x128 while the gpu does the rest of the 1920x1080 frame, so they both deliver the frame in time.
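that 128x128-out-of-1080p split falls straight out of the throughput ratio. here's a tiny sketch of the static load balancing idea (illustrative only — real tile schedulers are dynamic, and the throughput numbers are assumptions, not benchmarks):

```python
# Toy static load balancer: give each device a share of the frame's
# pixels proportional to its (assumed) ray throughput, so both finish
# at roughly the same time.

def split_frame(width, height, cpu_rays_per_ms, gpu_rays_per_ms):
    """Return (cpu_pixels, gpu_pixels) sized by relative throughput."""
    total = width * height
    cpu_share = cpu_rays_per_ms / (cpu_rays_per_ms + gpu_rays_per_ms)
    cpu_pixels = int(total * cpu_share)
    return cpu_pixels, total - cpu_pixels
```

if the gpu is assumed to be roughly 126x faster, the cpu's fair share of a 1920x1080 frame comes out to about 16k pixels — i.e. roughly one 128x128 tile, which matches the estimate above.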
in offline rendering, time and sync don't matter, but the cpu is still too slow, and you have the same management to do. for no real gain.
blender cycles (for example) allows hybrid rendering — whether cpu+gpu, igpu+dgpu, or even cpu+igpu+dgpu. in all cases where the rendering is split across devices it's slower per device, because it has to synchronize all the resources and you need tile management.