We are unable to render our project with MRG (Movie Render Graph) path tracing in Unreal 5.6 on GPUs with 24 GB of VRAM, while rendering in 5.5 works fine. We have had some success rendering on RTX 5090 cards, which have 32 GB of VRAM.
Has anything changed with memory consumption in the path tracer? Our scenes, even with grooms disabled, are using much more memory. In one case there is an 800 MB difference in the same scene rendered in 5.6 vs 5.5. I have been unable to run this test with grooms enabled, as the 5.5 scene is already over 22 GB with grooms and fails to render more than 1 frame in 5.6 on my local machine. I’m trying to test on a 5090 to get better numbers.
Hi! Just a passerby here who also works with 4090/5090 cards using the path tracer. Are you using Extended GPU Timeout Detection (TdrDelay) in the Windows registry by any chance?
Exceeding VRAM is not optimal, of course, but across many Unreal versions, rendering primarily with the path tracer, we have only had rare cases of hard GPU/engine crashes when exceeding VRAM. Usually rendering just slows down considerably when TdrDelay is set high, and we have not encountered any crashes during our heavy-duty testing of MRG in 5.6 yet (the holdout alpha output is totally busted in the release version, though).
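For anyone who hasn't touched it before: TdrDelay is a Windows registry value (under the GraphicsDrivers key, documented by Microsoft as part of the TDR registry keys) that extends how long Windows waits before resetting a hung GPU. A rough sketch of setting it from an elevated command prompt; the 60-second value is just an illustration, not a recommendation, and a reboot is required for it to take effect:

```bat
:: Extend the GPU Timeout Detection and Recovery (TDR) delay to 60 seconds.
:: Run from an elevated (Administrator) command prompt, then reboot.
reg add "HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers" /v TdrDelay /t REG_DWORD /d 60 /f
:: TdrDdiDelay controls how long threads may stay inside the driver during reset.
reg add "HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers" /v TdrDdiDelay /t REG_DWORD /d 60 /f
```

Keep in mind this is a machine-wide setting: a very high delay means a genuinely hung GPU will freeze the display for that long instead of recovering quickly.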
But I have to say that since Unreal 5.5 we have actually seen somewhat increased instability and more crashes while using r.RayTracing.Nanite.Mode 1. Those crashes usually correlated with ray tracing errors showing up in the logs.
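If you want to rule that CVar out, one option is to pin it explicitly so both engine versions render with the same setting. A minimal sketch, assuming the standard engine config layout (the choice of mode 0 here is just for the A/B test, not a recommendation):

```ini
; Engine/Config/ConsoleVariables.ini
[Startup]
; Force the Nanite ray tracing fallback (mode 0) to test whether
; r.RayTracing.Nanite.Mode=1 is the source of the instability.
r.RayTracing.Nanite.Mode=0
```

You can also toggle it at runtime from the Editor console with `r.RayTracing.Nanite.Mode 0` to compare memory and stability within one session.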
If you’re also seeing higher memory usage in the Editor, you can open the Render Resource Viewer tool in the Editor to see where the memory is going, but keep in mind that all of that data is based on Editor settings, including the Editor viewport sizes.
For some reason the attached Insights traces didn’t contain the memory insights, just the default performance (CPU/GPU) data. Can you confirm you’re launching the Editor with -trace=default,memory? That should help us determine where the memory is going.
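For reference, a sketch of what that launch line might look like; the project and output paths here are illustrative, not the reporter's actual paths:

```bat
:: Launch the Editor with both the default and memory trace channels enabled,
:: writing the trace to a file that can then be opened in Unreal Insights.
UnrealEditor.exe "C:\Projects\MyProject\MyProject.uproject" -trace=default,memory -tracefile="C:\Traces\MyProject_memory.utrace"
```

If -tracefile is omitted, the trace is written to the default Profiling/Traces location (or streamed to a live Unreal Insights session, depending on configuration).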
BTW, it just occurred to me that the AWS G6e nodes we use that are crashing have 48 GB of VRAM, so I’m not sure this is an out-of-memory issue.
Can you attach the log files from an instance where the render process crashes? That should give us more information about the kind of crash.