GPU lightmass, out of memory... hardware advice please.

Hello,

I’ve been using Unreal lately, transitioning from offline rendering to expand my skills.

I have a question about memory management while using GPU Lightmass: since yesterday I’ve been getting out-of-memory errors when trying to compile lighting for a scene (lighting blocking, no textures).

Granted, I haven’t made any hand-made lightmap UVs yet. For the sake of iteration speed (I’m still in the modeling/set-dressing stage) I’ve been exporting Datasmith geo, bringing it into Unreal, and setting the auto lightmap min/max resolution (512/2048). At the moment I’ve been focusing on getting the lighting working with automatically generated lightmaps on the geo.

We’re talking about a triplex apartment interior: walls and floors, although without authored UVs, are detached properly (so no ultra-big pieces). Overall it’s not what I would consider a lot of geo, and I have a “decent” workstation: EVGA RTX 2080 8GB (no Ti), Threadripper 2950X, 64GB of RAM, etc.

I’d also like to mention I’m using baked lighting: a skylight with a 4K HDRI, no directional light, some rect lights, lightmass portals, a lightmass importance volume (fitted tight to the mesh to optimize calculations), and some spotlights. No textures.

I am using ray tracing, though, for reflections/shadows, and the compile issue started when I turned up the ray-traced refraction bounces, after noticing that glass was rendering dark when two glass doors were one in front of the other (maybe someone has a tip on this?).
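For reference, the knob I was raising maps to a console variable; the ini sketch below uses the name as I remember it from 4.23/4.24-era UE4, so treat it as an assumption and check your engine version:

```
[SystemSettings]
; Assumed cvar name (4.23/4.24-era UE4): number of refraction rays for
; ray-traced translucency; more rays keep stacked glass panes from
; running out of bounces and shading black.
r.RayTracing.Translucency.MaxRefractionRays 5
```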

So, all this being said, I’m thinking that because I’m calculating lightmass on the GPU I’m limited to the 8GB of memory on the video card, instead of the full 64GB of system memory with the CPU? I will of course optimize the scene when the time comes for UVs, materials, etc., but I’m surprised to get an out-of-memory message all of a sudden (everything else on the computer is closed, only Unreal running).

Any advice would be appreciated. I switched to GPU Lightmass for speed’s sake, but the memory bottleneck is starting to become an issue; not sure if a Quadro RTX 5000 would be worth it.

cheers

If you’re using ray tracing then you don’t need to use lightmaps; ray tracing is completely dynamic, so it’s rendering the image every frame.

If you’re using GPU Lightmass (where it’s baking lightmaps and the bake is accelerated by your GPU), then it’s limited by your GPU memory (8GB), and it has to load all of the geometry and textures (including the lightmaps it’s going to render), so it’s very easy to use up the GPU memory that way.
If you don’t use GPU Lightmass then it can render with the CPU and system memory; since you have 64GB, it should work fine that way.

Thanks for the answer, Darth viper. I know ray tracing calculates dynamically, but is it possible to have a hybrid scenario: mixing baked lightmaps (static skylight, spotlights) with ray-traced reflections and ray-traced shadows?

I’m not sure I want to go fully ray-traced GI yet; I need the GI quality to be consistently high, and to optimize a bit for potential VR.

Fair point about the memory on the card, that’s what I was assuming… Hmm, at this point I’m not sure it’s worth keeping GPU Lightmass with that memory limitation. It kind of sucks, especially knowing I have a decent Threadripper and plenty of RAM.

You can bake shadows and use ray-traced reflections. I’m not sure you can use baked indirect lighting and ray-traced direct lighting, though.
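Roughly, that hybrid setup is just leaving ray-traced GI off so the baked lightmaps keep providing the indirect lighting, while forcing ray-traced reflections on. A minimal ini sketch, assuming 4.24-era cvar names and the usual -1/0/1 force semantics, so verify against your engine version:

```
[SystemSettings]
; Assumed 4.24-era cvars - double-check the names in your build.
; 0 = ray-traced GI off, so indirect lighting comes from the baked lightmaps.
r.RayTracing.GlobalIllumination 0
; 1 = force ray-traced reflections on (-1 would defer to the post-process volume).
r.RayTracing.Reflections 1
```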

Thanks. I’m trying to find a performance balance and avoid fully ray-traced GI for the time being by baking the static skylight, given that the final output is aiming for VR with Oculus. I am indeed using ray-traced reflections, and I try to cap them at 0.2–0.3 roughness, then switch to screen-space reflections for the rougher ones. As for baking shadows, do you mean being able to bake “ray-traced” shadows?
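For reference, the cap I mean is this console variable (the bounce count is just the value I’ve settled on for now):

```
[SystemSettings]
; Pixels rougher than this fall back from ray-traced reflections to the
; cheaper screen-space path - the 0.2-0.3 cap described above.
r.RayTracing.Reflections.MaxRoughness 0.3
; One reflection bounce is usually enough here and helps performance.
r.RayTracing.Reflections.MaxBounces 1
```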

Hmm, I’ve been baking the skylight (static) + spotlights (static) + rect lights (static), with ray-traced reflections enabled and ray-traced GI disabled. Still doing tests and trying to find a good balance of optimization/quality.

I just read through a new post regarding RT refractions and reflections in the Rendering forum. Check it and see if there’s a hint that answers the problem with the darkening of double-glass / thick-glass surfaces. It goes into brief detail about how to get realistic-looking results, but it might give you ideas on how to get rid of the darkening issue.

https://forums.unrealengine.com/development-discussion/rendering/1709750-ue4-24-implement-fix-thick-glass-refraction-and-double-reflection-in-ray-tracing

@preston42382 thanks for that link!! Will take a look, cheers.

I don’t have the exact info right now, but I had a similar problem. I had to make a shortcut to the editor and edit the launch commands inside that shortcut so that it loaded the project and built the lights without actually opening the Unreal editor (as the editor uses up a lot of resources just showing you the image).

Try searching the big GPU lightmass thread for this.
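From memory it was the ResavePackages commandlet with the lighting flags; the sketch below is my best recollection, and the paths, engine version, and map name are placeholders, so adjust them to your install:

```
REM Hedged sketch: bake lighting from the command line without the editor UI.
REM Paths, engine version, and map name are placeholders.
"C:\Program Files\Epic Games\UE_4.24\Engine\Binaries\Win64\UE4Editor-Cmd.exe" ^
  "C:\Projects\MyProject\MyProject.uproject" ^
  -run=resavepackages -buildlighting -quality=Production ^
  -allowcommandletrendering -map=MyLevel
```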

https://www.youtube.com/watch?v=ml06krbbaAA

On that point, I was watching this presentation the other day; if I’m not wrong, it seems like they are doing exactly that in some of their examples.

I’m looking for great quality in real time with decent fps. Going fully ray-traced GI, although it would be the ideal scenario, doesn’t seem like the best quality/speed/performance combination for a real-time experience, especially for VR (Oculus, etc.), which is my goal at the moment with my tests. Some examples I’ve seen using ray-traced GI still have noise and don’t look 100% clean unless you crank things up a lot, which doesn’t help performance. If I were aiming to do just a cinematic or some high-end product viz, maybe I would consider full GI; it’s definitely cool to see and worth testing, but I haven’t played much with it, only ray-traced reflections/shadows.
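For anyone curious, these are the kind of knobs behind “cranking things up”; the names are assumed from 4.24-era UE4, so verify before relying on them:

```
[SystemSettings]
; Assumed 4.24-era cvars: more samples/bounces = cleaner RTGI, lower fps.
r.RayTracing.GlobalIllumination.SamplesPerPixel 4
r.RayTracing.GlobalIllumination.MaxBounces 2
```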

Hey Jamie, thanks, I’ve looked through the thread a bit; it is quite loooong! But lots of good information. I actually did a test yesterday and reimported the Datasmith scene, though this time I changed the lightmap generation (still automatic for the time being) to min 512, max 512. Then, once imported, I raised it to 1K or 2K depending on the geo: higher on close-up walls (everything is detached, so only the front, camera-facing faces get the higher-res lightmaps), and all the smaller or back walls down to 64. Doing this I saw a dramatic change in GPU Lightmass calculation speed: from being stuck for two days at 65% to calculating with production settings, 40 indirect bounces, lighting quality 10, in around an hour. Not bad in comparison!

Looking at performance, I’m hitting around 5.5 to 6.7GB of memory on the video card (RTX 2080, 8GB), meaning that once I start doing lookdev and textures it will probably hit the memory limit. At that point I’m hoping to truly optimize all the geometry and plan the lightmap assignment better (for now it’s all automatic on import, as mentioned).
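In case anyone wants to watch the same numbers from inside the editor, the stat overlays below show RHI memory counters and GPU pass timings respectively; exactly which memory counters appear varies by engine version, so I cross-check with GPU-Z / Task Manager:

```
stat RHI
stat GPU
```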

Which makes me think I will need a better video card soon. The RTX 2080 Ti seems like the standard around here; I should’ve bought that one, to be honest, but I didn’t know about GPU Lightmass at the time, and 8GB didn’t seem too bad given that I have 64GB of system RAM.

I’ll consider the Titan RTX… although expensive, 24GB seems decent. Not so sure about the RTX Quadros yet; I’ve heard they are not as fast at calculating GPU lightmass, though maybe that isn’t accurate (in the video posted above they do use a Quadro RTX).

If you’re doing VR I wouldn’t do ray tracing at all.

Thanks for letting me know. I haven’t done any ray tracing for VR yet but was thinking of doing so. Have you tried it? I’m assuming it will be very demanding, plus the client will need an RTX system to run the content with ray-traced reflections in VR.

In the same presentation above, at around 15:25, they show a switch to change from ray-traced to screen-space reflections. I’m thinking it could come in handy to build a Blueprint that toggles between those scenarios depending on whether the output is a touch monitor or VR, for performance’s sake.
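Under the hood the toggle could just fire an Execute Console Command node with something like the lines below; the cvar and its -1/0/1 semantics are assumed from 4.24-era UE4, so double-check:

```
r.RayTracing.Reflections 0
r.RayTracing.Reflections 1
```

0 would fall back to screen-space reflections for the VR case, 1 would force ray-traced reflections for the touch monitor, and -1 (the default) defers to whatever the post-process volume says.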

I’m not sure that ray tracing is supported in VR at all, and if it were, it would require a lot of power. And if you’re building something for a client, they’d need to have the powerful hardware to run it as well.

Thank you, that’s “good to know”; I thought ray tracing was supported in VR. Yes, it is sometimes a problem to get that kind of equipment on the client side. Probably best to keep ray tracing for cinematics, high-end stills, and touch monitors, and keep VR on screen space. Although ray tracing in VR would be awesome to see!

So, a bit late to this discussion, but I’m struggling with the same question. I need to build two new workstations for Unreal soon, and I get the difference between GeForce, which is meant for gaming, and Titan, which is meant for production (more stable). I’ve even seen der8auer waterblock a Titan RTX :). But this point you’re touching on here is quite interesting, because as far as I know, VR glasses or mixed-reality devices like the HoloLens 2 are just that: a device, like a monitor, and thus don’t have to support ray tracing themselves. The support for ray tracing has to be in the rig you hook the devices up to, not in the device itself. Or am I missing something here?

Titan RTX is the top GeForce card; you might be thinking of the Quadro cards, which are workstation cards and aren’t optimized for gaming.

Most VR headsets are wired to a computer and essentially act as a monitor. But for VR you need high resolution and you need high framerate, and even the best cards that support ray-tracing can struggle to maintain 30fps.

HoloLens is not like that at all; it runs off built-in mobile hardware and is designed to be completely portable. The Oculus Quest is the only VR headset that’s designed to be completely portable too; it runs off built-in Android hardware (though you can now hook it up to a PC using USB-C and run it like a tethered headset for better quality).

And again, if you’re building something for a client, then they need to be able to run the software. So unless you have some control or input over what hardware the client is using, you can’t give them something that requires really expensive graphics processing.