Need guidance regarding lighting for large engineering models

Hi, I work for an engineering company, and I am testing out some of our models in UE4.18, with the intent to use them for VR demonstrations with SteamVR. The models are large - I’m talking LNG plants, mining infrastructure, etc. I have been researching and testing various lighting, shadowing and rendering options, but it is taking a long time, and I am hoping for some advice to shortcut the process. Most of what I read is suited to small-scale models or interior levels, and I’m not sure how applicable that information is to what are effectively large open-world levels.

So the models have thousands of rather simple static meshes, which import correctly, with correctly generated UV maps. Some larger parts need the lightmap resolution increased to get decent shadows. However, this isn’t working for the terrain, which is exported from CAD programs as huge static meshes. For these, I imagine I need dynamic shadows. Whilst researching how to use them, I found a UE4 stream that suggested large open worlds don’t use baked lighting at all, because all the individual lightmaps take up excessive graphics memory. So I tried disabling static lighting in the project settings, but then I don’t get any shadows at all. This may be because I’m using the Forward Shading Renderer, which I enabled to suit VR?

So it is taking a long time to try every setting, and hopefully I can get some specific advice on the best setup. Whilst I have been learning a lot, I still don’t understand a few aspects like Mesh Distance Fields, Ambient Occlusion, Distance Field Shadows, Cascaded Shadow Maps, etc., and I don’t understand how these features work together.

So here are my exact requirements:

  • VR-suitable models, preferably using the Forward Shading Renderer, as MSAA with a high screen percentage (200) does look better than the equivalent TemporalAA.
  • Thousands of static objects, which should have shadows.
  • Huge static mesh objects for terrain, which should have shadows cast upon them.
  • Single directional light to represent the real world sun.

I think there are some requirements in there that don’t mesh together too well.

First of all, having THOUSANDS of individual Meshes is going to kill your performance, as your DrawCalls will skyrocket. You absolutely have to pack large numbers of them together or you will not get the kind of performance that is essential for VR. And yes, this includes rethinking your UV Maps and Materials. You can pack them together per Material, for example. That, on the other hand, might mean that some meshes are always going to be visible and never culled. Let’s say you give all the bolts on the machine the same Material and pack those bolts into a single Mesh: there will be bolts everywhere on screen, no matter where you look, so those will never be culled and will always be rendered. You have to be smart about how you approach this.
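To make that concrete, here is a minimal sketch (my own, not from the post above) of packing repeated identical parts into a single Hierarchical Instanced Static Mesh component, so that all instances share one draw call per material section. The class name, mesh, and transform list are placeholders for whatever your CAD export produces.

```cpp
// BoltCluster.h -- minimal sketch, not production code.
// Assumption: the bolt mesh and its transforms come from your CAD export;
// both are placeholders here.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Components/HierarchicalInstancedStaticMeshComponent.h"
#include "BoltCluster.generated.h"

UCLASS()
class ABoltCluster : public AActor
{
    GENERATED_BODY()

public:
    ABoltCluster()
    {
        // One component, one mesh: every instance added below is rendered as
        // part of a single batch instead of one draw call per placed bolt,
        // and the HISM tree can still cull instances per cluster.
        BoltInstances = CreateDefaultSubobject<UHierarchicalInstancedStaticMeshComponent>(TEXT("BoltInstances"));
        RootComponent = BoltInstances;
    }

    // Feed in the transforms of every bolt exported from the CAD data.
    void AddBolts(UStaticMesh* BoltMesh, const TArray<FTransform>& BoltTransforms)
    {
        BoltInstances->SetStaticMesh(BoltMesh);
        for (const FTransform& Transform : BoltTransforms)
        {
            BoltInstances->AddInstance(Transform);
        }
    }

    UPROPERTY(VisibleAnywhere)
    UHierarchicalInstancedStaticMeshComponent* BoltInstances;
};
```

Note that instancing only pays off when the parts really are copies of the same mesh asset; for dissimilar parts that merely share a material, merging (for example with the editor’s Merge Actors tool) is the equivalent manual step.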

Depending on how high-res your shadows need to be, it’s absolutely possible to shadowmap a large mesh. In the end, it’s always about your target hardware. Again, you can split that terrain up into parts.

What most likely won’t work is 200% screen percentage, Forward Shading AND dynamic lighting all at once. For VR to work you have to cut corners and think outside the box. Most VR projects use static lighting wherever they can because it’s incredibly cheap once baked. Dynamic shadows and VR are not a good combination and should be used in small-scale cases, if at all.

  • Ambient Occlusion, just like several other screen-space effects (ScreenSpaceReflections, …), does not work in Forward Shading.
  • Distance Field Shadows look nice but are pretty expensive, as they are, again, a realtime shadowing technique.
  • Cascaded Shadowmaps are the shadowing technique that the directional light uses for its dynamic lighting. It’s a series of shadowmaps with shrinking resolution that get blended in the farther the shadow is from the camera (a rough code sketch of the relevant light settings follows this list).
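As a map of how those pieces fit together on the light itself, here is a rough sketch (my own, not from the reply above) of the relevant UDirectionalLightComponent settings. The property names are from the 4.x API and the values are placeholders; normally you would just set these in the light’s Details panel.

```cpp
// Sketch: a stationary directional "sun" where Cascaded Shadow Maps cover the
// area near the camera and baked shadows take over beyond that distance.
// Numeric values are placeholders, not recommendations.
#include "Components/DirectionalLightComponent.h"

void ConfigureSunShadows(UDirectionalLightComponent* Sun)
{
    Sun->SetMobility(EComponentMobility::Stationary);

    // Cascaded Shadow Maps: dynamic shadows out to this distance (cm), split
    // across this many cascades; beyond it, the baked stationary-light
    // shadows are used.
    Sun->DynamicShadowDistanceStationaryLight = 5000.0f; // ~50 m
    Sun->DynamicShadowCascades = 3;

    // Optional: distance field shadows past the CSM range, which requires
    // "Generate Mesh Distance Fields" to be enabled in the project settings.
    Sun->bUseRayTracedDistanceFieldShadows = true;

    Sun->MarkRenderStateDirty();
}
```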

I do similar work, and I am assuming you are targeting a demo system your company takes to trade shows, client sales meetings and whatnot, not an application clients will download and try to run on a phone.

If that is the case, you might be making this more difficult on yourself than it needs to be. Since this is an industrial application, not a game, consider whether you even need to use forward rendering. The main reason for using forward rendering is to get acceptable performance across a range of lower-end hardware, particularly phones. If you are using a high-end GPU with a big chunk of memory, like a 1070 or better, and you know that’s the target system the demo will run on, optimizations like forward rendering may not be necessary. The information you are getting, especially from livestreams by Epic, is game-centric; it assumes you are trying to make something run on a mid-range system with a couple of gigs of VRAM, or on a phone. If you have 8-12 gigs of VRAM or more available for this application, use it. Disregard best practices for game development and maximize resource usage until something breaks. Use every feature you can and only worry about optimization when it’s clear you need to.

For your lightmaps, you can start by editing your DefaultDeviceProfile.ini and increasing the default max LOD on your lightmaps and other textures to 8192. You can break your terrain into sub-objects and give them all unique lightmaps to further improve the lighting. If you need to, you can consider combining parts of the models where it makes sense to do so (objects that are close to each other and using the same material, for instance), but based on what you have described I wouldn’t start worrying about draw calls and combining static meshes right out of the gate. If performance does become an issue, I would look at ways to instance meshes with your data set before doing any other optimizations.
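If you do split the terrain into pieces, the per-component lightmap resolution override is the specific knob; a small sketch (my own, not from the reply above) is below. In practice you would normally tick Override Light Map Res in the component’s Details panel or edit the Static Mesh asset; the code just shows which properties are involved.

```cpp
// Sketch: raise the lightmap resolution on a few large terrain pieces only.
// The value passed in is a placeholder -- higher resolutions cost lightmap
// memory and bake time, so raise it only where baked shadows look blocky.
#include "Components/StaticMeshComponent.h"

void RaiseTerrainLightmapRes(const TArray<UStaticMeshComponent*>& TerrainPieces, int32 Resolution)
{
    for (UStaticMeshComponent* Piece : TerrainPieces)
    {
        Piece->bOverrideLightMapRes = true;        // use the per-component value...
        Piece->OverriddenLightMapRes = Resolution; // ...instead of the mesh asset's default
    }
    // Static lighting has to be rebuilt before the change is visible.
}
```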

I don’t know how big your scenes actually are, but you don’t have to work at whatever scale they import at. You can scale them down either before export or inside the engine if the scale is causing issues, and adjust the height of your player camera accordingly if needed. In engineering, precision is important, but if this is for client demonstrations/sales, it only has to look right, not be right :)

For lighting, again, based on the assumptions I’ve made, I would try using deferred rendering, static lighting, and lightmaps as large as the hardware will allow. This will give the best visual results.

Thanks for the assistance.

So are you referring to the overhead from all the DrawCalls? I imagine a merged mesh with multiple parts is going to take longer to process than a tiny individual mesh, but if it is per-draw-call overhead that is being saved, then I guess it is worth exploring.

How should I group these meshes? Is it better to group objects that are close together, or group similar shapes that may have a lot of clear space between them? And will the auto-generated UV maps be okay with these grouped meshes?

Shadowmap, as in baked lighting? The problem is that even with the small test model I am using now, with 1500 parts and a floor mesh that is 160x130 meters with its lightmap resolution set to 2048, the shadows on the floor still look blocky. However, I have just discovered I can use a stationary directional light, and by setting a value for Dynamic Shadow Distance Stationary Light I am getting dynamic shadows up close, which fall back to baked shadows in the distance. This looks promising.

The screen percentage doesn’t seem to make a lot of difference to performance on my test model. Unfortunately the CPU (a Xeon with only 2.2GHz cores) isn’t keeping up, and is doing 16ms frames according to SteamVR stats. The GTX 1080 seems to be doing alright though. End result: it runs solid at the 45fps that the VR hardware drops down to.

My small model is using 3 GB of VRAM with baked lighting. I guess I’ll have to try a bigger model with 10k+ parts to see how it goes. Right now I am only using simple colored materials too.