
UE4 Rendering FAQ - Features

Mar 30, 2021

We’ve watched the video on Destructible HLODs from GDC 2019. Fortnite has evolved a lot since then. Does that approach still hold up? Any new learnings since then?

  • No real changes to destructible HLODs since GDC 2019.
  • Only merged HLODs are destructible in Fortnite (HLOD0), not proxy HLODs (HLOD1).
  • We add vertex colors to the merged HLOD meshes, use those colors to assign object IDs to vertices, and in the material we read a visibility buffer to toggle each primitive on or off (see the runtime sketch below).
  • Geometry swaps work the same way: hide the original piece, then update the visibility buffer. If everything is static, you could then just show the swap pieces and allow the HLOD to keep showing the geometry (merged HLOD, not proxy).
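
The runtime half of that setup can be quite small. Below is a minimal sketch, assuming object IDs were baked into the merged HLOD’s vertex colors at build time and the HLOD material masks pixels by sampling a 1D visibility texture; the “VisBuffer” parameter name and the 256-object limit are illustrative, not engine conventions.

```cpp
#include "Engine/Texture2D.h"
#include "Materials/MaterialInstanceDynamic.h"

static constexpr int32 MaxHLODObjects = 256; // one texel per merged source primitive

// Create the visibility buffer: a 256x1 grayscale texture, one texel per object ID.
UTexture2D* CreateVisBuffer()
{
    UTexture2D* Tex = UTexture2D::CreateTransient(MaxHLODObjects, 1, PF_G8);
    Tex->SRGB = false;
    Tex->Filter = TF_Nearest; // point sampling so neighbouring object IDs never blend
    Tex->UpdateResource();
    return Tex;
}

// Flip a single object's visibility and re-upload the (tiny) texture.
void SetObjectVisible(UTexture2D* VisBuffer, int32 ObjectId, bool bVisible)
{
    FTexture2DMipMap& Mip = VisBuffer->PlatformData->Mips[0];
    uint8* Data = static_cast<uint8*>(Mip.BulkData.Lock(LOCK_READ_WRITE));
    Data[ObjectId] = bVisible ? 255 : 0;
    Mip.BulkData.Unlock();
    VisBuffer->UpdateResource();
}

// Bind the texture to the HLOD's material instance so the opacity mask can read it, e.g.:
//   MID->SetTextureParameterValue(TEXT("VisBuffer"), VisBuffer);
// In the material, sample at UV = ((object ID from vertex color) + 0.5) / 256.
```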

What is the recommended practice to integrate HLODs into an active development workflow where assets are changing often?

  • 4.25 onwards has optimizations for HLOD building that can make it 10x faster, depending on content.
  • There are known issues with some clustering options; a single cluster per level is extremely fast. We can provide some changes that didn’t make it into 4.25.
  • The recommended practice is to have a process that rebuilds invalidated HLODs: gather the list of changed HLODs, then split the workload across multiple machines (see the sketch after this list).
    • Reference script
    • What happens when a level is checked out?
      • The data is stored in the level itself, so updates are skipped for that specific level.
      • We are in the process of trying to move HLOD data out of the level to reduce this cost.
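
One way to structure the distributed rebuild is to have each machine take every N-th entry from the list of invalidated maps. The editor-side sketch below assumes FHierarchicalLODBuilder (the UnrealEd helper used for HLOD generation in 4.2x) and the editor loading/saving utilities; verify the exact headers and signatures against your engine version, and treat the map-list/machine-index plumbing as hypothetical glue for your own build scripts.

```cpp
#if WITH_EDITOR
#include "FileHelpers.h"     // UEditorLoadingAndSavingUtils
#include "HierarchicalLOD.h" // FHierarchicalLODBuilder (editor-only, UnrealEd module)

// Rebuild HLODs for this machine's share of the changed maps (round-robin split).
void RebuildAssignedHLODs(const TArray<FString>& ChangedMaps, int32 MachineIndex, int32 MachineCount)
{
    for (int32 Index = 0; Index < ChangedMaps.Num(); ++Index)
    {
        if (Index % MachineCount != MachineIndex)
        {
            continue; // another machine handles this map
        }

        UWorld* World = UEditorLoadingAndSavingUtils::LoadMap(ChangedMaps[Index]);
        if (!World)
        {
            continue;
        }

        FHierarchicalLODBuilder Builder(World);
        Builder.Build(); // regenerate the clustered/merged HLOD data for this level

        // Save the map and any generated HLOD packages so the wrapping build script
        // can submit them back to source control.
        UEditorLoadingAndSavingUtils::SaveDirtyPackages(/*bSaveMapPackages=*/true,
                                                        /*bSaveContentPackages=*/true);
    }
}
#endif
```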

Due to our dynamic lighting setup, we are currently relying on CSMs for shadowing in order to accommodate destructible architecture. Once the CSMs fade out, we lack a shadow mechanism for the mid-ground and background. We also lack any AO solution other than a post-process material, which has many drawbacks. Are there any potential mechanisms that can help us in terms of shadows and AO? Any suggested solutions we could implement ourselves if no other options currently exist?

  • ES3.1 is our current minimum. We haven’t been able to support distance field shadows on mobile so far because we need a compute shader to merge the distance fields. We want to try it, but we don’t have any performance data yet, and there is also a memory tradeoff.
    • We would likely reduce the update frequency and quality to get it running on mobile.
    • Right now we don’t have a timeline for adding it, but in Fortnite the tech artists used some tricks with fake shaders for shadows.
  • AO techniques you could implement yourself:
    • Bring the distance-based techniques over from other platforms.
    • Fake it.
  • The shaders walk through the same data for both (distance field shadows and AO), which is why neither is implemented yet.
  • Is it an option to enable SSAO? (It has some performance cost.)
    • We implemented it locally, but it has not yet been submitted; still in progress.
    • Performance was acceptable, but it does have side effects: it is rendered after the base pass, so the effect is not very accurate.
    • We cannot do temporal filtering since we don’t keep previous-frame data.
    • Implemented in 4.26.

We’d like the same colors that appear in the texture viewer for a video source to show up in the 3D scene without any alteration. We are aware that lighting and tone mapping alter the colors, so we tried using an Unlit material and the Composure plugin, but we didn’t get the desired result. Any tips for making this work?

  • As for color, we don’t have a mechanism to have a tonemapper applied to a subset of the 3D scene. The way we work around that elsewhere is with Composure, which allows you to have separate layers for the media content and the 3D CG content. The tonemapper can be enabled or disabled per layer, which would allow video to have no tonemapper, while CG content would have the usual UE4 tone curve. This is the path we leverage, but if you’re having trouble making it work for your needs, let us know.

We are using “real-world” values for sun/sky brightness, which leads to physically correct surface luminance and the expected camera exposure values. These real-world values have a massive numerical range and seem to break some lighting components. Are there any non-obvious settings we should change when using these lights?

  • Pre-exposure should be working and should fix these precision issues. A few bugs were fixed in 4.24 (for example, scene color nodes weren’t using pre-exposure) and some smaller fixes will be in 4.25 (skin shading specular). Pre-exposure should work consistently, but if it doesn’t, please report the bugs on UDN and we’ll take a look at them.

Currently building the reflection environment is editor only. What are the issues with doing this at runtime?

  • The ability to do it at runtime was removed to support SM4, but it can be re-added if you don’t care about SM4.

What other static lighting system can we use for objects that are placed procedurally and then don’t move again?

  • Lightmass is the only static lighting system, and it requires an editor-time bake of a level. That said, if your procedural placement happens at cook time, you could probably hook a system together to generate and bake the lighting without manual intervention (see the editor-side sketch below).
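
As a starting point for that kind of hookup, the sketch below drives the same code path as the editor’s Build Lighting action after your procedural placement has run. FEditorBuildUtils, FBuildOptions::BuildLighting, and FEditorFileUtils are UnrealEd-module names recalled from memory, so verify them against your engine version; this is a sketch, not a drop-in solution.

```cpp
#if WITH_EDITOR
#include "EditorBuildUtils.h" // FEditorBuildUtils, FBuildOptions (UnrealEd module)
#include "FileHelpers.h"      // FEditorFileUtils

// Kick off a static lighting bake for a level whose actors were just placed procedurally.
// This follows the same path as the editor's Build > Build Lighting Only action. The
// Lightmass build runs asynchronously, so an automated pipeline should poll the editor's
// lighting-build status and only call SaveBakedLighting() once the bake has finished.
void StartLightingBake(UWorld* World)
{
    FEditorBuildUtils::EditorBuild(World, FBuildOptions::BuildLighting);
}

// Persist the map and the generated build-data packages after the bake completes.
void SaveBakedLighting()
{
    FEditorFileUtils::SaveDirtyPackages(/*bPromptUserToSave=*/false,
                                        /*bSaveMapPackages=*/true,
                                        /*bSaveContentPackages=*/true);
}
#endif
```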

Our dynamically lit environments look good when there is a direct light on a surface, but our interiors are very flat when relying on indirect lighting. Is there a dynamic equivalent to volumetric lightmaps we should be using? Should we place “fake” dynamic lights indoors? Are there good content examples we could look at for dynamic indirect lighting for interiors?

  • UE4 only has a few possibilities for dynamic indirect lighting. Distance Field AO can help if a dynamic skylight is a main contributor to interior lighting (not usually the case). Light Propagation Volumes can also be used, but they suffer from light bleeding and, again, are best used outdoors. 4.24 added Screen Space GI, which can add a nice effect (see the snippet below for trying it out). Otherwise you’re probably looking at a tech-art solution with some kind of faked light, if you have well-controlled environments.
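
For quick A/B comparisons, Screen Space GI can be toggled from code via the r.SSGI.* console variables introduced with the feature in 4.24 (the same values can be set in an ini or device profile instead). A minimal sketch:

```cpp
#include "HAL/IConsoleManager.h"

// Enable 4.24's Screen Space GI at runtime for evaluation.
void EnableScreenSpaceGI()
{
    if (IConsoleVariable* Enable = IConsoleManager::Get().FindConsoleVariable(TEXT("r.SSGI.Enable")))
    {
        Enable->Set(1);
    }
    if (IConsoleVariable* Quality = IConsoleManager::Get().FindConsoleVariable(TEXT("r.SSGI.Quality")))
    {
        Quality->Set(2); // higher settings trace more rays per pixel at higher cost
    }
}
```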

We are using LPV for our dynamic global illumination. It works ok in most situations, but is not a complete feature, and our understanding is that further work is not in the roadmap. Are there improvements we should try to make ourselves?

  • It is correct that we are not doing further work on LPVs. We never had an internal roadmap for developing them, so we don’t have a list of improvements to try on hand, but you could likely make improvements by optimizing for your specific use case.

Is the new screen space GI in 4.24 a replacement for LPV? Is there a better dynamic GI solution we should be using?

  • SSGI is more of a supplement to LPV than a replacement, but you should definitely try it out. There is no better existing ‘dynamic’ GI solution that you should be using.

Does the engine provide a solution for multi-pass dithering, to achieve soft edges?

  • Such a system is not explicitly provided. There are dithering functions in the material graph that work fairly well when combined with TAA to fill in the holes and provide a kind of transparent look. It may or may not work for this use case. Depending on your performance requirements, you could probably write a multi-pass system like this in Composure.

Does Unreal have per-pixel displacement?

  • We have per-vertex displacement (‘World Position Offset’) and per-pixel depth modification (‘Pixel Depth Offset’), which is not the same as physical displacement.

Does Unreal have mesh subdivision in the modeling tools?

  • As of 4.25, this functionality does not exist in UE, and there are no current plans to implement this.

How do you enable static rendering path for skeletal meshes?

  • In the Skeletal Mesh Component details, under the “Optimization” category, there is a checkbox for “Render Static”. This sets the flag that marks the mesh for the static rendering path (it can also be set from code, as in the snippet below).
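
A minimal sketch of setting the same flag from C++, assuming you toggle it on a component you already own (the helper function name is illustrative):

```cpp
#include "Components/SkeletalMeshComponent.h"

// Switch a skeletal mesh component over to the static rendering path.
void EnableStaticRenderPath(USkeletalMeshComponent* Comp)
{
    Comp->bRenderStatic = true;   // the property behind the "Render Static" checkbox
    Comp->MarkRenderStateDirty(); // recreate the render state so the change takes effect
}
```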

Why are “Exposure/metering mode: manual” and “auto_exposure_histogram” light values so different?

Do the light lumen values mean anything compared to real world values in auto_exposure_histogram?

In auto_exposure_histogram do aperture, ISO and speed have any effect on the lighting or is it just to adjust the depth of field?

  • Aperture, ISO, and shutter speed have no effect on exposure as of UE 4.25; only depth of field is affected by aperture.

What’s the roadmap for ray tracing support?

  • As of 4.24, PC ray tracing is ready for production: as stable as possible, efficient, and feature complete.
  • We are working on adding ray tracing support for next-gen hardware, but this likely won’t be production ready until UE5 releases.

What are the plans around ray traced global illumination?

In your experience with ray-tracing, and outside teams implementing ray-tracing into their UE4 games, have you seen a method/procedure/formula emerge for taking a game from standard deferred rasterized rendering to ray-tracing?

  • There is not a single method we have seen emerge. Depending on your level of expertise, you can follow different paths.