Ask Unreal Anything: Rendering | June 15, 2022 at 1PM EDT

Currently, the only way to exclude emissive materials from Lumen is to disable it on the mesh via “Affect Dynamic Indirect Lighting”. Are there plans to allow exclusion on the material level?

1 Like

Please add my vote for full support of Lumen / Nanite in VR. If it needs to be based on new algorithms / technology… so be it. But Unreal Engine 5 without Lumen and Nanite in VR isn’t fully complete for us at the moment.

It should be a global development goal at Epic to make sure that all features inside Unreal are fully usable on all platforms it supports. VR is becoming more and more mainstream, and we were really disappointed to see that the UE5 release version still does not let us leverage the blockbuster new features that made UE5 so desirable when it was announced.

2 Likes

Lots of lights is more of a shadowing challenge than a Lumen GI challenge. Lumen handles dozens of lights pretty well with its Surface Cache, which lets us cache their lighting, but the cost of computing shadows for all of those lights will still be high. This is something we are aware of and hope to improve in the future.

Question about Clouds Cost

Hello!

Q1: Volumetric clouds seem to cost a lot to render. Are there any plans in the near future to reduce this cost? Maybe you can suggest some settings that help reduce the cost while still keeping nice-looking clouds?

Q2: Will it be possible to have volumetrics occluded by light functions?
I understand light functions are a post process now, but have you considered changing that? I think they weren’t a post process in the past.

I mostly use Unreal Engine 5 for cinematics at the moment, and I have issues with ghosting that appears in high-contrast scenes when actors/objects move. Are there any alternatives or future updates coming to UE5 to reduce ghosting, besides switching over to the Forward Shading Renderer?

Historically, UE has supported hardware tessellation for certain RHIs; however, many platforms, including consoles, never supported it. The reasoning behind the deprecation is that tessellation for any RHI requires regular maintenance, especially when optimizing for performance. In particular, on consoles we found that the performance limitations of the hardware did not justify the engineering effort of maintaining hardware tessellation and all of the rendering features that rely on it.

Sometimes it is important to decide what is, and isn’t, worth investing in.

With Nanite’s ability to handle really high-poly meshes, we saw less need for HW tessellation, considering what it is commonly used for.

To better support that path, we are currently working on ways to pre-tessellate and displace static meshes more conveniently and with higher quality.

While some of those use cases can be well covered by Nanite or virtual heightfield terrain, the power of dynamic displacement mapping is well understood. Honestly, that has been on my mind since the early days of Nanite development as something I wanted to eventually support in Nanite, although it is no easy task. Research is in progress on that now, i.e. proprietary Nanite tessellation and displacement mapping. The hope is to create something far superior to what HW tessellation can do. That research is WIP, so I can’t really say much more about it or promise anything (research is inherently risk-taking and sometimes fails), but I’ll say I am personally very excited about it and the direction it’s going.

8 Likes

We’re not planning to support lighting channels for Lumen, as it gets really expensive to track indirect lighting per channel. It’s a feature that is easy to support while you are handling lights individually, but much harder once lighting starts to bounce around the scene.

A shadow-casting directional light just to add some ambient light will be very expensive. It’s probably better to boost the Sky Light.
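As an illustration only, here is a minimal C++ sketch of that suggestion, assuming you already hold pointers to the two components. The helper name and intensity value are made up for the sketch; SetCastShadows and SetIntensity are the standard UE light component setters:

```cpp
#include "Components/DirectionalLightComponent.h"
#include "Components/SkyLightComponent.h"

// Hypothetical helper: instead of paying for a second shadow-casting
// directional light as ambient fill, disable its shadows and boost
// the Sky Light instead.
void PreferSkyLightForAmbient(UDirectionalLightComponent* FillLight,
                              USkyLightComponent* SkyLight)
{
    if (FillLight)
    {
        // The shadow maps for an extra directional light are the expensive part.
        FillLight->SetCastShadows(false);
    }
    if (SkyLight)
    {
        // Boost the Sky Light to recover the lost ambient (value is illustrative).
        SkyLight->SetIntensity(1.5f);
    }
}
```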

Hi,
It is a wide question; I’ll assume you want to focus on becoming a real-time rendering engineer. Here are some interesting things to learn and investigate yourself (probably incomplete):

  • Learn how to submit work to the GPU. OpenGL is a bit old but still good to learn (I started with OpenGL 1.2 :slight_smile: ). I’d suggest DX11 because it is also nice and simple and still has lots of modern features. After that, DX12 would be a natural next step (ray tracing, mesh shaders, etc.). If you do not want to do that, use an engine like Unreal as a starting point.
  • Learning ray tracing as you did is good! That is the way to go. PBRT is a great book! (See the minimal sketch after this list.)
  • Learn about shading: GPU Gems, the Real-Time Rendering books (materials, lighting, translucency, volumetrics).
  • Learn about post-processing (bloom, depth of field, tone mapping).
  • Learn about animation and simulation (skeletal, particles, fluids).
  • Learn about high-end tech: read papers from conferences like SIGGRAPH, I3D, Eurographics, and HPG.
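To give a flavor of the ray tracing bullet above, here is a minimal, self-contained C++ sketch of the operation every ray tracer is built on, ray-sphere intersection. All names and values are illustrative, not taken from PBRT or any engine:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

static Vec3   sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns the distance t along the ray to the nearest hit, or -1 on miss.
// Solves |o + t*d - c|^2 = r^2, which is a quadratic in t.
double IntersectSphere(Vec3 origin, Vec3 dir, Vec3 center, double radius)
{
    Vec3   oc   = sub(origin, center);
    double a    = dot(dir, dir);
    double b    = 2.0 * dot(oc, dir);
    double c    = dot(oc, oc) - radius * radius;
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return -1.0;              // ray misses the sphere entirely
    double t = (-b - std::sqrt(disc)) / (2.0 * a);
    return (t >= 0.0) ? t : -1.0;             // a hit behind the origin counts as a miss
}

int main()
{
    // Ray from the origin along +Z toward a unit sphere centered at z = 5.
    double t = IntersectSphere({0, 0, 0}, {0, 0, 1}, {0, 0, 5}, 1.0);
    std::printf("hit at t = %f\n", t);        // expected: 4.0
    return 0;
}
```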

Pick a few things you like and combine them: render an animated mesh with your renderer, or generate a lightmap with your ray tracer, etc. Try to see how you can optimize some code. Create projects and share them online for anyone to see and learn from. That also acts as an online portfolio demonstrating your interests.

There are also lots of developers on Twitter to follow and learn from.
I hope those few ideas will help you.

1 Like

By new technology if required, I mean, for example, making a parallel “Nanite VR” plugin for UE5 that would encapsulate a subset of the full Nanite implementation… That would be better in the short term than having nothing to work with…

1 Like

Mesh shaders are already used by Nanite for larger triangles on hardware that supports them. WPO support is coming to Nanite, which doesn’t conflict with the mesh shader support we already have.

If you are asking whether we plan to exploit mesh shaders outside of Nanite, I believe the answer currently is no. Instead we are going the opposite direction and trying to support everything in Nanite, bit by bit. That is a very long road, so please don’t read that as “everything can be Nanite soon,” but that is the long-term vision and where we are investing our resources.

1 Like

Awesome to hear that! I’m sure you guys will come up with something to take it further! Thanks for the update! :slight_smile:

Thanks so much for taking the time to provide this thoughtful response; I’m glad to hear at least that this is not an afterthought and that some work is going into bringing lost use-case functionality back to the engine <3

To better support that path, we are currently working on ways to pre-tessellate and displace static meshes more conveniently and with higher quality.

Research is in progress on that now, i.e. proprietary Nanite tessellation and displacement mapping.

I am particularly excited about both of these, and especially #2, since it would be a huge leap forward for us 3D artists in terms of easing the workflow between external software and UE. Have a great afternoon :smiley:

Lumen does support GI and reflections on translucency, but not translucency seen in secondary rays. You can still use emissive additive translucency to fake volumetric inscattering (fog sheets).

1 Like

Yes, although I can’t comment on when. When DirectStorage is well supported by the UE file system, Nanite and VT will be first in line to make use of it.

4 Likes

Are there any plans to integrate color and luminance histograms, similar to DaVinci Resolve’s, inside of Unreal? The current workflow involves taking screenshots in Unreal and then bringing those into Resolve or Photoshop to check the values using their histograms.

1 Like

Read the Physically Based Rendering book for an understanding of the fundamentals.
Then write your own renderer, implementing basic versions of each effect on a modern graphics API. Once you get the basics working (lighting, shadows, AO, bloom, deferred shading, PBR, foliage, terrain, etc.), build an understanding of where the GPU cost is going and try to optimize. You’ll learn a lot along the way.
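To make “basic versions of each effect” concrete, here is one illustrative way to structure such a toy renderer’s frame in C++. Every type and pass name below is an assumption made up for this sketch, not any real engine’s API; the bodies are stubs so the skeleton compiles:

```cpp
// Toy deferred-renderer frame skeleton (illustrative only).
struct Scene   {};
struct GBuffer {};  // would hold depth + normal + albedo/roughness targets
struct Texture {};

GBuffer GeometryPass(const Scene&)                   { return {}; } // rasterize the G-buffer
Texture ShadowPass(const Scene&)                     { return {}; } // render shadow maps
Texture LightingPass(const GBuffer&, const Texture&) { return {}; } // PBR lighting + AO
Texture PostProcessPass(const Texture&)              { return {}; } // bloom, tone mapping

void RenderFrame(const Scene& scene)
{
    GBuffer gb      = GeometryPass(scene);
    Texture shadows = ShadowPass(scene);
    Texture lit     = LightingPass(gb, shadows);
    Texture framed  = PostProcessPass(lit);
    (void)framed;   // presenting the frame would go here
}
```

Keeping each effect in its own pass makes it easy to wrap GPU timers around them later, which is exactly the “understand where the cost is going” step above.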

1 Like

Regarding mobile rendering, specifically for VR with Vulkan:
Are there any plans to bring mesh distance fields to mobile? Using them for soft distance-field shadows on static lighting is really nice in SM5, and it would bring a lot to Mobile VR Vulkan projects.
Thanks!

1 Like

Q1: What is the planned/suggested replacement for the deprecated tessellation? More specifically, in terms of open-world (big) heightmaps. Is it Virtual Heightfield Mesh? :thinking:

Q2: How many people are working in the rendering team? :slight_smile:

So far we’ve been focusing on fixing the issues that make Lumen GI too dark indoors compared to the path tracer. We still have a lot of over-occlusion with small windows when using Software Ray Tracing, and with foliage. The Local Exposure feature can help here too.

I do think we’ll be adding some ambient lighting controls for the problem you are describing, like allowing the skylight to leak in to a controllable degree (similar to how skylight + AO works). Thanks for your feedback.

2 Likes

Hi,

Q1:
Just in case you missed it, some tips are here: Volumetric Clouds | Unreal Engine Documentation .
Cost will be very high if you execute too many texture fetches, so try to reduce those as much as possible, especially on low-end platforms. Use compressed texture formats. You could also try using the cloud shadow map instead of secondary tracing (secondary tracing is a lot more expensive), if that works for your use case. Try to use the conservative input (which must be cheap to evaluate to be valuable) to avoid more expensive lighting evaluation down the line, and reuse that input when evaluating your material.
We do render the clouds and the storm using volumetric clouds in Fortnite at 120Hz full screen (with some simplifications), so it is doable.
We have other optimization ideas in the pipeline, but no ETA as of today.
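For anyone profiling this, a few console variables are useful starting points. Below is a sketch of a ConsoleVariables.ini excerpt; the cvar names and values are from memory and may differ between engine versions, so verify them in your build before relying on them:

```
; Illustrative ConsoleVariables.ini excerpt; names/values unverified,
; check against your engine build.
; Fewer samples per view ray = cheaper (and softer) clouds.
r.VolumetricCloud.ViewRaySampleMaxCount=256
; Reflections can usually tolerate far fewer samples.
r.VolumetricCloud.ReflectionRaySampleMaxCount=20
; Prefer the cloud shadow map over (more expensive) secondary tracing.
r.VolumetricCloud.ShadowMap=1
```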

Q2:
For volumetric fog: this is supported. We bake light functions as 2D items in a simple atlas. It can be enabled with r.VolumetricFog.LightFunction 1 in UE5. This adds light function baking overhead.
For volumetric clouds: this could be achieved by following the same process (sharing the atlas), but it has not been done so far. Also, I imagine the light function for the atmospheric light will likely need to encompass a large area.