Ask Unreal Anything: Rendering | June 15, 2022 at 1PM EDT


That is indeed a feature that would be interesting to get in, especially for movies in combination with MRQ, as you say. There are multiple approaches that could be taken.

In short: one relatively straightforward way to implement this would be to add a RWTexture2DArray output to all the passes you need data from (the base pass and a few of our lighting passes, for instance) whenever an AOV output setting is enabled for the project. It could be set up on the view uniform buffer. Each slice would hold one piece of your data, and you can pack it however you want. With our depth prepass and early depth test enabled for base pass pixel shaders, the base pass would only write what is needed (but be careful: WPO disables the early depth test). You will also have to be careful with barriers for lighting passes that accumulate data, e.g. direct diffuse/specular, to avoid read/write conflicts. Then, at the end of the frame, you would read that data back on the CPU and store it on disk. This will of course reduce performance, so I would only recommend it with MRQ.
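As a rough sketch of what the binding side could look like, here is a minimal example using the engine’s shader parameter macros; the struct name, slice layout, and the AOV setting itself are hypothetical illustrations, not existing engine API.

#include "ShaderParameterMacros.h"

// Hypothetical RDG parameter struct binding the AOV array as a UAV.
// Slice packing is up to you, e.g. slice 0 = base color, slice 1 = world normal.
BEGIN_SHADER_PARAMETER_STRUCT(FAOVOutputParameters, )
    SHADER_PARAMETER_RDG_TEXTURE_UAV(RWTexture2DArray<float4>, AOVOutput)
END_SHADER_PARAMETER_STRUCT()

The same UAV would then be bound to every pass that contributes an AOV, with each pass writing into its designated slices.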


Hi, thank you for this AMA! I have a couple of questions!

  • I often see “Strata” in GitHub commits. Could you explain the benefits of Strata? Will Strata replace the current material system?
  • Nanite WPO is already kind of working in ue5-main, and some commits related to Nanite landscape are beginning to appear. Will we have Nanite skeletal meshes in the future? Nanite spline meshes?

Look inside GetDynamicLightingSplit, for LightAccumulator_AddSplit.

We would like to expose way more art direction parameters, but we’re currently limited by our use of Screen traces. They sample the scene color directly so they greatly limit our ability to have settings that only affect GI. Screen traces are needed to fix incorrect self-occlusion and other mismatches between the Lumen Scene and the main view. We’ll keep working on it.


We do want to have a final gather targeting the lower end, and there’s a partial one in there already, based on irradiance fields, if you want to check it out:
r.Lumen.IrradianceFieldGather 1

However, it still has some fundamental problems and needs work. At the moment it’s better to switch to Distance Field Ambient Occlusion when Lumen GI can’t be afforded (and that’s the engine’s default behavior when you set Global Illumination Quality to Medium).
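For reference, assuming the usual scalability mapping (0 = Low, 1 = Medium, 2 = High, 3 = Epic), that Medium fallback corresponds to:

sg.GlobalIlluminationQuality 1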


Hi there,

A lot of render passes aren’t working in Unreal 5 with Lumen enabled, e.g. ambient occlusion and wireframe (material).

Is there a fix coming?


There’s no way to have that control at the moment; Lumen’s screen traces will pick it up even if it could be hidden from the Lumen Scene.

I think we can solve the muzzle flash issue (and thanks for bringing that up) by detecting when a local light changes rapidly and speeding up lighting propagation within its area of influence. That’s something we want to do, but I’m not sure when we’ll get to it.

What are Strata materials? Will they allow for possibly easier NPR (non-photorealistic) rendering, or does NPR still require workarounds with forward rendering, Composure, and post-process volumes?

Are there any other future plans to integrate toon-style rendering more easily, without engine modification?


Lumen supports splitscreen in the latest code (which will become 5.1), so it technically works in stereo rendering. However Lumen GI can’t work with forward shading, which most VR projects use, and is unlikely to be affordable at the framerate and resolutions that VR apps require.


Currently, the only way to exclude emissive materials from Lumen is to disable it on the mesh via “Affect Dynamic Indirect Lighting”. Are there plans to allow exclusion on the material level?
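(For reference, that mesh-level switch is also exposed as a per-primitive property in code; a minimal sketch follows, with the helper name being purely illustrative.)

#include "Components/StaticMeshComponent.h"

// Code equivalent of the editor's "Affect Dynamic Indirect Lighting" checkbox;
// disabling it removes the mesh, emissive included, from Lumen's scene.
void ExcludeFromLumenSceneExample(UStaticMeshComponent* Mesh)
{
    Mesh->bAffectDynamicIndirectLighting = false;
    Mesh->MarkRenderStateDirty(); // recreate render state so the change takes effect
}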


Please add my vote for full support of Lumen / Nanite in VR. If it needs to be based on new algorithms / technology… so be it. But Unreal 5 without Lumen and Nanite in VR isn’t fully complete for us at the moment.

It should be a global development goal at Epic to make sure that all features inside Unreal are fully usable on every platform it supports. VR is becoming more and more mainstream. We were really disappointed to see that the UE5 release version still does not let us leverage the blockbuster new features that made UE5 so desirable when it was announced.


Lots of lights is more of a shadowing challenge, rather than a Lumen GI challenge. Lumen handles dozens of lights pretty well with its Surface Cache, which lets us cache their lighting, but the cost of computing shadows for all of those lights will still be high. This is something we are aware of and hope to improve in the future.

Question about Clouds Cost

Hello!

Q1: Volumetric clouds seem to cost a lot to render. Are there any plans in the near future to reduce this cost? Maybe you can suggest some settings that can help reduce the cost but still give nice-looking clouds?

Q2: Will it be possible to have volumetrics be occluded by light functions?
I understand light functions are a post process now, but have you considered changing that? I think they weren’t a post process in the past.

I mostly use Unreal Engine 5 for cinematics at the moment, and I have issues with ghosting that appears in high-contrast scenes when actors/objects move. Are there any alternatives or future updates coming to UE5 to reduce ghosting, besides switching over to the forward shading renderer?

Historically, UE has supported hardware tessellation for certain RHIs; however, many platforms, including consoles, never supported it. The reasoning behind that decision is that tessellation for any RHI requires regular maintenance, especially when optimizing for performance. In particular, on consoles we found that the performance limitations of the hardware did not justify the engineering effort of maintaining hardware tessellation and all of the rendering features that rely on it.

It is sometimes important to decide what is, and is not, worth investing in.

With Nanite’s ability to handle really high-poly meshes, we saw there was less of a need for HW tessellation, considering what it is commonly used for.

To better support that path, we are currently working on ways to pre-tessellate and displace static meshes more conveniently and with higher quality.

While some of those use cases can be covered well by Nanite or virtual heightfield terrain, the power of dynamic displacement mapping is well understood. Honestly, that has been on my mind since the early days of Nanite development as something I wanted to eventually support in Nanite, although it is no easy task. Research is in progress on that now, i.e. proprietary Nanite tessellation and displacement mapping. The hope is to create something far superior to what HW tessellation can do. That research is WIP, so I can’t really say much more about it or promise anything (research is inherently risk-taking and sometimes fails), but I’ll say I am personally very excited about it and the direction it’s going.


We’re not planning to support lighting channels for Lumen, as it gets really expensive to track indirect lighting per channel. It’s a feature that can be easily supported while you are handling lights individually, but it becomes much harder once lighting starts to bounce around the scene.

A shadow-casting directional light used just to add some ambient light will be very expensive. It’s probably better to boost the Sky Light.
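A minimal sketch of that, with the intensity value as a placeholder:

#include "Engine/SkyLight.h"
#include "Components/SkyLightComponent.h"

// Boost the Sky Light's intensity instead of adding a second
// shadow-casting directional light just for ambient fill.
void BoostSkyLightExample(ASkyLight* SkyLight)
{
    SkyLight->GetLightComponent()->SetIntensity(2.0f); // placeholder value
}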

Hi,
It is a broad question; I’ll assume you want to focus on becoming a real-time rendering engineer. Here are some interesting things to learn and investigate yourself (probably incomplete):

  • Learn how to submit work to the GPU. OpenGL is a bit old but still good to learn (I started with OpenGL 1.2 :) ). I’d suggest DX11 because it is also nice and simple and still has lots of modern features. After that, DX12 would be a natural next step (ray tracing, mesh shaders, etc.). If you do not want to do that, use an engine like Unreal as a starting base.
  • Learning ray tracing, as you did, is good! That is the way to go. PBRT is a great book!
  • Learn about shading: GPU Gems, the Real-Time Rendering books (materials, lighting, translucency, volumetrics).
  • Learn about post-processing (bloom, depth of field, tone mapping).
  • Learn about animation and simulation (skeletal, particles, fluids).
  • Learn about high-end tech: read papers from conferences: SIGGRAPH, i3D, Eurographics, HPG.

Pick a few things you like and combine them: render an animated mesh with your renderer, or generate lightmaps for your ray tracer, etc. Try to see how you can optimize some code. Create projects and share them online for anyone to see and learn from. That also acts as a sort of online portfolio demonstrating your interests.

There are also lots of developers on Twitter to follow and learn from.
I hope these few ideas will help you.


By “new technology if required” I mean, for example, making a parallel “Nanite VR” plugin for UE5 that would encapsulate a subset of the full Nanite implementation… That would be better in the short term than having nothing to work with…
