DF shadows are very fast and likely the fastest solution in the distance. Far cascades are not required for them to work. Yes, I’d recommend them if your project already requires distance fields to be generated and in memory.
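As a reminder, distance field shadows require the project to generate mesh distance fields in the first place. Roughly, the relevant project settings look like this (a sketch based on the standard renderer settings; exact names may vary by engine version, so double-check against your project settings UI):

```ini
; DefaultEngine.ini
[/Script/Engine.RendererSettings]
; Build mesh distance fields for static meshes in the project
r.GenerateMeshDistanceFields=True
; Allow distance field shadowing at runtime
r.DistanceFieldShadowing=1
```

On a DirectionalLight, distance field shadows then take over beyond the cascaded shadow map range (the Distance Field Shadow Distance setting on the light), which is why far cascades are not needed.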
You can set up custom keybindings for any of the view modes in Editor Preferences; just search 'path tracing' and you can then bind a key and switch between Lit and Path Tracer easily.
I use this for switching between Lit and Lumen Scene all the time.
Hi,
Strata is a material framework we use to experiment with things, so it is experimental and not user friendly for now.
What it is: it basically abstracts away previous restrictions by enabling users to create and come up with any style of material: better metal mixed with plastic, an anisotropic colored coat, coat over SSS, SSS that blends with metal, better-defined translucency, etc., by playing with matter (e.g. mixing and layering slabs of matter in a physically grounded way). A bit like MaterialX or Pixar's Lama, but taking into account the constraints of real-time performance.
It allows you to create a much wider variety of materials, but of course the more complex your material is, the more expensive it will be in terms of computation and memory.
Are there any plans to look into a solution for shader compilation stutter in the engine when using D3D12 or Vulkan? It's crept its way into most games using Unreal, and I've gotten used to launching games in D3D11 mode as a result. I know that there's a PSO Caching system (that's disabled by default), but there isn't really much good documentation on it for the PC side of things; the documentation only really covers Android support.
What kind of work would a solution entail in regards to more D3D12 optimizations?
Hello everybody.
-
Does Lumen have any difficulties with vegetation processing? Has this been fully resolved, or do we still have some issues in the new version? If so, which are the most relevant?
-
How is Nanite’s real-time rendering for Skeletal Meshes? Does it still have lower performance compared to Static Meshes?
This has been super enlightening already, big thanks to the team coming together to answer all these questions!
Q: When will it support (per-light) Indirect lighting intensity?
We would like to support this for parity with other rendering modes, as you said. The only challenge is that the Path Tracer currently handles indirect diffuse/glossy/reflections in a unified way. Scaling only certain paths may require carrying a bit of extra state along the path, which can have some performance impact.
Yes, we are actively trying to improve this. It's currently still in the development phase with some local prototypes, but we hope we can deliver an automatic PSO gathering system at runtime that tries to precache all PSOs which could possibly be needed. The precache request is made on a background thread as soon as an asset is loaded; all possibly needed PSOs are collected and precached at that point in time. This will also include Cascade and Niagara effects, all possible shadow and light types, and so on. Currently we are focusing on packaged games, but we are hoping to extend this to the editor as well at some point; we have to see how things go first.
We have plans to improve refractive and translucent materials, especially when it comes to colored rough refractions, colored shadows, self-shadows, and the like. Real-time caustics is one of the items on that list (but not at the top of the list right now).
I’d also recommend this book that Seb is a coauthor on!
https://www.realtimerendering.com/
A production engine and/or renderer can be really overwhelming. I wouldn't recommend that as the first place to look, but it is a great resource that isn't crazy hard to get into once you've learned the CG fundamentals, especially if you focus on just the shaders. They are a far smaller amount of code and have far fewer interdependencies, which makes them easier to understand as standalone snippets. Shaders are also becoming the primary location for graphics techniques. In the past, huge amounts of the algorithms would run on the CPU, but that has flipped: mostly the CPU is used for setup, and the bulk of graphics algorithms are in shaders now.
That said, there is a great amount that can be learned when you are starting out by writing your own version of things, no matter the domain. Write your own hash tables, search algorithms, BVH building, mesh importers, parsers, shader parameter reflection, etc. Even write stuff that might not seem relevant in the modern era, like your own basic software rasterizer. Don't reinvent the wheel unless you want to learn a lot about wheels, which is exactly what you want to do early on.
This is a very frequently asked question and is very much on our radar. However, as you mentioned, tracking additional outputs, while conceptually simple, actually greatly increases the memory footprint and the amount of state that needs to be carried around, which can be a drain on performance. Splitting reflections/glossy/indirect requires additional changes since the path tracer currently treats these in a unified way.
Another thing to keep in mind is that we want to prioritize real-time workflows in UE. Some of the benefits of light groups and AOVs are mainly in addressing notes that are hard to anticipate before having seen the image. We hope that in UE you have the ability to dial in some of these decisions earlier in the process.
That being said, I’ll also mention that we have several in-house projects that plan on dog-fooding the Path Tracer to help us explore options in this area.
About Strata, you can read my answer there: Ask Unreal Anything: Rendering | June 15, 2022 at 1PM EDT - #96 by SebHillaire
Yes foliage is still very much a work in progress with Lumen. We don’t yet support the Two Sided Foliage shading model correctly (lighting should enter from the backside of the leaf and have SubsurfaceColor applied). We also have a lot of over-occlusion with Software Ray Tracing, which is the default. It’s something we very much want to improve, but I’m not sure when we will have those solved.
This is not expected. It may depend on the number of samples you are using in MRQ. Because the denoiser is currently CPU based (OpenImageDenoise) there can be a drop in GPU usage if you have too many temporal samples because the denoiser runs once per temporal sample.
The upcoming release will have a mode to let you denoise only once per frame instead of once per temporal sample in MRQ, which should help alleviate this (as well as providing higher quality motion blur).
We have improved the documentation about the PSO cache for 5.0 (Optimizing Rendering With PSO Caches in Unreal Engine | Unreal Engine 5.0 Documentation) and also fixed a number of issues with it.
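For reference, enabling the existing PSO cache in a packaged project involves roughly these settings (a sketch based on the 5.0 documentation; verify the exact names against your engine version):

```ini
; DefaultGame.ini
[/Script/UnrealEd.ProjectPackagingSettings]
bShareMaterialShaderCode=True
bSharedMaterialNativeLibraries=True

; DefaultEngine.ini
[SystemSettings]
r.ShaderPipelineCache.Enabled=1
```

The recorded cache files then need to be gathered from play sessions and packaged back into the build; the documentation linked above walks through that loop.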
It seems like the AMA might be over soon. By chance, can we please get an answer to this question?
Would there be any possibility of a shader cross-compilation module that allowed the use of Open Shading Language?
Open Shading Language does not currently have a backend that targets realtime shading languages, so there isn't an easy path to doing this in the short term. Some of the design of OSL is also very much aimed at offline rendering (for example, accessing textures by file path) and not easily translatable to the concepts used in UE.
I have the exact same question and my workflow is pretty much the same. Thanks for posting this question @fizgig804
Please check for more information in this post:
Other non-Nanite geometry in the scene takes up frame time and won't scale with the number of screen pixels like Nanite does, but beyond that frame time cost it won't have any other negative interaction with Nanite. Non-Nanite pixels will occlude Nanite.
The primary reason to use lower resolution Nanite meshes is to save on disk space. The performance of lower-poly meshes may not be intuitive or behave like you are used to: commonly it won't improve perf, and sometimes it can even be slower. YMMV.