If you are in the mood for testing, you could try the FluidNinja plugin with PT, as it is currently free for the month (I'd say everyone should have this plugin ^.^)
We used live before but dropped it.
Though since seeing the latest update I am definitely getting back into it. It is a steal for free.
I made some preliminary tests with PT and Fluids: just like jblack said, it is not giving good results.
I’ll report when/if I do some other tests and find out something.
Thanks all!
I tried to download it and it threw some strange verification error and crashed my Epic Games client. I'll try it again when I have the opportunity; it looks awesome.
If you're just a team of two, here's what I'd suggest: the newest build of UE5 on _main has a ton of fixes and support for RT and Niagara fluids, and fluids worked excellently in my testing. It also includes PT support for decals and light functions, which essentially makes PT capable of handling all Unreal Engine content.
How is groom working for you? When I tried it with Echo, it functioned perfectly, except she gained a strange blonde highlight that seemed to be a product of the hair shading model. It looked realistic, more so than the raster view.
I’m not from a programming background myself, and I still found building UE5 to be relatively easy once you get the hang of it. The file size can be massive initially, but you just need to know what to delete and it can be pretty lean. LMK if this is a route you’re considering
We haven't messed with grooms much. We (read: my colleague) made some hair and a hanging skirt with XGen and imported them as grooms. Nothing too fancy.
The result is working for us so far, so we haven't made many iterations. (And the characters don't have hair.)
So far:
- In PT the grooms just look like a glow. Might be the shading problem you mentioned. I think it has happened before in Lumen, not sure.
- In Lumen we had a bug where the grooms revealed translucent objects through meshes. It was almost like they were acting as an alpha mask. This was the worst one, and we ended up having to remove grooms for that project (other reasons weighed on the decision).
- In Lumen they sometimes wouldn't render. Haven't quite figured out why. It isn't happening anymore.
- In Lumen they sometimes lost all lighting. That was in an earlier version of Unreal; it hasn't happened again.
So overall, from our limited experience it is working well. As with everything there were bugs, but so far we have it working.
I tried building Unreal from source before but failed miserably. The few tests I did with PT so far and your info were inspiring, so maybe sometime in the near future I'll try to build it again.
Tell me one thing: how much longer do your PT renders take compared to lumen?
A 20-minute render in Lumen took 1 hour in PT, albeit with no fancy console variables and not that many samples. I was expecting way more, tbh. This seems very manageable considering there are around 40 to 50 lights in that scene…
The grooms are glowing? Like an emissive mesh? That’s interesting. I’ll try to get some reference for the hair behavior I saw to elaborate on what I noticed.
Hair acting as an alpha mask seems like a very particular material error, and something I’ve never encountered before.
Hair not rendering in lumen, that is strange. Do you mean hair not receiving indirect lighting, or something else? Was it leaking or over-occluding?
If you can walk me through what you did to build (Does Unreal Forums have a DM feature? can’t remember), I might be able to help. I’ve built UE4, UE5 and NVRTX, so I have an idea of some of the things that might go wrong.
PT renders are generally very dependent on your graphics card, resolution, and scene complexity. I have a 2070s and render at 1080p, and usually in open world scenes.
Just as a benchmark, I queued up UE5's Content Examples starter scene. At 1080p, no TSR, Lumen at near-max settings with hardware ray tracing in reference mode (extremely high quality, bad real-time performance), the worst I could get the performance down to was 26 FPS in fullscreen (a glossy wall was the most brutal spot in the scene).
To clarify, do you mean 20 minutes total for the render, or 20 minutes per frame?
Even if using MRQ, 20 minutes is a lot of time in Lumen. I've gotten it as bad as two seconds per frame for scenes with a fast-moving camera and alpha effects (rainy street scene). I didn't try rendering that scene with PT because it was dedicated to exploring Lumen (the file has since corrupted :().
To answer your questions on the PT: if using MRQ, try to cull absolutely everything that isn’t in frame or visible in reflection. The fewer items PT has to evaluate, the faster it will run. Sky and sun lighting is very fast to evaluate, and emissives work but are noisy for a while.
I generally don't use thousands of samples; I've found (using post-process volume settings) that 400 samples plus the denoiser tends to yield satisfactory results.
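For reference, these are roughly the equivalent console variables if you'd rather drive it that way than through the post-process volume (I'm going from memory, so double-check the names and defaults against your engine version):
r.PathTracing.SamplesPerPixel 400
r.PathTracing.MaxBounces 4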
This is why I suggest building the new version of Unreal Engine if your project scope and pipeline allow it (I recommend this because a team of two could put up with _Main's occasional frustrations better than a larger team). The path tracer just got support for temporal denoising in addition to its spatial denoising, which according to the release notes (I've been trying to build it for the past few days) helps massively with maintaining fine details.
By using PT's new spatiotemporal denoising in the MRQ, you could get away with a much lower sample count, which could trim your ~hour-long render times into something a little more brisk. Helps with iteration.
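From the notes it looks like the new denoiser is toggled by a console variable along the lines of the one below, but that name is my guess until I've actually built it and can confirm:
r.PathTracing.TemporalDenoiser 1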
I want my interior GI scene to look as realistic as baked lightmap GI.
Is it possible to bake probes or voxels with DDGI-style tech (including Lumen) and mix baked GI probes with real-time GI probes?
Not to my knowledge, although I might say that is an interesting workflow decision, depending on your project requirements.
If you’re going for absolute realism for something offline, the path-tracer’s the way to go.
If you need extremely high-quality lighting without dynamic geometry or lighting, GPULM with RT reflections is the way to go. You can even get away with small bits of dynamic geo using SSGI and DF indirect shadows, which interpolate between lightmass probes to support indirect shadowing of dynamic objects.
Lumen, however, is an entirely dynamic system. It disables static lighting entirely (which is also a shader optimization). It uses screen-space radiance probes backed up by world-space probes in a clipmap around the frustum. The way it handles lighting is screen-dependent, which means it's challenging to map Lumen lighting to baked lighting.
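To sketch what I mean by "backed up by" (this is purely my mental model in made-up code, none of these types or functions exist in Lumen):

// Conceptual sketch of how a screen-space probe might fall back to the
// world-space clipmap when its own short-range traces miss.
struct FWorldProbeClipmap
{
    // Interpolate cached radiance from the world-space probe grid around a point.
    FLinearColor InterpolateRadiance(const FVector& WorldPos, const FVector& Dir) const;
};

struct FScreenProbe
{
    FVector WorldPosition;
    float MaxTraceDistance = 1000.0f;
    // Radiance this probe traced itself this frame; also returns the hit distance.
    FLinearColor SampleRadiance(const FVector& Dir, float& OutHitDistance) const;
};

FLinearColor GatherIndirect(const FScreenProbe& ScreenProbe, const FWorldProbeClipmap& Clipmap, const FVector& Dir)
{
    float HitDistance = 0.0f;
    FLinearColor Radiance = ScreenProbe.SampleRadiance(Dir, HitDistance);

    // If the short-range trace ran out of distance without hitting anything,
    // fall back to the interpolated world-space probes for distant lighting.
    if (HitDistance >= ScreenProbe.MaxTraceDistance)
    {
        Radiance = Clipmap.InterpolateRadiance(ScreenProbe.WorldPosition, Dir);
    }
    return Radiance;
}

The point being: the result at every pixel depends on what the screen probes could see from the current view, which is why it doesn't map cleanly onto baked data.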
I suppose I’m curious what would make you want to mix dynamic GI technology with baked lighting, at least given the specifics of the tech (Remedy does however do something similar in Control, where baked voxel GI is supported by a ~1 meter RTGI and RTAO trace).
This is unrelated, but @Yaeko you might want to check out UE5_Release_Engine_staging; a new feature came in for distance field stochastic semi-transparency for foliage that reminds me of an idea you suggested to resolve some occlusion issues with SDFs several months ago. Unsure if you had any hand in this directly, but I'm very excited for the feature set and happy you offered the idea.
What about having the probe/irradiance-cache computation offloaded from the device, setting up a server that streams cache updates to the device at some frequency lower than real time, and then on the device only doing the part that queries the caches for interpolation and lighting? How feasible would that be? Any resources in that direction?
Movie Render Queue has a Panorama option, but it is not working in 5.0.2.
It is also not working in the latest 5.1 (GitHub).
TwinMotion is based on Unreal and has good panorama export quality.
Now that is an interesting question, if I understand it correctly. You mean performing the probe ray tracing on a server, streaming the resulting probes to the client device, and then performing the final gather interpolation on-client?
That's a fascinating idea, although I'm a little thrown by it. Since Unreal Engine already supports pixel streaming, I suppose I'd be a little curious what the advantages of partial virtualization are over total virtualization. I also don't know the implications for cache or bandwidth performance if you're moving the sizeable Lumen atlases in and out over a network connection.
This would also have other implications for Lumen behavior, such as screen traces contributing to the screen probe gather, but that would depend on exactly what part of the pipeline you'd be offloading. I think Wright's SIGGRAPH presentation might clarify a few details of Lumen's architecture for you (I sure loved it), but I don't quite see what problem this paradigm would solve.
Thanks for the quick response - much appreciated!
I've kept a close eye on Lumen for a while, and even before Wright's presentation I went through his SIGGRAPH document. From my understanding, Lumen takes a hybrid approach to irradiance caching, where long-distance traces use a more traditional fixed-grid probe layout in world space, while the second cache uses a layout that's adaptive to surfaces on screen. My question was more related to the former.
The idea is that if there's a part of the computation that is more expensive in terms of memory and/or compute, but also changes less frequently and produces a smaller amount of data as its output, then that portion of the work can be offloaded and delta-streamed.
And with that in mind, the world-space cache seems like a good fit. I think of it more as a 3D grid of discrete data points than a monolithic 2D atlas (though I know that's how it typically gets stored); most areas in the 3D grid may change very infrequently, so they could be streamed individually such that the whole atlas updates incrementally.
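A crude sketch of what I mean by delta-streaming that grid (purely illustrative; the types, names and threshold are made up, not anything from Lumen):

// Send just the probes whose radiance changed beyond some threshold since the
// last update, instead of re-sending the whole cache.
struct FProbe
{
    TArray<FLinearColor> Radiance; // per-direction radiance for this probe

    float MaxDifference(const FProbe& Other) const
    {
        float MaxDiff = 0.0f;
        for (int32 i = 0; i < Radiance.Num(); ++i)
        {
            MaxDiff = FMath::Max(MaxDiff, (Radiance[i] - Other.Radiance[i]).GetLuminance());
        }
        return MaxDiff;
    }
};

struct FProbeDelta
{
    int32 ProbeIndex;              // position in the world-space grid
    TArray<FLinearColor> Radiance; // the new values for just this probe
};

TArray<FProbeDelta> BuildDeltaUpdate(const TArray<FProbe>& Current, const TArray<FProbe>& LastSent, float Threshold)
{
    TArray<FProbeDelta> Deltas;
    for (int32 i = 0; i < Current.Num(); ++i)
    {
        if (Current[i].MaxDifference(LastSent[i]) > Threshold)
        {
            Deltas.Add(FProbeDelta{ i, Current[i].Radiance });
        }
    }
    return Deltas; // serialize and stream only this to the client
}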
Here's a timed link to OTOY's 2021 presentation that shows what "seems" to be what I'm talking about:
They don't go into details, but it suggests that there's an irradiance cache offloaded to their RNDR distributed network, and then the device uses that cache for lighting when rendering locally. I may have misinterpreted what they said they're doing, though, given how light on details they were. I did reach out to OTOY as well to clarify.
For that kind of use case, screen traces would be very limited, as there's only a single object in view that would need them, and its screen coverage would be relatively small. If screen-trace density and/or accuracy could be tuned down, that could also help for rendering on the device.
Then for the lighting from world-space probes, that would update only as the scene representation around the object changes, like in the video when an occluding object comes in (the hand) or when emitting surfaces change (the laptop screen).
Given the small 3D area of interest in this use case, the 3D grid may even be able to be tuned to a higher density while still keeping the incremental update stream bandwidth relatively low.
That is quite an interesting idea, assuming I understand it to a good extent. I think what I'm first trying to understand is the use case for this: if you're talking about AR, then the lighting pipeline involved would be somewhat unconventional compared to the ones strictly virtual games utilise. I'm not particularly familiar with AR lighting methods beyond some true basics, but I'm curious what you mean by 'producing a smaller amount of data as its output'. By output, do you mean a scene's radiance? Do you mean a specific buffer? I'm just trying to do my best to understand your question.
As for AR and the use case, it's basically what the video shows - there's some "invisible" representation of the scene around the object being tracked, which is then expected to illuminate it. Side-stepping the question of how that scene representation gets generated, for rendering alone the question is how to make it illuminate the tracked object such that it feels like it belongs in that environment. That's where GI comes in to meet that need. An approximation could be just an IBL/environment map (i.e. a cube map), but that may not suffice for some criteria of believability. There's also soft shadows, soft reflections and emission - all of which can be accounted for by a good GI solution (assuming the generated scene representation is adequate, but that discussion is out of scope).
From my understanding, the idea of caching irradiance for rendering - no matter the specifics of the cache's format/representation - is that there are two parts:
The first part generates the cache by actually tracing against the scene (whatever that may involve in terms of representation and tracing approach), but instead of starting from a raster image for the primary rays, it starts from multiple "origin points" in the scene. For each point it then accumulates radiance while tracking directionality at some level of granularity. The "output" is some representation of the cached radiance on a per-origin, per-direction basis.
The second part is rendering an image from a given perspective where, instead of tracing against the scene directly, it "traces against" (or "collects from") the cached radiance data with some interpolation between points.
The first part can be arbitrarily expensive at arbitrary accuracy and may change infrequently.
The second part can be more manageable cost-wise and so can change more frequently (per frame).
The idea is then to separate the two across devices, where the expensive, infrequently changing part happens on a beefier device somewhere else, while the cheaper, frequent part happens on the low-end device.
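Just to illustrate the split I have in mind - very hand-wavy pseudocode, with every type and function made up for this example, nothing tied to Lumen:

// Part 1, on the server: expensive and infrequent. Trace the scene from each
// cached origin point and store radiance per direction.
void UpdateIrradianceCache(const FSceneRepresentation& Scene, TArray<FCachedProbe>& Cache)
{
    for (FCachedProbe& Probe : Cache)
    {
        for (int32 Dir = 0; Dir < Probe.NumDirections(); ++Dir)
        {
            Probe.SetRadiance(Dir, TraceScene(Scene, Probe.Origin, Probe.Direction(Dir)));
        }
    }
    // Changed probes then get delta-streamed to the device.
}

// Part 2, on the device: cheap, per frame. Shade by interpolating the cache
// instead of tracing the scene directly.
FLinearColor ShadePoint(const TArray<FCachedProbe>& Cache, const FVector& Position, const FVector& Normal)
{
    return InterpolateCache(Cache, Position, Normal);
}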
I'm deliberately avoiding specifying any more than that, just to get the main idea across (assuming I understand Lumen's architecture well enough), so as to leave room for implementation/design decisions to fall wherever works best for Lumen.
Hope this makes more sense now - thanks for engaging with my ramblings
I am happy to! This is a very interesting problem. You are correct that IBL/env maps serve as a common and cheap way of lighting, and they do usually break down because the mapping doesn't account for self-reflections and occlusion.
What you're saying reminds me of an old paper I read on how spherical harmonics were used to light Avatar (2009), and they also serve as accumulation buffers for many modern RTGI systems (e.g. Quake 2 RTX). What it sounds like you're articulating is that generating radiance is a separate operation from evaluating it for a scene, which is true.
It sounds like you're essentially describing asynchronous sparse radiance computation on a server, and streaming the radiance probes back to the client device, if I understand you correctly. The question of what data would need to be exchanged to maintain a scene representation to compute radiance against is interesting, but the core concept seems entirely valid.
Regarding Lumen: my understanding is that the radiance probes are all unfolded into octahedral maps and packed into an atlas that is streamed in and out. That radiance is accumulated on the probes for very stable distant lighting, but I suppose what I'm wondering is what job the client device would be doing with regard to GI. With SH you're just evaluating the precomputed radiance transfer, but with the octahedral probes you still have noisy data that needs to be filtered and integrated with G-buffer information, not to mention screen traces. Would the server-side computer need to render viewport-specific information? And if so, wouldn't pixel streaming then make sense again?
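For anyone following along, "unfolded into octahedral maps" roughly means mapping a ray direction onto a 2D square so a probe's radiance can be stored as a small tile in a flat atlas. The standard mapping looks something like this (my own sketch of the common technique, not Lumen's actual code):

// Map a unit direction onto [0,1]^2 octahedral coordinates.
FVector2D OctahedralEncode(FVector N)
{
    N /= (FMath::Abs(N.X) + FMath::Abs(N.Y) + FMath::Abs(N.Z));
    FVector2D Oct(N.X, N.Y);
    if (N.Z < 0.0f)
    {
        // Fold the lower hemisphere over the edges of the square.
        Oct = FVector2D(1.0f - FMath::Abs(N.Y), 1.0f - FMath::Abs(N.X)) *
              FVector2D(N.X >= 0.0f ? 1.0f : -1.0f, N.Y >= 0.0f ? 1.0f : -1.0f);
    }
    return Oct * 0.5f + FVector2D(0.5f, 0.5f); // remap [-1,1] to [0,1]
}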
I understand this is a very specific idea, and my apologies if I am misunderstanding it. I'm very curious about your ideas, and feel free to PM me if you'd like to keep discussing this without filling up the Lumen thread.
Yes, so my thought was that given how Lumen has these two overall components, one being scene-dependent but view-independent and the other view-dependent, that's the natural place for the client/server split: anything that's view-dependent would be computed on the device, and anything that isn't would be on the server. I realize that in Lumen's case there is irradiance caching and tracing work on both sides, but in the specific use case I'm describing, the client side's accuracy/detail level could be toned down for lower-end devices.
It is similar to how ARKit does things with light probes from light estimation based on live LiDAR data, except that output is expected to be consumed by a third-party renderer or engine for lighting. It updates that light-probe data very slowly, so there can be a ~3 s lag (e.g. for reflections).
You may also be able to follow this thread I’ve started with OTOY, which recently got a response from Jules:
https://help.otoy.com/hc/en-us/requests/213301
I've added some follow-up questions along the lines you're asking to get more clarity, so we'll see what they say, but it does sound like what I suspected they're doing for that demo is along the lines of what I'm describing (they did say they're using irradiance caching, though they didn't go into more detail than that yet).
Ok, I found a bug, once again with Lights.
In image one, everything works as it should, but I moved the lasers (right side) a bit so that they don't get reflected behind me and don't go along the wall on the left, and subsequently appear on the left side of the image:
Result: Everything fine:
But… if I put them where they belong, Lumen completely loses it:
This isn't the first time I've observed this, but it is a very visible example.
Lasers are made out of:
- Static mesh with simple material (translucent)
- Point light, with diameter/length set
That's it, nothing special.
Lumen only breaks in areas that are affected by the light of said point-lights (lasers), but works completely fine on the other side of the wall, as can be seen in the next image:
Another perspective of the issue, same order as the first two:
Working fine (no reflections → fewer point lights):
And, with the lasers being reflected (more lights):
I mean, I can work around this now that I know what is causing it, but still, a bunch of point lights shouldn't "brick" Lumen, so I would be very grateful if this could be fixed.
EDIT:
Found out some more:
- the appearance of the issue depends on the length of the point light/laser
- the amount of "breaking" depends on the number of lights.
3x amount of lasers, short distance, still fine:
3x lasers, long distance, broken again:
And on top of that, Lumen is now also broken on the other side of the wall, which means that more lights can break things even more than initially found:
Something is broken with long point lights.
Possible workaround (for now): use multiple shorter lights to cover the same distance, since this issue does not appear with short lights.
EDIT:
I can also verify that this is caused by the point lights.
If the point lights are disabled, the issue is gone:
EDIT:
Further, I can verify that the issue is gone if I use 4 point lights of smaller length instead of one long one… which is suboptimal, but proves my claim:
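If anyone wants to reproduce the workaround from code rather than by hand, something like this is what I mean (just an illustration; the function name, rotation handling and segment count are arbitrary):

#include "Engine/World.h"
#include "Engine/PointLight.h"
#include "Components/PointLightComponent.h"

// Instead of one point light with a very long source length, spawn several
// shorter segments covering the same span.
void SpawnLaserSegments(UWorld* World, const FVector& Start, const FVector& End, int32 NumSegments)
{
    const FVector Step = (End - Start) / NumSegments;
    const FRotator Rotation = (End - Start).Rotation();
    for (int32 i = 0; i < NumSegments; ++i)
    {
        const FVector SegmentCenter = Start + Step * (i + 0.5f);
        APointLight* Light = World->SpawnActor<APointLight>(SegmentCenter, Rotation);
        // Each segment only covers its own share of the distance.
        Light->PointLightComponent->SetSourceLength(Step.Size());
    }
}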
Hello everyone,
I thought I'd give my question a shot here in the feedback thread - maybe someone can join the discussion and help me.
Are there more settings to speed up Lumen's update? In the post-process volume there are Lumen Scene Lighting Update Speed and Final Gather Lighting Update Speed - the sliders show 4.0 as the maximum, but it is possible to type any number beyond that - does that mean it can go higher than 4.0?
I have an archviz scene with multiple cameras, and I quickly switch the view target to one of those cameras, take a screenshot, and move to the next camera (the scene is also changing in terms of objects). I put a delay between switching cameras and taking the screenshot so Lumen can settle (sometimes I see residual light from a previously rendered view for a few frames), but I would like this delay to be as low as possible (ideally I want to take the screenshot immediately). Is it possible at all? I do not care about the FPS being stable or the scene running smoothly; I only care about a good screenshot output and getting it as quickly as possible right after switching view or changing some objects in the scene.
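For context, my capture loop is roughly the following (a simplified sketch of what I do; the fixed delay is the part I'd like to shrink to near zero):

#include "GameFramework/PlayerController.h"
#include "TimerManager.h"
#include "UnrealClient.h" // FScreenshotRequest

// Cut to a camera, wait a fixed delay for Lumen to settle, then request the screenshot.
void CaptureFromCamera(APlayerController* PC, AActor* Camera, float SettleDelay)
{
    PC->SetViewTargetWithBlend(Camera, 0.0f); // instant cut, no blend

    FTimerHandle Handle;
    PC->GetWorldTimerManager().SetTimer(Handle, []()
    {
        FScreenshotRequest::RequestScreenshot(/*bInShowUI=*/false);
    }, SettleDelay, /*bLoop=*/false);
}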
Besides those settings in the PP volume, I also use these cvars:
r.LumenScene.Radiosity.Temporal 0
r.Lumen.ScreenProbeGather.RadianceCache 0
r.LumenScene.FastCameraMode 1
Thank you!