Per Instance Custom Data (PICD) lets you vary material properties per instance while still using a single HISM. Lumen's surface cache can't evaluate that, so the HISM turns black in the scene, while the very same HISM works fine under Lumen if it isn't set up with PICD.
It still must be recalculating a lot, otherwise performance would be better.
What I would like is: if nothing changes around a probe, skip it, since in most games the vast majority of probes never change.
Probably not that easy to do, but it could bring a lot of performance.
Only SWRT, since HWRT is kinda dumb (for now - until “RT-Cores” have become the standard even in potato-tier hardware etc.).
Lumen has a bit of an issue where it "cuts off" in the distance and simply stops doing anything, which makes the transition very visible (especially in large interiors) and limits the size of what you can build, even now that VSM gives us effectively infinite shadow draw distance.
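For reference, a sketch of the console variables that control this range on the hardware path - far-field tracing is HWRT-only, so it doesn't help the SWRT case above (names from recent 5.x builds; verify in yours):

```
; ConsoleVariables.ini sketch - HWRT far field (assumption: available in your engine version)
r.LumenScene.FarField 1                          ; continue traces beyond the near-field cutoff
r.LumenScene.FarField.MaxTraceDistance 1000000   ; far-field trace range, in centimeters
```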
It still doesn't work for many meshes, but there is a workaround where you can subtract one mesh from the MDF's shape… it would be nice if we could just tell the engine "cut actual physical holes into that Mesh Distance Field" - which is possible, but needs manual intervention right now.
Lumen hates small bushes with dense leaves (if they aren't floating in mid-air), since they produce a dense MDF, and Lumen's own capabilities for "dealing with this" aren't enough.
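The usual damage control is to coarsen the distance-field representation rather than fight it - a sketch with two knobs I'd try first (both are real CVars, but defaults and behavior shift between versions, so treat the values as assumptions):

```
; ConsoleVariables.ini sketch - taming dense-foliage distance fields
r.DistanceFields.DefaultVoxelDensity 0.1   ; project-wide MDF density; lower = coarser, cheaper fields
r.Lumen.TraceMeshSDFs 0                    ; trace only the Global Distance Field (softer, but cheaper)
```

Per mesh, lowering "Distance Field Resolution Scale" in the Static Mesh editor does the same thing more surgically.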
I think Unreal 5 is an engine made for the next 10 years (I read that somewhere), so I think it's not designed for the current gen. It's a different thing that they are trying to make it 'compatible', but not 'native', for this gen. For example, a 3060 is quite a weak GPU for this technology in its current state (I wish it were better, running GI at 540 FPS, of course, but no). And it's not comparable with consoles, as those are closed systems with much deeper optimization.
So, for me: the current tech is almost UE4 (or UE5, but using UE4 workflows). UE5 with Lumen is something like prototyping tech. Developers can start developing with it, but should release games some time later, when the tech is affordable for gamers, not only for devs.
Meh, I am fine with my ~40ish fps on a 3060, with TSR, Lumen, Nanite and VSMs @ 1080p. (If nvidia hadn't "nvidia'd" the 4060, it could run this at 60… but we all know how that went - 3060 2.0.)
Those "meh" GPUs (the 60 class for nvidia) have never been good at running the newest stuff at max settings at their target resolution - for decades now.
It's true, though, that many games waste performance on nonsense.
@Yaeko On a 3060 at 1080p, don't use TSR at native resolution (as its inventor said, it's not meant for native resolution). Use FXAA or TAA (I personally like FXAA, for reasons everyone should know by now). Keep VSMs on Medium and Lumen on High (which Epic says targets 60 fps), and put every single post-process setting on Low or off. You should get 51 fps in a busy scene, 60 fps in a simple scene.
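For anyone who wants to try this, a minimal sketch of those recommendations as scalability/console variables (standard UE5 CVar names; the exact scalability mappings can differ per project):

```
; ConsoleVariables.ini sketch of the settings above
r.AntiAliasingMethod 1          ; 1 = FXAA, 2 = TAA (4 would be TSR)
sg.ShadowQuality 1              ; Medium - drives VSM quality
sg.GlobalIlluminationQuality 2  ; High - the Lumen level Epic targets at 60 fps
sg.PostProcessQuality 0         ; Low
```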
You really need to start showing your scenes; your numbers are impossible in anything that looks good or is reasonably complex (i.e. an actual game, more complex than Fortnite).
A 3060 cannot do this, not even at 80% scaling (which is what I am running, hence why TSR is enabled…).
Maaaybe if you have zero translucency etc. in the scene, no additional lights and so on - then, maybe (and with Lumen cut down even further than just "High" settings).
But those are limitations I consider "unreasonable" to work with.
It was completely fine until Epic messed up its performance in 5.2…
It was almost FREE to enable in 5.0 (EDIT: just checked, I pay 2 fps for the 5.0 TSR at 1440p - negligible, for the result); nowadays it costs actual milliseconds for some reason.
Idk if they stripped the RDNA2 features out of the engine, but whatever they did, it hurt performance big time. (It costs me 10 fps in 5.2 x_X… which is why I simply put it at 80% scaling and got back 20 fps, gaining 10 in total.)
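For clarity, the 80% workaround is just these two settings (standard CVars; the exact fps delta will obviously vary per scene and engine version):

```
; ConsoleVariables.ini sketch - TSR with 80% internal resolution
r.AntiAliasingMethod 4   ; TSR
r.ScreenPercentage 80    ; render at 80%, let TSR upscale to the output resolution
```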
Well… I still don't agree with you. The 60 series are low-end gaming GPUs:
xx50, xx50 Ti, xx60, xx60 Ti, xx70, xx70 Ti, xx80, xx90, xx90 Ti… it's the third card from the bottom (not even in the middle, which is the xx70), and I wouldn't consider the 50 series gaming cards at all. The minimum decent gaming cards are the 60 Ti's (the gap versus the plain 60s is huge).
Not sure how you are comparing the 3060 versus the 3080 (!), but here you can get a global idea of how all these GPUs perform:
If you want to get 60 FPS on a 3060 with Lumen enabled, I think you can wait eternally. When I had my 60-series card, I could play quite well with High and Ultra presets at 30-60 FPS, but I needed to lower graphics in super-demanding games. Unreal + Lumen can be considered a super-demanding game right now; don't expect to run it at full capacity on a low-end GPU, even if they are trying to make it very optimized and scalable.
This is only my opinion/advice, even if I, too, think that Lumen still needs to improve quite a bit.
Ok, I see your 3060-3080 comparison now. But what point are you making with it? 70 FPS at native 4K seems good for a 3080, doesn't it?
If your 3060 is the mobile version, then it's a super-weak GPU, not just weak. And what TDP (power limit) does it have? It's like trying to run Crysis on a laptop back in those days. A 'low-end' laptop is not exactly recommendable for running our nowadays Crysis (UE5 with Lumen).
They probably can't work miracles to win back more milliseconds on those kinds of cards. The only real lever may be scaling methods.
Using 1080p for such a comparison is not fair to the high-end cards, since they will often be bottlenecked by the CPU there (which in some games happens even at 4K) - but 4K would be unfair to the 3060, since it suffers heavily at that resolution.
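It's easy enough to check which side of the bottleneck a given benchmark run lands on; these built-in console commands show it directly:

```
stat unit   ; Frame / Game / Draw / GPU times - if Game or Draw tracks Frame, you are CPU-bound
stat gpu    ; per-pass GPU timings, to see where the GPU milliseconds actually go
```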
The 4090 is on average 3.27x faster than a 3060 (or 4060)… and the 4090 isn't even the fastest card nvidia could have made. Now imagine what happens if RTX 5000 brings a (low) performance improvement of 30%… or a high one of 50-60%, or - to say the unthinkable - 80% or so…
The 3060 and 4060 will age like old milk, and it has already started.
You seem to misunderstand: game developers don't care what people buy. (They don't care that most people have 8 GB cards or less - they just yeet it out, and now those cards run into issues unless you turn the settings down.)
Afaik, even nvidia themselves admitted that they screwed up.
That card already has issues at 1080p in half the new games, and in some even with lowered settings (if you consider 60 fps the target).
And then nvidia made a second 3060, called it the 4060… and it's just as slow as a 3060.
EDIT: Did you look at some of the upcoming games, and how they literally scream "I am going to melt your GPU"? The outdoor parts of "Star Wars Outlaws", for example… I doubt even my 6900XT can still get 60 fps there fully maxed out - and that thing is ~2x a 3060. (And I haven't even taken into account that - if AC Valhalla is any baseline - the game will perform a lot better on AMD than on nvidia…)
You may disagree with me… but you will remember my words in 2024/2025.
I disagree. I think this is something the Epic team needs to see, as they have made this mistake in their six-billion-dollar game (FN). My posts #1048 and #1044 show how, and we need micro-optimizations. For the record though, I did recently find something very interesting in the VSM documentation - two things that might save me the performance I'm talking about. (Nope, already enabled in the project.)
The Epic team needs to know that not everyone thinks like the casual, DLSS-craving majority when it comes to UE5's performance. This includes the Lumen team.
I have said my piece and made my case to the engine devs (even the Lumen team).
Slapping on an upscaler and deciding we need better GPUs than the current gen just to resolution-scale is unacceptable to at least one of their customers (me, my studio, and plenty of gamers included, tbh).
They get plenty of sugar coating from casuals. I have to balance that crap out since no one else will.
We should just stop abusing the Lumen thread for something this nonsensical (even if related)… I would prefer it to be used for Lumen again, not for "Epic, can't run Lumen at XX fps".
To go against the grain here: I would actually like the team to lower performance even more, so we could have even fewer frames per second.
For quality reasons, of course.
This seems aligned with Epic's interests. They seem keen to push Unreal for film, but for Unreal to replace offline renderers, higher quality is needed.
One could reply that I can already do that with MRQ, but in fact, no matter how much you push the settings and the CVars - even if you get down to 1 frame per second - the quality hits a ceiling. For example, you cannot get high-quality, sharp area shadows generated by secondary bounces; you only get very blurry, low-quality ones.
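For context, that ceiling holds even with the kind of overrides you'd put into MRQ's Console Variables setting - a sketch of the usual suspects (real CVars, but their defaults and effect vary by version, so treat the values as assumptions):

```
; MRQ console-variable overrides - a sketch, tune per project
r.Lumen.ScreenProbeGather.DownsampleFactor 8   ; more screen probes (the default, 16, is coarser)
r.Lumen.Reflections.DownsampleFactor 1         ; trace reflections at full resolution
```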
And the Path Tracer is not a good replacement for Lumen, at least not in the shape it is in now.
And Lumen is almost there; we just need a few more improvements.
Fingers crossed that the surface-cache-less version is a work in progress. I very much like the surface cache, but it's going to take a while before it reaches its peak, quality-wise. I'd much rather brute-force it, even if frame rates are going to be terrible; upscaling should help a bit, and hopefully, with further TSR and other upscaler improvements, a surface-cache-less Lumen would fulfill all the quality-heads' needs until SWRT catches up. (I am praying for a caustics implementation sometime down the line!! Even a simple approximate or faked caustic-shadow overlay would be awesome, like how those Minecraft shader mods do colored shadows.)
I am seriously in agreement with you there. While I do really enjoy Lumen's role as a game technology, I'm starting to see the incredible utility in having something that can be 'interactive' without needing all of the shortcuts that game Lumen has. Baking things out with MRQ has given me pretty astonishing results.
I’m not arguing with you, but I’m curious if you have an example of the phenomenon you’re talking about?
When I've compared maxed-out Lumen vs. PT output, I agree that there is optical behavior Lumen simply isn't generating, but in my experience Lumen has two main breakdown points:
Contact shadows. Maybe this is unavoidable, but even with the HWRT version of their contact-traces system, Lumen can very much have a 'painted-in' appearance to the fine shadows in corners and crannies.
This is a slide taken from the Lumen SIGGRAPH presentation last year. It illustrates their different light-transport methods by range, and IMO the area where Lumen struggles most profoundly is between contact AO and the screen-space radiance probes. Where I see the path tracer consistently outperforming Lumen is when there's detail that's less arbitrary than AO shading, yet still smaller than what the (very, very downsampled) screen-probe gather can resolve.
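(For anyone wanting to isolate that layer, the short-range "contact" traces are exposed as CVars - I believe these are the names in recent 5.x builds, but take them as assumptions and verify in your version:

```
; Lumen's short-range contact AO layer - assumed names, verify in your build
r.Lumen.ScreenProbeGather.ShortRangeAO 1                     ; screen-space contact AO on/off
r.Lumen.ScreenProbeGather.ShortRangeAO.HardwareRayTracing 1  ; the HWRT contact-traces variant
```
)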
Specular noise and resolve. I'm absolutely beating a dead horse here; anyone who's seriously worked with Lumen, plus all the engine devs, knows that Lumen reflection is very unstable at high roughnesses (.2-.4). If you have noise in the diffuse final gather and you want to render it via the MRQ, you can just crank the quality setting higher and you're fine. But if you increase the Lumen reflection quality slider, the noise gets worse. There is no way, no matter how far you crank the settings, to get genuinely stable reflections out of Lumen, and I think that was part of the point @Sebastian was making, if I understand correctly. And even with all that noise, Lumen often over-blurs fine features, so detailed specular lighting can get smeared into uselessness.
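For reference, the knobs I'd poke when chasing that noise - these CVars exist, though their defaults and interactions shift between versions, so treat the values as a sketch:

```
; reflection-noise levers - a sketch
r.Lumen.Reflections.Temporal 1                ; temporal accumulation (trades noise for ghosting/blur)
r.Lumen.Reflections.MaxRoughnessToTrace 0.4   ; above this roughness, fall back to the stabler probe path
```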
After the noise improvements and transparent reflections from 5.1; the improved foliage support, surface cache, and thin-feature shading from 5.2; and the (inbound) multi-bounce reflections and HWRT optimizations of 5.3, Lumen is leagues more capable a system than it was in 5.0. When they get Substrate fully working, I think we're really close to having something that can fully match the PT in quality. I'm really excited.
We need to keep Lumen separate from non-realtime applications.
This is the biggest problem with Unreal: all of the rendering crap. The more muddled Unreal becomes, the harder it becomes to optimize.
Krzysztof.N did say that struggling to improve the surface-cache representation quality has been a major bottleneck for Lumen. I'd be really curious to know what a surface-cache-less version of Lumen would be - would it just be the ScreenProbeGather used recursively somehow? How would they stop the cost from exploding? Is it just hit lighting for GI? I'm truly curious and really don't know.
Well, I have good news for you then. I heard the Substrate team is working on colored shadows right now, although I don't really know when it'll be coming out. I got to try colored shadows in NVRTX, and while the effect itself was really, really impressive, the UI was so unintuitive that it wasn't practical to use, unfortunately. If Epic can make it 'just work', I think we could be in a very exciting place.
Fortunately, I haven't yet run into issues with reflections. But I'm mostly rendering vegetation and maybe non-mirror metals. I also found the AO okay - imperfect, and yes, lacking contact detail - but still okay in my projects.
Virtual shadow maps can also be dialed in a way that provides good contact shadows and, with some engine tweaking, maybe even reasonable soft or area shadows where the shadow falls at a distance from the object.
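That "engine tweaking" would mostly be the SMRT sampling CVars - these exist in UE5's VSM implementation, though the values here are just a starting-point sketch:

```
; VSM / SMRT penumbra tuning - a sketch
r.Shadow.Virtual.SMRT.RayCountDirectional 8        ; more rays = smoother, more stable penumbrae
r.Shadow.Virtual.SMRT.SamplesPerRayDirectional 4   ; samples along each shadow ray
```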
Maybe I'm not explaining it very well, but I rendered some images to show where Unreal could benefit from improvements. Look at what kind of shadow the object gets from indirect lighting: it's sharp at the point of contact with the ground but becomes very soft as the shadow gains distance.
In the second image there's a light projected on the wall, and that projected light again generates a soft shadow that stays sharp at the base, at the contact points.
And a bit off-topic here, but as far as I know, you also cannot get this kind of depth of field - where the DOF is so large that the blurred area becomes semi-transparent, and the edges are, again, transparent. If you can get that, please tell me, because otherwise I want to build my own solution for it in Unreal, and I'm guessing that will take me quite some time.
Virtual shadow maps are definitely closer to traditional shadow maps in terms of still requiring tuning to get good results. I've never had to do more than increase the SPP for RT shadows and enable the Nanite ray correction, but VSMs required a lot of tuning for acceptable results. There's a reason the documentation says 'plausible' contact shadows and not 'physically correct'.
I see what you’re saying here, but in my eyes, that seems like pretty appropriate radiometric behavior. If you have comparisons to the PT however, that could shine more light onto the behavior. The path-tracer is the ultimate ground truth for Unreal, and I think inter-engine comparisons would require exporting the scene as an FBX and comparing against other renderers.
If you're referring to the UE rasterizer's bad DOF, I'm not exactly sure what it would take to fix that, as it involves a very specific set of optical phenomena. The big issue may be sorting problems due to translucency, and I'm not sure how a custom solution could performantly improve on that, unfortunately.