The first screenshot is built in the editor; the second one is built entirely at runtime, including the lights (the meshes are also imported at runtime). For some reason the runtime lighting and reflections are worse, and they also take a bigger performance hit.
Furthermore, runtime-placed lights seem to emit only in screen space: in this case I moved my camera past the light's range, so the light no longer reaches the camera, while that doesn't happen in the editor.
Yes, none of those settings make any difference. The following screenshots were both taken at Epic shading quality.
In the editor, a light's contribution doesn't drop out when the source leaves screen space, while it does in our runtime example. The only difference is that these light actors were spawned at runtime rather than placed in the editor beforehand.
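For reference, runtime spawning looks roughly like this. This is a minimal sketch, not our exact code; the function name, intensity, and radius values are illustrative, matched to whatever the editor-placed light uses:

```cpp
#include "Engine/World.h"
#include "Engine/PointLight.h"
#include "Components/PointLightComponent.h"

// Sketch: spawn a point light at runtime with the same parameters
// as an editor-placed one. Values here are placeholders.
void SpawnRuntimePointLight(UWorld* World, const FVector& Location)
{
    if (!World) return;

    APointLight* Light = World->SpawnActor<APointLight>(
        Location, FRotator::ZeroRotator);
    if (!Light) return;

    UPointLightComponent* LightComp = Light->PointLightComponent;
    // Runtime-spawned lights must be Movable; Static/Stationary mobility
    // relies on precomputed lighting data that doesn't exist for them.
    LightComp->SetMobility(EComponentMobility::Movable);
    LightComp->SetIntensity(5000.f);          // match the editor light
    LightComp->SetAttenuationRadius(1000.f);  // match the editor light
}
```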
Could it be that the parts imported at runtime have different light attenuation settings? The more overlapping attenuation radii you get, the more costly the lighting calculations become.
Although, if the runtime version is the lower picture, it does look darker.
The lower picture seems to have the lights in a different position, as if they're behind the door frame instead of inside the room. Could it be a missing position offset on import/creation relative to the world origin?
I’ll have to confirm the first question.
As for the second, all lights, positions, and parameter values are identical in both; that was the purpose of the test. If I move my camera slightly lower, so the lights sit closer to the center of screen space, the light reaches the camera again.
Here's a quick demonstration. The first scene was placed in the editor beforehand, while the second one is placed entirely at runtime. All materials and lights have the same parameter values.
It's either camera auto exposure, or are you using the experimental Screen Space Global Illumination?
It would make sense that the scene gets darker when you look up, since no lighting information is present in screen space anymore.
I just checked: the only GI we're using is Lumen in the engine rendering settings, and in the post-process volume everything is at the Lumen defaults except sky leaking. I assume that still shouldn't cause a difference between the editor and runtime scenes, since the editor one works completely fine.
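For completeness, the relevant project settings as they appear in `DefaultEngine.ini` (these are the standard UE5 cvar names; the values shown reflect the setup described above):

```ini
[/Script/Engine.RendererSettings]
r.DynamicGlobalIlluminationMethod=1   ; 1 = Lumen GI
r.ReflectionMethod=1                  ; 1 = Lumen reflections
r.Shadow.Virtual.Enable=1             ; Virtual Shadow Maps (Beta)
```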
The only Beta feature enabled is Virtual Shadow Maps.
Yes, I'm aware of the big/complex mesh limitation in UE5; however, it works fine in the editor in our case, so that shouldn't be the cause.
After further testing, I figured out it's something to do with the mesh importer. A pre-cached mesh loaded at runtime, combined with runtime-spawned light actors, works exactly the same as the editor version: a crisp result. So we'll look into that part.
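For context, the pre-cached path that works looks roughly like this: an existing (cooked) `UStaticMesh` asset is loaded by path and assigned to a runtime-spawned `AStaticMeshActor`. This is a sketch only, and the asset path is illustrative, not our real one:

```cpp
#include "Engine/World.h"
#include "Engine/StaticMesh.h"
#include "Engine/StaticMeshActor.h"

// Sketch: spawn a static mesh actor at runtime from a pre-cached asset.
// The asset path below is a placeholder.
void SpawnCachedMesh(UWorld* World, const FVector& Location)
{
    if (!World) return;

    UStaticMesh* Mesh = LoadObject<UStaticMesh>(
        nullptr, TEXT("/Game/Meshes/SM_Room.SM_Room"));
    if (!Mesh) return;

    AStaticMeshActor* Actor = World->SpawnActor<AStaticMeshActor>(
        Location, FRotator::ZeroRotator);
    if (!Actor) return;

    // Set mobility to Movable before assigning the mesh at runtime;
    // AStaticMeshActor defaults to Static, which rejects runtime changes.
    Actor->GetStaticMeshComponent()->SetMobility(EComponentMobility::Movable);
    Actor->GetStaticMeshComponent()->SetStaticMesh(Mesh);
}
```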