i’ve been rehashing tlou part 2 today. i know they used baked directional lightmaps for the world shading and a hacky mix of irradiance volume data for character model gi. it looks good. yo
reflections are a whole different beast tho. at high roughness one could use the irradiance probes, lower down use roughness-mipped parallax-corrected captures, or go full-on raytraced for flat-out mirrors. it’s a load of shader math either way, and a fair bit of data to support blending all of that tech. hmmhmm
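For what it’s worth, a toy sketch of that kind of roughness-based blend could look something like this (thresholds and names are made up, not from any particular engine):

```cpp
// Illustrative only: pick a reflection source per material roughness.
enum class ReflectionSource { RayTraced, ParallaxCubemap, IrradianceProbe };

ReflectionSource PickReflectionSource(float Roughness, bool bRayTracingAvailable)
{
    // Near-mirror surfaces: only an actual trace keeps contact and parallax correct.
    if (Roughness < 0.05f && bRayTracingAvailable)
        return ReflectionSource::RayTraced;

    // Glossy range: a parallax-corrected capture with roughness-indexed mips usually holds up.
    if (Roughness < 0.6f)
        return ReflectionSource::ParallaxCubemap;

    // Very rough surfaces: the reflection lobe is wide, so irradiance probes are enough.
    return ReflectionSource::IrradianceProbe;
}
```

In practice you’d cross-fade between the sources around the thresholds rather than hard-switch, which is where the extra supporting data comes in.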
I was actually working on a kind of ray-traced, parallax-corrected cubemapping system as an experiment recently. It’s still too early to show anything, and I’m not sure if it’ll ever get finished. But the idea was to trace against the cubemap using either a depth-map technique like POM, or SDFs.
Not really intended for fully dynamic real-time reflections, but to allow higher-quality parallax-corrected captured reflections for edge cases that Lumen struggles with, such as perfect mirrors in indirectly lit environments.
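For reference, the usual baseline before any depth-map or SDF trace is plain box-projected parallax correction against the capture’s proxy volume. A minimal C++-style sketch, assuming the shaded point sits inside the proxy box (and ignoring the divide-by-zero case when a ray component is exactly zero):

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

static Vec3 Sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 Add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 Mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static Vec3 Div(Vec3 a, Vec3 b) { return {a.x / b.x, a.y / b.y, a.z / b.z}; }

// Standard box-projected parallax correction: intersect the reflection ray with
// the proxy AABB the cubemap was captured in, then re-aim the lookup vector from
// the capture position toward that hit point.
Vec3 BoxProjectedLookupDir(Vec3 PixelPos, Vec3 ReflDir, Vec3 BoxMin, Vec3 BoxMax, Vec3 CapturePos)
{
    // Distance along ReflDir to each slab of the AABB (assumes PixelPos is inside the box).
    Vec3 ToMax = Div(Sub(BoxMax, PixelPos), ReflDir);
    Vec3 ToMin = Div(Sub(BoxMin, PixelPos), ReflDir);
    Vec3 Furthest = { ReflDir.x > 0.0f ? ToMax.x : ToMin.x,
                      ReflDir.y > 0.0f ? ToMax.y : ToMin.y,
                      ReflDir.z > 0.0f ? ToMax.z : ToMin.z };
    float HitDist = std::min({Furthest.x, Furthest.y, Furthest.z});

    // Point on the proxy box that the reflection ray hits.
    Vec3 HitPos = Add(PixelPos, Mul(ReflDir, HitDist));

    // The corrected cubemap lookup direction points from the capture position to the hit.
    return Sub(HitPos, CapturePos);
}
```

A POM-style depth trace or an SDF march would replace the AABB intersection with a hit against the actual captured geometry, which is what fixes the parallax for anything that isn’t box-shaped.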
Also have some ideas about using screen-space lighting and shadow techniques on unlit cubemaps to allow seemingly dynamic re-lighting of a static cubemap (rough sketch at the end of this post).
Although most of the time I suspect it’ll be better to just have multiple cube maps for different lighting.
But not sure how much effort to put into it if it’ll be obsolete before long.
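Rough sketch of the re-lighting idea from above, assuming (hypothetically) that the capture stores per-texel albedo, world-space normal, and position, i.e. a tiny G-buffer cubemap, instead of final lit color:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 Sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 MulC(Vec3 a, Vec3 b) { return {a.x * b.x, a.y * b.y, a.z * b.z}; }
static Vec3 Scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 Normalize(Vec3 a) { float l = std::sqrt(Dot(a, a)); return Scale(a, 1.0f / l); }

// Hypothetical relighting of a single cubemap texel at runtime: the capture holds
// surface attributes rather than final lit color, so dynamic lights can be applied.
Vec3 RelightTexel(Vec3 Albedo, Vec3 Normal, Vec3 WorldPos,
                  Vec3 LightPos, Vec3 LightColor, Vec3 Ambient)
{
    Vec3 L = Normalize(Sub(LightPos, WorldPos));
    float NdotL = std::max(Dot(Normal, L), 0.0f);

    // Lambert diffuse from the dynamic light plus a flat ambient term;
    // shadowing would come from a screen-space or distance-field trace on top.
    Vec3 Direct = MulC(Albedo, Scale(LightColor, NdotL));
    return { Direct.x + Albedo.x * Ambient.x,
             Direct.y + Albedo.y * Ambient.y,
             Direct.z + Albedo.z * Ambient.z };
}
```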
The same scene with thin geometry for the walls looks better but has some corner leaking, so I rebuilt the room from individual thick parts that should behave better, and Lumen now somehow does a worse job.
This is on a MacBook (M3 Pro), UE 5.4.3.
Normals are correctly oriented, no weird scaling or transforms, no holes…
you’re obviously missing surface cache data. it’s all pink. and you’ve got no normals, aka it’s a hollow distance field. what did you use as the base mesh to build it? does this mesh have a proper distance field? there are console commands and viewport visualization options to display that data.
the artefacting is temporal and probe occlusion, because the cache data does not exist. it’s juggling the light computation with probe data only.
Yeah, but it did compute properly when the meshes were just planes. Right now it’s just basic boxy “block” meshes to keep it as simple as possible. I have a suspicion this is a Mac-only problem. I’ll try to get the meshes onto a PC tomorrow and check it out there, because I don’t remember ever having this issue on PC.
There’s something really weird with the geometry - notice in the first view how normals are broken and some meshes are missing. It’s as if all your SDFs were generated as two-sided (which depends on the default material and the two-sided setting in the Static Mesh Editor), and we treat those as semi-transparent foliage. Try changing them to one-sided.
Also, if this is archviz or something similar, you should probably use hardware ray tracing.
That was the issue, however it couldn’t be fixed while in the editor. Not even on reimport would it recompute the SDFs after I made sure the material was one-sided.
Since I previously had simple planes as walls, I had set the default material to two-sided, but that material would also be applied by default on reimport of the meshes, which were now thick walls. I switched the two-sided option off on the material, but it didn’t do anything until I restarted.
I hope I’ll finally be able to in 5.5. As of now, even though the M3 Pro supports HWRT, I cannot use it in Unreal.
Anyway, it does look like importing the meshes while two-sided was enabled by default on the material was indeed causing the issue!
EDIT: I can see now that enabling two-sided SDF generation in the Static Mesh Editor can easily make those meshes hollow, and disabling it while in the editor fixes them again.
So I assume that option was simply enabled by default on import because the material had the two-sided option enabled? Or what else could cause it to be enabled on import?
@Krzysztof.N Just out of curiosity, is MegaLights expected to go into Beta for 5.5, or is it still a ways off from a serviceable implementation? I know you talked about potentially creating a MegaLights feedback thread when the time was right, and there were some possible bugs I wanted to share if there was a good place to do so.
you mean manylights? that is already in “beta”, or rather alpha stage, via cc. might as well throw the feedback in here for now. i did too. rather, a couple of us did. feedback is always good. get it better for beta stage. you know?!?
I’ll make a dedicated thread when it’s officially announced as at least experimental. For now, officially this feature doesn’t exist, even though quite a few people already use it and are happy with the results :).
As for the bugs and feedback, please write them here; that way we can look into them and maybe improve something for 5.5.
Two-sided SDF is generated when you import (or hit Apply in the Static Mesh Editor) and either you have “force two-sided SDF” enabled or the default materials in the static mesh are two-sided.
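In other words, roughly this decision (names are illustrative, not the actual engine source):

```cpp
#include <vector>

// Rough sketch of the rule described above: the mesh's SDF is built as
// two-sided if the per-mesh override is set or any default material slot
// assigned in the static mesh is two-sided.
bool ShouldBuildTwoSidedDistanceField(bool bForceTwoSidedSDF,
                                      const std::vector<bool>& bMaterialSlotTwoSided)
{
    if (bForceTwoSidedSDF)
        return true;

    for (bool bTwoSided : bMaterialSlotTwoSided)
        if (bTwoSided)
            return true;

    return false;
}
```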
wdym? i haven’t touched the source in a couple weeks. out of space. it’s MEGAlights now? ohh well…
maybe change it to SSRTMLS - OIRTINAWUJSTADFS aka screenspace raytraced multilight shadows - or if raytracing is not available we use just screentraces and distance field shadows.
Honestly I’m just excited the tech is moving into experimental. It’s always surreal talking about experimental stuff such as this with other devs, and I forget that a lot of the stuff we experiment with isn’t even live yet. I like MegaLights as a name, it’s descriptive and fun at the same time. The main thing I’m excited by is just that we now have a lighting solution that can coherently handle basically anything you can render to screen, and theoretically hundreds of scene lights at once.
Is it something similar to RTXDI, where you can render hundreds of local lights without a big performance hit?
In our archviz app, users can place lamps manually into the scene, and there are many cases with lots of overlapping local lights that cause a big performance hit (think of big, open office rooms with lots of closely packed ceiling lights). It would be great if this provided a solution for such cases. Of course such rooms are easy to fake by combining many lights into a single one, but not all users have the technical understanding for that, and it is not always possible in an archviz scene.
Longer version: it’s Epic’s version of RTXDI, tuned for UE-specific use cases. The wrinkles in MegaLights are that it’s designed to handle local lighting while directional lighting remains its own thing, it’s tuned more for hundreds of lights than thousands, and it can use many different tracing methods (screen, DF, HWRT, etc.) to scale across platforms.
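As a rough illustration of why the cost stays flat as lights are added: the RTXDI-family idea (greatly simplified here, not Epic’s actual code) is to importance-sample a small number of lights per pixel, trace shadow rays only for those, and weight by the inverse selection probability:

```cpp
#include <vector>

// Toy sketch of stochastic direct lighting in the spirit of RTXDI/MegaLights.
// Instead of shading every light, each pixel picks one light per sample with
// probability proportional to its estimated (unshadowed) contribution, so the
// per-pixel cost stays roughly constant as lights are added.
struct LightSample { int LightIndex; float InvPdf; };

LightSample SampleOneLight(const std::vector<float>& EstimatedContribution, float Xi)
{
    // Weights are assumed positive, e.g. intensity / squared distance.
    float Total = 0.0f;
    for (float W : EstimatedContribution) Total += W;

    // Walk the CDF of per-light weights; Xi is a uniform random number in [0, 1).
    float Target = Xi * Total, Accum = 0.0f;
    for (int i = 0; i < (int)EstimatedContribution.size(); ++i)
    {
        Accum += EstimatedContribution[i];
        if (Target <= Accum)
            return { i, Total / EstimatedContribution[i] }; // 1 / probability of picking light i
    }
    return { (int)EstimatedContribution.size() - 1, 1.0f };
}
```

The chosen light’s radiance gets multiplied by InvPdf, a single shadow ray is traced toward it, and temporal/spatial reuse plus denoising smooth out the noise over frames.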
What’s the connection between MegaLights and VSM? Can they be used together, with VSM for directional lights and MegaLights for the other lights?
Or VSM for Nanite-heavy scenes and MegaLights for non-Nanite use cases?
MegaLights is getting a lot of love according to the git commits, and VSMs were never really that great IMO due to their reliance on Nanite. So is MegaLights the new cool shadowing technique?
That seems to be the idea: VSM for directional/hero lights and MegaLights for supporting lights. I read a bit on GitHub about how VSM and MegaLights can now interact in some way, but I don’t know what I’m looking at, so I won’t speculate.