Thank you so much, this is really valuable information. I was checking the console commands right after I wrote the last message when I realized that you had already covered these options in the r.MegaLights.DefaultShadowMethod cvar. Thanks again. I will keep testing MegaLights, but atm I'm having a lot of fun.
Enabling MegaLights from the post-process volume while the Light Complexity viewmode is enabled crashes the editor (it works fine if you enable MegaLights first and then switch to the viewmode).
I’m curious how we should be interpreting the Light Complexity viewmode with MegaLights. It seems attenuation radius matters a lot less than light intensity: more intense lighting increases complexity, and complexity also increases with specular highlights. It also doesn’t seem to increase on backfaces or in shadowed regions.
The specular cost in particular is pretty interesting, as it leads to high complexity on “bowl shaped” surfaces even when they’re only affected by one light…
I guess what I would ask is: how well does the new complexity viewmode actually map to the performance cost of MegaLights? Are cases with lots of specular highlights or high-intensity lights something we need to be concerned about?
Also, not sure if it has been mentioned, but masked RT shadows: I hope these are planned for the future:
It does handle masked materials to some degree using screen traces, like decals. That works at close range, for hair cards and such, and the screen trace distance is adjustable. It does not change much for large-scale models though, since it’s screen space. World-space traces just hit geometry in the BVH or the mesh distance fields; they do not sample the textures (yet).
This may come at some point though, because foliage in the sun (or moon) looks great. But at night in the city it does not look too good with just polygon shadows. Well… yet. I’ll not post the test screenshot here; it’s not nice.
I feel like I saw a CVar somewhere that enabled calling the anyHit shader for MegaLights, but it was disabled by default due to the obvious cost increases. (maybe I’m thinking of RT shadows?)
Tangent: do you think it would ever be feasible to create a baking tool that turns alpha-tested geo into a pure geometric representation? E.g. feed it a plane with a grass alpha texture and it would output a mesh that captures just the grass. Given the significant performance hits I’m seeing from both Nanite alpha-test and the any-hit shader in RT, I feel like it would be a stellar performance-optimization tool.
A rabbit hole? Hmm… well… a BVH, programmatically, is a tree structure of bounding boxes. You trace through that, and at some point you hit a polygon and its data, or you hit distance-field volume data.
On a polygon the UV coordinates could be present and sampled, and you’d need a texture ID (memory address) to figure out which alpha texture is on it. Then you could use those coordinates to sample it at the hit point. That’s how I imagine it; not accurately developed, just pseudo-coding. Getting that level of direct-to-the-metal access isn’t HLSL though.
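The hit-point lookup just described (interpolate the triangle’s UVs with the hit’s barycentric weights, sample the mask, accept or ignore the hit) can be sketched in plain C++. This is a conceptual CPU-side illustration only; every name here is invented for the sketch and nothing comes from Unreal’s or any RT API’s actual interfaces:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

struct UV { float u, v; };

// Interpolate per-vertex UVs with barycentric weights (w0 + w1 + w2 == 1).
UV InterpolateUV(const UV& a, const UV& b, const UV& c,
                 float w0, float w1, float w2)
{
    return { w0 * a.u + w1 * b.u + w2 * c.u,
             w0 * a.v + w1 * b.v + w2 * c.v };
}

// Point-sample an 8-bit alpha mask (no filtering, clamped addressing).
uint8_t SampleAlpha(const std::vector<uint8_t>& mask, int width, int height, UV uv)
{
    int x = std::min(std::max(int(uv.u * width),  0), width  - 1);
    int y = std::min(std::max(int(uv.v * height), 0), height - 1);
    return mask[y * width + x];
}

// The any-hit decision: keep the hit only where the mask is opaque enough.
bool AcceptHit(const std::vector<uint8_t>& mask, int w, int h,
               const UV triUV[3], float w0, float w1, float w2,
               uint8_t threshold = 128)
{
    UV uv = InterpolateUV(triUV[0], triUV[1], triUV[2], w0, w1, w2);
    return SampleAlpha(mask, w, h, uv) >= threshold;
}
```

The expensive part in a real ray tracer isn’t this arithmetic; it’s that every candidate hit triggers a texture fetch, which is exactly why any-hit alpha testing costs so much more than opaque traversal.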
Converting a texture into geometry seems like a mundane task: just create a grid of a specified density and don’t generate the polygons below the alpha threshold, in the masked-material case. Then a little optimization pass to merge and smooth contiguous faces and compact the data. Hmm…
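That grid-baking idea is simple enough to sketch: walk a cell grid over the mask and emit a quad (as two triangles) only for cells whose sampled alpha passes the threshold. A minimal, deliberately unoptimized illustration with invented names; a real baker would add the merging/compaction pass mentioned above:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

struct Vert { float x, y; };

struct Mesh {
    std::vector<Vert> verts;
    std::vector<uint32_t> indices; // triangle list
};

// Build a planar mesh covering only the grid cells whose alpha-mask sample
// (taken at the cell center) is at or above the threshold. One quad
// (two triangles, 4 verts) per surviving cell; vertices are deliberately
// NOT shared between cells, which is exactly the redundancy a real tool
// would merge away afterwards.
Mesh BakeMaskToGrid(const std::vector<uint8_t>& mask, int maskW, int maskH,
                    int cellsX, int cellsY, uint8_t threshold = 128)
{
    Mesh mesh;
    for (int cy = 0; cy < cellsY; ++cy) {
        for (int cx = 0; cx < cellsX; ++cx) {
            int px = std::min(int((cx + 0.5f) * maskW / cellsX), maskW - 1);
            int py = std::min(int((cy + 0.5f) * maskH / cellsY), maskH - 1);
            if (mask[py * maskW + px] < threshold)
                continue; // transparent cell: emit nothing

            float x0 = float(cx) / cellsX, x1 = float(cx + 1) / cellsX;
            float y0 = float(cy) / cellsY, y1 = float(cy + 1) / cellsY;
            uint32_t base = uint32_t(mesh.verts.size());
            mesh.verts.push_back({x0, y0});
            mesh.verts.push_back({x1, y0});
            mesh.verts.push_back({x1, y1});
            mesh.verts.push_back({x0, y1});
            uint32_t quad[6] = {base, base + 1, base + 2, base, base + 2, base + 3};
            mesh.indices.insert(mesh.indices.end(), quad, quad + 6);
        }
    }
    return mesh;
}
```

The hard part in practice isn’t this step; it’s choosing a grid density that keeps silhouettes acceptable without exploding the triangle count, and cleaning up the output so the BVH stays shallow.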
I’ve seen a couple of tools that will do this, but they only work on flat, axis-aligned quads.
Hmm… a good modelling pipeline is usually all quads, so you’d have to do it at the authoring stage: bake the shadow polygons in Maya, then deform and split them to fit the triangle topology. Quads are a slightly cheaper interchange format for sure: 4 points, not 6. But the PC rasterizes triangles, and that has its kinks. Hmm…
(And that’s me dabbling on the NDS, which actually renders quads scanline by scanline. Crunchy lil machine. :))
Anyway… I think we’re over the threshold. Is this feedback, or random ideas? Not that it hurts…
I think we’ve crossed into random-ideas territory, or at least we should start putting the conversation in hidden blocks so as not to derail things (we save that for the Lumen thread).
That said, the core of this conversation is very relevant: alpha-tested geometry is no longer a performance win, either for ray tracing or for visibility-buffer renderers like Nanite. If someone on Epic’s tools team could make a conversion tool like the one they have for Nanite displacement, it would make it a lot easier to use things like MegaLights without either content or performance issues.
Major lighting bug @Krzysztof.N: In the 5.5 preview build, significant screen-space artifacts appear around POM-enabled materials (same platform as the Lumen thread):
They get progressively worse the closer you get to the POM surface, eventually enveloping the whole screen (western pack assets in case you want to replicate):
POM seems to be working mostly fine for me. The ray-traced screen traces can actually deal with pixel depth offset (like a better version of contact shadows, which could also do the same) and look great as long as the light source is on screen…
Once the light source is off screen, the usual self shadowing issues arise where rays miss. (or an occluder can cause missed rays too)
Is that perhaps the artifact you’re experiencing?
It would be nice if we could enable an object to cast only screen-traced shadows with MegaLights, just like you can with normal screen-space contact shadows.
That could be useful for POM/PDO and for small objects like animated foliage that we may not want in the BVH but might still want these screen-space shadows on.
Greetings!
I’m interested in the supported platforms: I presume it will work somewhat reliably on the recommended Linux (Ubuntu). What about Macs?
Thanks for the hard work. I’m fiddling with it on Windows currently; the future definitely looks bright.
The only known limitation is that MegaLights doesn’t support strand based hair.
Seems to work on my end. For some reason MetaHuman has missing arms in RT BVH, so no shadows from arms, but otherwise works fine. Any specific repro steps?
It didn’t crash on my end, so I guess this is either already fixed, or it’s the known bug where MegaLights crashes when Lumen is disabled.
It doesn’t work correctly, so I wouldn’t read much into it.
In current testing, once there are more than 4096 lights in the view, lights start to drop out. Is there any parameter that can extend this upper limit? This is important for night scenes of very large cities. If not, would you consider raising the supported light count in the future?
(The above text is from a translator.)
Yes, at the moment there’s a hardcoded limit of 4095 lights in camera. There’s no parameter to extend it, but it’s a relatively simple change in code, so we will increase it for 5.6. I just didn’t expect that anyone would reach that limit :).
Now I’m curious how many would you realistically need? Specifically how many in camera? And how many in scene?
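Nothing in the thread confirms why the limit is exactly 4095, but that number is what a 12-bit light index gives you (2^12 − 1, with one value typically reserved as an “invalid” sentinel), which would also explain why raising it is “a relatively simple change”: widen the packed index field. A purely speculative illustration of that kind of packing; the field layout here is invented and is not MegaLights’ actual format:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical packing: a 12-bit light index alongside an 8-bit payload
// (e.g. a sample weight) in one 32-bit word, with 0xFFF reserved as an
// "invalid light" sentinel. 12 bits -> usable indices 0..4094, i.e. 4095
// distinct lights once the sentinel is set aside.
constexpr uint32_t kLightIndexBits = 12;
constexpr uint32_t kLightIndexMask = (1u << kLightIndexBits) - 1; // 0xFFF
constexpr uint32_t kInvalidLight   = kLightIndexMask;             // sentinel

uint32_t PackSample(uint32_t lightIndex, uint32_t payload8)
{
    return (payload8 << kLightIndexBits) | (lightIndex & kLightIndexMask);
}

uint32_t UnpackLightIndex(uint32_t packed) { return packed & kLightIndexMask; }
uint32_t UnpackPayload(uint32_t packed)    { return packed >> kLightIndexBits; }
```

Under that guess, bumping `kLightIndexBits` (and the GPU-side equivalent) is all the “simple change” would amount to, at the cost of fewer bits for whatever shares the word.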
Any thoughts on the possibility of allowing an object to cast screen-traced shadows only, as an option? I was really impressed by how they compared to the old-fashioned contact shadows, and they may be able to fill a similar role.
8.3k lights in camera, 15.8k in scene. We’ve had to use tricks for the lights, which cost a lot of time; MegaLights is very important for this.
Oh, is this still happening? I reported it a long time ago, and it looks like an important thing.
Stunning work, btw @Krzysztof.N ! Thank you very much for making this possible.
PS: MegaLights’ num samples per pixel doesn’t allow a value of 8? I don’t notice any visual or performance difference between 4 and 8.
Also noticed some silhouette artifacts in certain unknown situations, depending on the angle (using the new, unfinished Megascans building):
Fine:
Artifact (just a slight angle difference). The effect can be even much more noticeable/longer than in this screenshot:
I think it’s related to the Nanite tessellation behind it. It also happens at some screen edges.
MegaLights docs are now online.
No, at the moment it’s limited to a few values which are listed when you enter that cvar without any value.
That sounds like your RT representation mismatches the raster one, and when screen-space traces fail you see the shadow from that RT representation. You either need to adjust the Nanite fallback mesh, or it’s due to displacement, which isn’t supported by RT.
I was so excited about this for an upcoming project at our studio, then discovered it doesn’t support strand-based grooms and hair. I hope they get support in the future.
I’ve not worked with grooms yet. Are the strands full cylinder geo, or screen-aligned/“billboarded” strips with a gradient, i.e. some sort of “line” geo?
Cylinders are a lot of geometry to cut into small pieces for the BVH; not going to be performant. Screen-aligned I imagine could happen at some point. Hair cards, while noted to avoid masked materials, are the way for now. Hmm…