Screen Space Global Illumination (SSGI)

I haven’t heard any info on this. What’s the status, is this coming out of development and making the next release?

Timelapse

[video]https://twitter.com/i/status/1102033507386314752[/video]

It’s currently available in the dev-rendering branch and will be in 4.23 per Tim. It’s been pretty fun to play around with. It takes the scene color, so you get GI from pretty much every material, but it’s still a bit expensive and you need a fairly high number of rays, otherwise TAA doesn’t do a good enough job at denoising.

The first one is with a baked skylight and a dynamic sun

Wow, that’s actually pretty cool. I wonder why there isn’t more buzz around this…

Definitely a nice addition. Just be aware that it comes with all the limitations and artifacts you’d expect from a screen space solution.

Because it only works well for exterior scenes and it’s expensive. It seems pretty situational in its usefulness.

It works with lightmaps, and it should be a decent way to give scenes some dynamic bounce.
Apparently it also replaces SSAO, so it’s a step in the right direction and reduces the horrible dark-corners look.

Cannot wait to get my hands on it for testing.

So this is really cool. I’ve been wondering whether this would be possible for a long time, since it’s a seemingly natural analogue to screen space reflections (albeit a worst case with rays traveling in every direction). I started playing around with this, and the results are indeed pretty similar to static indirect light in cases where everything is onscreen. I’ve got a bunch of thoughts, but since I’ve made no effort to look into how this particular implementation works, they may turn out to be stupid thoughts:

-Is there (or could there be?) any pre-filter/blur of the buffer that’s being sampled from, to reduce noise? It occurs to me that this might actually work with SSR as well, at least for surfaces

-Is it possible to make this effect only apply to dynamic lights, or to be toggled on a per-light basis? From my limited test, it seems like the SSGI only falls back to baked lighting for rays that “miss” the render target*, but I haven’t found a way to make a static light use baked lighting exclusively. This would be super useful in situations where most of the lights are baked, since SSGI still doesn’t perfectly match the quality of baked lighting.

-In the same vein as how SSR and reflection captures are mixed, it would be really neat if this could fall back to localized diffuse captures in situations where baked lighting isn’t practical.

I’m hoping that this will at least reduce the number of games where even extremely bright flashlights inexplicably fail to produce any bounce lighting.

*(edit) Actually, apparently it just applies the SSGI on top of the static indirect light. I thought this behavior was “incorrect,” but as long as the indirect occlusion component is applied to the baked lighting before the additive bounce light, the overall lighting should be (roughly) conserved. Being able to apply the effect more selectively still seems like it would be extremely valuable.

I also notice that SSGI respects the “indirect lighting intensity” post-process value, meaning it can be used as a replacement for SSAO without any bounce lighting by setting the value to 0.
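For anyone who wants to set that up from code rather than in the editor, here’s a minimal sketch. It just drives the standard IndirectLightingIntensity override on a post-process volume to 0; nothing in it is specific to SSGI, and the function name is my own.

```cpp
// Sketch: zero out "Indirect Lighting Intensity" on a post-process volume
// so SSGI contributes only its occlusion term, not the bounce light.
#include "Engine/PostProcessVolume.h"

void DisableBouncedIndirect(APostProcessVolume* Volume)
{
	if (!Volume)
	{
		return;
	}

	// FPostProcessSettings uses explicit per-property override flags.
	Volume->Settings.bOverride_IndirectLightingIntensity = true;
	Volume->Settings.IndirectLightingIntensity = 0.0f; // keeps occlusion, removes bounce
}
```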

Seems like it and LPV would be a good combination.

I think this would work well in terms of visual fidelity, but I don’t think it would be the right combination in terms of performance.

I believe other techniques like image-based probes, volumetric lightmaps, or even raytracing would get a significant boost if they’re only applied to pixels that aren’t already accounted for by the SSGI, since their cost mostly scales with the actual number of pixels rendered. As far as I know, LPV wouldn’t be able to benefit from this, since its cost is dependent on the size of the volume, which would need to be computed in full regardless of how many pixels it affects.
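To make the idea concrete, here’s a purely illustrative sketch (all of these names are made up; this is not engine code): if the screen-space trace reports how much of a pixel it resolved, the expensive fallback only has to run for the remainder.

```cpp
// Illustrative only: blend screen-space GI with a fallback (probes,
// volumetric lightmap, ray tracing, ...) and skip the fallback entirely
// when the screen-space trace fully resolved the pixel.
#include "Math/Color.h"
#include "Templates/Function.h"

struct FIndirectSample
{
	FLinearColor Color;
	float Confidence = 0.0f; // 1 = rays resolved on screen, 0 = complete miss
};

FLinearColor ResolveIndirect(const FIndirectSample& ScreenSpace,
                             TFunctionRef<FLinearColor()> EvaluateFallback)
{
	// Fully resolved in screen space: the fallback never runs, which is
	// where the hoped-for savings would come from.
	if (ScreenSpace.Confidence >= 1.0f)
	{
		return ScreenSpace.Color;
	}

	const FLinearColor Fallback = EvaluateFallback();
	return ScreenSpace.Color * ScreenSpace.Confidence
	     + Fallback * (1.0f - ScreenSpace.Confidence);
}
```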

I tried it out. I threw in some emissive Niagara particles and *it just works* (RTX didn’t…). You can tell that it’s limited to screen space, but it’s nice to finally have a fast, realtime GI solution other than RTGI. I hope they add a denoiser similar to RTGI though; low-light GI is extremely grainy even at Quality 4.


You can increase the number of rays and samples like with SSR, if you don’t care about performance, in ScreenSpaceDiffuseIndirect.usf. Increasing the rays should improve the accuracy of the GI (around small features), and more samples will reduce the noise.

I really hope they include this on the next release.

According to this post, SSGI should officially appear in 4.24.
I was hoping for 4.23, but it’s still very nice that they’re working on it.

It is in 4.23, it just isn’t publicly listed. Still the same ssgi.quality cvar.

Although I did notice that there’s a regression in Preview 1 that completely breaks AO (in screen space, at least) when SSGI is active. I found a (*very* easy) fix and submitted a pull request at https://github.com/EpicGames/UnrealEngine/pull/6032/files

Does anyone know if this feature made it into 4.23?

You can test it by typing “ssgi.quality 4” (or set the quality lower if you wish, 0 turns it off) in the console. So it’s not officially released (that’s planned for 4.24 after more optimization as far as I know), but it is here in 4.23.
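If you’d rather toggle it from code than the console, something like the sketch below should work. I’m only going off the cvar name quoted in this thread, so treat the string (and the 0–4 range) as an assumption that may change between engine versions.

```cpp
// Sketch: set the SSGI quality cvar from game code instead of the console.
#include "HAL/IConsoleManager.h"

void SetSSGIQuality(int32 Quality) // 0 = off, 4 = highest, per the posts above
{
	if (IConsoleVariable* CVar =
			IConsoleManager::Get().FindConsoleVariable(TEXT("ssgi.quality")))
	{
		CVar->Set(Quality); // uses the default ECVF_SetByCode priority
	}
}
```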

Sorry for asking, but what does it actually do?

Does it work with forward shading? In VR too?

It does work in 4.23 by enabling it through the CVars. However, it does not work very well with raytracing (surprise). Once SSGI is enabled, it disables your ray-traced ambient occlusion and replaces it with its own screen-space AO, and there doesn’t seem to be a parameter to tweak that, since none of the regular AO post-process settings change anything. The SSGI looks amazing, but it’s just not usable yet, much like raytracing. In a couple of releases, combining these features will be very powerful, I hope.