Global Illumination alternatives

Hey, yeah, the reason I said I’d been asked by the moderators to ignore him was that I had been given an infraction from the admins after a complaint (wonder who that was). The reason it got me annoyed was that he wrote off the whole system like he’s the foremost realtime engineer in the world, when Morgan, who came up with this and kindly shared the code, really is one of the best realtime researchers in the world. If I just put up with him writing it off, the chances of people asking for this to be included would be reduced.

Yes, it is screen space, but it’s a far bigger step forward in terms of going beyond what screen space normally is. There’s no reason why LPV couldn’t be tied to this for coarse off-screen radiance, mixed correctly, like you said, with well-placed reflection probes. You could even include importance-based voxel screen sampling/discarding like G. Papaioannou does in his screen-space voxel-based GI system, Progressive Screen-space Multi-channel Surface Voxelization, from GPU Pro 4 (which also has OpenGL code).

The real big reason it annoyed me was that half of what he was talking about was plundered from posts and research I’ve been doing into sparse voxel DAGs mixed with 4D visibility field maps for secondary ray acceleration. I pointed the guy to my code examples (the NexusGL Engine) to point out that if you want to bash other people’s research, then at least have examples to show you know what you’re talking about. Such a surprise: there were no examples.

Just letting people make stuff up that kind of sounds right if you don’t know what you’re talking about doesn’t help anyone.

I’m running the demo on a GTX 680 4GB and getting around 10-15 fps with Deep G-Buffer Radiosity in Performance mode at 2560x1538 (8 ms for radiosity, 6 ms for the filter). The quality of the rendering speaks for itself; it’s quite good compared to pre-rendered light probes. It doesn’t support PBR, so imagine this thing optimized + PBR!

HairWorks would be a cool project, but if I had to pick, I’d pick rendering GI any day :slight_smile: Someone implement this!

Lots of interesting stuff in this thread.

At a personal level, most of us graphics programmers at Epic would love nothing more than to work on dynamic GI; however, there are a lot of other tasks to be done that affect various things shipping. It’s difficult to implement a good feature for UE4: it has to be much more robust, cross-platform and performant than what you might do for a tech demo or a single game where you know exactly how it will be used. Now I’m just making excuses =)

We did get a chance recently to add a major dynamic lighting feature that will be in 4.3:

It provides medium-scale ambient occlusion for the skylight, in a way that supports dynamic scene changes like walls being broken down or constructed and doors being opened (all things that happen regularly in Fortnite). It’s computed in world space, so no screen-space artifacts!
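For anyone curious what world-space AO from distance fields means in practice, the general idea is roughly something like the toy sketch below. This is not our actual shader code; `SceneDistance` here is an analytic stand-in for sampling the per-mesh distance field volume textures, and the constants are arbitrary:

```cpp
#include <algorithm>
#include <cmath>

// Toy stand-in for the scene's distance field: a ground plane at z = 0
// and a unit sphere, combined with min() the way real SDFs compose.
float SceneDistance(const float p[3])
{
    float plane = p[2];
    float dx = p[0] - 2.0f, dy = p[1], dz = p[2] - 1.0f;
    float sphere = std::sqrt(dx * dx + dy * dy + dz * dz) - 1.0f;
    return std::min(plane, sphere);
}

// Rough medium-scale AO: march outward along the normal; wherever the
// distance to the nearest surface is smaller than the distance marched,
// geometry is looming over the point, so accumulate occlusion.
float DistanceFieldAO(const float pos[3], const float normal[3])
{
    float occlusion = 0.0f;
    float t = 0.1f; // start slightly off the surface to avoid self-hits
    for (int step = 0; step < 5; ++step)
    {
        const float sample[3] = { pos[0] + normal[0] * t,
                                  pos[1] + normal[1] * t,
                                  pos[2] + normal[2] * t };
        occlusion += std::max(0.0f, t - SceneDistance(sample)) / t;
        t *= 2.0f; // grow the step: medium-scale, not just contact shadows
    }
    return 1.0f - std::min(1.0f, occlusion * 0.25f);
}
```

Because everything queried is a world-space distance field, the result is stable under camera motion and survives objects entering and leaving the screen, which is exactly what screen-space AO can’t do.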

I have a lot of ideas for how to go from here to dynamic GI in UE4 but I’ll keep them to myself for now.

Here’s a new paper on “Cascaded Voxel Cone Tracing”. I’m really impressed with their dynamic GI! ( https://www.youtube.com/watch?v=9bnfz3XjUxQ )

Link to the paper: http://fumufumu.q-games.com/archives/Cascaded_Voxel_Cone_Tracing_final.pdf
http://fumufumu.q-games.com/archives/2014_09.php#000934

My god, those characters are creepy. The tech in the paper is basically like the Elemental reveal tech, but using cascades more efficiently; if that runs on PS4 then it’s quite interesting.

Honestly, it doesn’t look impressive. What is shown in the trailer is very simple: vast empty spaces with some movable objects.

SVOGI was dropped because it couldn’t work with large (>2 km²) areas with a high density of objects.

If we are going to use cascading, then adding cascades to LPV + some screen-space GI could yield similar results much more efficiently.

Hey,

as you might know, we (Yager) are currently working on Dead Island 2, and you might have seen the demo we showed at Gamescom and PAX this year. We feature large open environments and every building is explorable… so basically almost no room for tricks, since everything is somehow interactive and reachable by the player. I am not a tech guy… so a lot of the stuff people are talking about here… I have no clue :smiley: But I am definitely more techy than the regular artist and am really interested in this stuff. So I just want to share some things :wink:

Regarding the different lighting solutions, I have a couple of questions and observations (warning, it’s all mixed up in the following wall of text :D):

First of all… we tried Lightmass. Sadly, it’s not really manageable to build something this huge with this system. The workflow is one thing, but we ran into a lot of other limitations: the LA map (where one very small part of it was the shown demo) has a very dense environment with a lot of objects, so it’s quite hard to compare it to Sponza or the like^^ It had a memory consumption of around 1.5 GB of lightmap/shadowmap data alone for a not-yet-finished level, plus around 400 MB of reflection capture data (and those can’t be streamed right now, and 341 is still not enough for a level this large)… I just wanted to mention it in regards to the memory consumption of cascaded voxel cone tracing :stuck_out_tongue: So we decided against it and tried to go for a dynamic setup. (We also had tons of other issues with Lightmass, but that would be too much detail for now^^)

We went fully dynamic, lit just via a directional light and an ambient cubemap, which was obviously not that nice since it lacked a looot of fidelity in the shadows (however, the level designers and environment artists were the happiest guys in the world because they could just work and everything was “what you see is what you get”… they loved it). We added the LPVs on top of it as soon as they were available.

When you guys finally introduced the movable skylight with distance field AO… I thought, hell yeah!!! That’s it! :smiley: The documentation for it says: “The cost of Distance Field AO is primarily GPU time and video memory. In a fairly large Fortnite level, it costs 4.5ms on a 7970 at 1080p resolution. For reference, SSAO costs .6ms with this setup. ~150mb of distance field volume textures were used.”

So I enabled the Distance Field AO in our LA level in a 4.3 test branch and measured 16 ms extra just for the Distance Field AO. And we still don’t have reflections besides the screen-space ones. The ambient cubemap does provide reflections, but then its IBL adds to the one from the skylight… and we only get realtime GI from the sun (with tons of LPV artifacts). I have my editor window at 720p and a GTX 670 at the office.

You said: “I have a lot of ideas for how to go from here to dynamic GI in UE4 but I’ll keep them to myself for now”. So I wonder… if DFAO is already that expensive just for the effect it is providing right now (and it’s not even perfect yet in terms of artifacts and other issues)… how will this work? I am asking because, for example, Lightmass or Enlighten worked like a charm. But only in small or mid-size environments. As soon as you go large, it all falls apart. I have the same feeling with DFAO performance-wise and, sadly… with voxel cone tracing as well. However, the newly provided paper about cascaded voxel cone tracing and the proof of a nicely running PS4 game sounds quite promising to a noob like me.^^

What’s your opinion on realtime GI in really big levels?

Also… making a visual demo is one thing, but making something that runs in a huge game with all the other systems that eat up performance is something completely different.
I am really interested in this because I don’t only care about the visuals; having a fully dynamic environment to work with also has tons of workflow benefits.

Also…what do you think about SSDO for fully dynamic environments?

I think it looks quite superior to standard SSAO in this case. You guys have improved SSAO quite a lot with the recent updates, but I still think that SSDO would give you better results in a fully dynamic setup.

Please don’t get me wrong if this sounds somehow disrespectful in any way; that is really not my intention! You guys have done some amazing work with the DFAO, and I use it for some private projects at home and really like it (and you can enable reflections by using r.diffuseforcapture, which is quite nice, but I think they don’t get shadowed by the DFAO… that would be neat^^), but from a production standpoint, it’s not a feature you want to use right now, in my opinion.

Thank you guys for all the insights in this thread, and have a good time.

Cheers! :slight_smile:

It does run on the PS4, which is a fairly weak GPU compared to what’s in my PC (currently a GTX 680 4GB). I wonder how it scales on superior hardware?

Glass is half empty? :wink:

You have to separate the art style/content from the GI. I was impressed with the GI because, given how little is in the scene, it rendered beautifully; generally a lack of content in a scene means you’ll have to make up for it with a GI system or it will look bad, and some shots look like a Pixar movie (the orange robot). So it can run on the PS4 with a simple game. I guess the real question is: will Cascaded Voxel Cone Tracing scale to a large open-world game (e.g. GTA 5) on a superior PC setup? And is there room to optimize it further?

For what it’s worth, I don’t care what dynamic GI system Epic implements, but it should look as good as SVOGI/Cascaded Voxel Cone Tracing. UE4 definitely needs some dynamic GI solution; the sooner the better.

These soft shadows are insane.

Livenda is doing some really nice stuff in Unity; not sure how it works with larger areas though: https://www.youtube.com/watch?v=hqp7kHPVr58&feature=youtu.be&app=desktop

Hi Daedalus,

I just checked out the Dead Island 2 video from Gamescom that you mentioned. It looks huge. However, I see what you mean about the lighting being flat in shadowed areas. I wasn’t able to figure out whether you guys require dynamic time of day.

Agreed, lightmaps don’t scale up to large levels.

These can both be solved, but yeah the memory is going to be heavy.

Without seeing how it was set up, it’s hard for me to know why it was that expensive (and on what video card), but I would guess the grass causes a lot of problems for it, constantly moving and preventing reuse of last frame’s results. Costs depend a lot on object density; if there are lots of small objects, it’s not going to go well.

To be honest, I’m not surprised DFAO did not work for you; we have yet to get good results with it outside of Fortnite and test levels that we have constructed. The reason is that it requires a kind of modular building, where most meshes are roughly the same size. In Fortnite, because you can build and knock things down, this was already the case. We’re looking for ways to improve this, but it’s very hard to make a general-purpose yet dynamic method that doesn’t eat too much GPU time and looks great.

Definitely don’t use the ambient cubemap for anything other than a subtle ambient term; it is additive with the other forms of lighting.

Completely agree, and I think it is something we will tackle at some point.

It’s a really difficult problem! The thing with GI is that computing incoherent light transport requires a lot of computational power. You’re going to have to pay for that somewhere: either on the developer machines (static lighting) or on the game client (dynamic lighting). A high-quality dynamic method like voxel tracing is going to eat your GPU for breakfast. Low-quality methods like LPV can be done, but then it leaks everywhere, which is a big problem with your seamless interiors and thin walls.

And sadly, even if we did have a great dynamic GI method, it would only solve some subset of the engine’s total needs. This distance field AO stuff is evidence of that. But developing and optimizing these methods takes a significant amount of time; SVOGI took roughly six man-months just to reach an early stage.

If I were developing technology for exactly what Dead Island 2 needs (huge world, seamless interiors with thin walls, dynamic time of day) here are some options that I would consider most promising.

A) Try to get a good distance field representation of the scene working. Use it for AO and distant shadowing of the sun to reduce CSM cost (this is already implemented through ray tracing the distance fields and will be in 4.5, but it’s not optimized). DFAO handles thin walls fine. By using a sky cubemap that has bounce lighting color in the bottom + good quality AO, you can get something that looks pretty good in those shadowed areas and really brings back the depth. None of this addresses specular though. I would try one reflection capture per building interior and improve that system until it can support levels of your scale. To implement time of day, you lerp the cubemap used by the skylight + a few versions of the captured reflections.
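Just to make that time-of-day lerp concrete: conceptually it’s nothing more than blending between a few keyframed captures. A rough sketch (the `SkyCapture` layout and names are invented for illustration; the engine would do this per texel on the GPU, and `keys` is assumed sorted by hour):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical capture: one sky/reflection state per keyframed time of day.
struct SkyCapture
{
    float hour;                 // time of day this capture represents
    std::vector<float> texels;  // flattened RGB texels of the cubemap
};

// Blend the two captures bracketing 'hour' into 'out'.
void LerpSkyCaptures(const std::vector<SkyCapture>& keys, float hour,
                     std::vector<float>& out)
{
    std::size_t i = 0;
    while (i + 1 < keys.size() && keys[i + 1].hour < hour)
        ++i;
    const SkyCapture& a = keys[i];
    const SkyCapture& b = keys[(i + 1 < keys.size()) ? i + 1 : i];

    float t = (b.hour > a.hour) ? (hour - a.hour) / (b.hour - a.hour) : 0.0f;
    t = std::min(1.0f, std::max(0.0f, t)); // clamp outside the keyframe range

    out.resize(a.texels.size());
    for (std::size_t j = 0; j < out.size(); ++j)
        out[j] = a.texels[j] * (1.0f - t) + b.texels[j] * t;
}
```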

This whole method is probably medium to high risk because DFAO is unproven on consoles.

B) Use precomputed diffuse probes. These can be much cheaper in storage than the reflection captures because they only need 27 floats. Lots of games with huge worlds and time of day have done this - Assassin’s Creed, for example. You compute them at multiple times of day and lerp. You sparsely capture cubemaps (reflection captures) and use them for local reflections. SSAO provides local shadowing, and maybe you can bake in some per-vertex shadowing if you can afford it. DFAO could be used to shadow the diffuse probes, but it’s kind of overkill; it will cost too much for what it gives, IMO.
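For reference, the 27 floats are just order-3 spherical harmonics: 9 coefficients per color channel. A sketch of what evaluating such a probe looks like (standard real SH basis up to band 2; the struct and function names are mine, not engine code):

```cpp
#include <cmath>

// One diffuse probe: 9 SH coefficients x 3 channels = the 27 floats.
struct DiffuseProbe
{
    float sh[3][9]; // [channel][coefficient], bands 0..2
};

// Evaluate one channel of the probe in unit direction (x, y, z).
float EvalSH9(const float c[9], float x, float y, float z)
{
    return c[0] * 0.282095f
         + c[1] * 0.488603f * y
         + c[2] * 0.488603f * z
         + c[3] * 0.488603f * x
         + c[4] * 1.092548f * x * y
         + c[5] * 1.092548f * y * z
         + c[6] * 0.315392f * (3.0f * z * z - 1.0f)
         + c[7] * 1.092548f * x * z
         + c[8] * 0.546274f * (x * x - y * y);
}

// RGB lookup for a surface normal. Time of day is equally trivial: you just
// lerp the 27 floats between two captured probes before (or instead of) this.
void EvalProbe(const DiffuseProbe& probe, const float n[3], float outRGB[3])
{
    for (int ch = 0; ch < 3; ++ch)
        outRGB[ch] = std::fmax(0.0f, EvalSH9(probe.sh[ch], n[0], n[1], n[2]));
}
```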

This method is low to medium risk, but has lower quality in my mind.

C) Another method for local specular is to have a simplified version of the scene which you can quickly render into a single dual-paraboloid map at the player’s position, and use this for all reflections. This is what GTA did, I believe. It can be a big pain to create and maintain the simplified version of the scene, but it works great for preventing you from ever seeing the sky in reflections when you are indoors.
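The dual-paraboloid part boils down to a very cheap mapping from a reflection direction to one of two ordinary 2D textures. A sketch of the standard parameterization (the function name is hypothetical):

```cpp
#include <cmath>

// Map a unit reflection vector to (u, v) plus a map index. The front
// hemisphere (z >= 0) lands in map 0, the back hemisphere in map 1; both
// maps are re-rendered from the player's position as it moves.
void DirToDualParaboloidUV(float x, float y, float z,
                           float& u, float& v, int& mapIndex)
{
    mapIndex = (z >= 0.0f) ? 0 : 1;
    const float denom = 1.0f + std::fabs(z); // paraboloid projection term
    u = x / denom * 0.5f + 0.5f;             // remap [-1, 1] -> [0, 1]
    v = y / denom * 0.5f + 0.5f;
}
```

The appeal for this option is that keeping both halves up to date only costs two renders of the low-poly proxy scene per frame, regardless of how many surfaces sample the result.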

The common theme in all of these is to take advantage of the fact that the geometry is mostly static, so you can precompute the shadowing part of the light transfer equations. Like I said, for dynamic GI you’re going to have to pay the processing cost somewhere.

You miss the point. The emptier the scene, the fewer the voxels; the fewer the voxels, the less memory is consumed (and the less data must be transferred).

Yes, less content means the GI has to do a better job of working out proper lighting. But some solutions will suddenly gain performance on emptier scenes.

Anyway, in the past I wrote that probably the best compromise between dynamic and static GI would be some kind of radiance transfer solution.
Like this:

Or this:
http://www.ppsloan.org/publications/drt.pdf (I haven’t had a good read of it yet, but I think it’s an extension of the previous paper).

MRT doesn’t use probes to store SH. Instead it uses simplified geometry (I believe Enlighten does something similar) to store the precomputed data.
Of course, from an art point of view it introduces the additional step of creating geometry cages for existing objects, but:

  1. You can generate them automatically. They don’t have to be terribly accurate. Or you can use Simplygon ;).
  2. In most cases we already have simplified versions of meshes for LODs.

Of course this technique has limitations, like only static geometry contributing to indirect lighting. But that’s no different from, for example, lightmaps. And most games consist of static geometry but want fully dynamic lighting for some reason.
It’s not very accurate, since we don’t store any image data, just some equations. But in reality, indirect lighting is not very accurate anyway, and instead of trying to be super accurate, it’s better to take advantage of what the human eye can’t see or perceive well enough.

Also, using MRT there is reduced leaking of lighting information (since we don’t use probes to store lighting data).

What is really important to me is that, using radiance transfer, we can in theory get a large number of lights dynamically contributing to GI without much performance impact. And using more novel techniques, this can even be useful in tight spaces, like a pitch-black dungeon lit only by the player’s torch.
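The reason the light count stays nearly free is the classic radiance transfer split: all the expensive visibility/bounce work is baked into a precomputed transfer term, and each dynamic light only costs an SH projection plus a dot product. A stripped-down sketch of that split (single channel, names invented; real MRT/DRT stores transfer over the simplified geometry rather than one vector per point):

```cpp
// Precomputed offline: how incoming SH-encoded lighting at this receiver
// turns into outgoing radiance, with visibility and bounces baked in.
struct Receiver
{
    float transfer[9]; // order-3 SH transfer vector
};

// Runtime: project each dynamic light into SH (cheap), sum the SH vectors,
// then shade every receiver with one dot product. More lights only add to
// the projection step, not to the per-receiver cost.
float ShadeReceiver(const Receiver& r, const float sceneLightSH[9])
{
    float radiance = 0.0f;
    for (int i = 0; i < 9; ++i)
        radiance += r.transfer[i] * sceneLightSH[i];
    return radiance;
}
```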

There is also the possibility of getting more accurate indirect shadowing (like in the old Enlighten demo).
That demo, by the way, still makes my jaw drop: https://youtube.com/watch?v=eFHxluXS3KM
In the first second of the video, look at the shadows near the columns on the floor. And that was 2010!

Also, radiance transfer can be computed much faster than lightmaps using the GPU, and moreover it takes far less space than lightmaps, making it perfectly viable for big open worlds.

I hope someone could tackle something like this and integrate it into the engine. I’m not smart enough to even figure out where to start with it ;(

There is also this:
http://lightsprint.com/demo.html

Quite an old project, but it looks cool!

Great post!

If you have the ability and the time, you should definitely give it a try. I’m sure the whole UE4 community would benefit from and appreciate your effort. I love reading all about the different GI approaches and their pros & cons, but I, like others, can only understand the concepts and wouldn’t even know where to begin trying to get this working in UE4.

I appreciate that Epic has lots of other work to be doing, but if people like you could provide them with a working example of a new feature, they (1) would take notice, (2) could build upon your work and would have less to do, or (3) could see that this approach to GI doesn’t work and try another.

At this moment our game is making great progress with implementing an even better hair solution than HairWorks! I’m going to share our progress within the next 1-2 months, since we have other priorities at this moment (animations), but yeah… share your tech with other UE4 developers; I think that’s going to be a huge time saver for them! :slight_smile:

I would love for someone (you! :D) to try to tackle Delta Radiance Transfer, a bit modified. The original technique uses simple geometry, which is automatically wrapped to meshes during the precomputation step. Those simple meshes have radiance transfer computed only once, which makes for very fast precomputation but lower quality for meshes that don’t fit well within the primitive geometry.

I think this step could be omitted in favor of user-generated cages. Even collision geometry or LOD levels could be used as the primitive geometry to store and calculate radiance transfer.

This of course would dramatically increase precomputation time, since there is really no reusable data, as every static mesh would have a unique geometry cage. But I suppose it should still be more efficient than lightmapping.

What is important about this technique is:

  1. Many bounces, without much of a performance hit.
  2. Support for large-scale ambient occlusion can be added, as well as medium-scale.
  3. It can be extended to support subsurface scattering.

As I see it now, the hardest part is the basic integration into the engine. I have no idea where to even begin with it, even with the most basic version. Second is the precomputation step (again, integration into the engine).
The math doesn’t seem terribly complex. Probably the hardest part about the math is the spherical harmonics used with probes for dynamic objects.

edit:
If we got this working, we wouldn’t even need a special shading model for tree branches. GI should take care of light bouncing from the ground and the inter-bounces within the tree. The DRT could even be expanded to support proper light transmission through thin objects (albeit only for static objects).

I think it has much more promise than screen-space GI, which forces you to render the frame several times just to get semi-stable results ;).

I plan to upload the code that I have to either my account or some other system; I’m talking with AMD about where it should go. It will then be available for everyone to obtain and hopefully improve. Would love to see the progress you guys have made; looking forward to that.

I am currently reading through different GI techniques. I can’t guarantee anything at this time, but I do believe I have the ability required to integrate one into UE4; I have a pretty firm grasp of the rendering architecture now. I’m just not sure when I will be able to take it on. Maybe once I offload the hair code to the general public, I can step away from that for a while and focus on some GI.

I will add it to my list to take a look. I’m not a big fan of the precomputation step, but I suppose if it speeds up the realtime aspect, it might be okay.

Well yeah, precomputation is a downside, but you have to offload some calculations somewhere.

At least after precomputation you can (in theory) add an arbitrary number of dynamic lights, and they can all contribute to GI with an arbitrary number of bounces without any big impact on performance.

Whereas with a fully dynamic solution you will kill the user’s PC, and with lightmaps you would wait ages for lighting to rebuild.

But besides that, looking at the Enlighten demos, you wouldn’t need to add a large number of lights in the first place, only where it (realistically) makes sense.
DRT is very similar to what Geomerics uses.

Eh, there’s a reason people don’t like it. Art production just gets more headaches, which is not something anyone needs, and diffuse-only GI is rather useless for anything but specific art direction. Offline CG had reflections before it ever had proper diffuse GI, and it still holds up (if done well) because reflections are incredibly important for distinguishing between materials.

The reason many are so interested in voxel cone tracing is that it requires absolutely no art production (yay!) and can scale between diffuse low-frequency GI and high-frequency GI. Despite Epic’s continued scepticism, I’d say voxel cone tracing will end up in triple-A games this generation eventually; they gave up far too easily without looking at brick maps, tiled resources, null pointers for empty voxels (a feature in D3D11.2), etc. It can work, and it scales very well with the number of dynamic lights.
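To give a feel for why it scales: the inner loop of a single cone trace is tiny, and all the real cost (and the memory argument above) comes from building and storing the voxel volume it samples. A rough sketch of one cone (`SampleVoxels` is a toy stand-in for a mipmapped voxel volume fetch, not any real API):

```cpp
#include <cmath>

// Pre-filtered radiance + occlusion at a point; 'mip' picks the voxel
// footprint. Toy stand-in: a dim constant "wall" beyond ~10 units, so the
// cone has something to hit.
struct VoxelSample { float rgb[3]; float alpha; };

VoxelSample SampleVoxels(const float p[3], float /*mip*/)
{
    const float d = std::sqrt(p[0] * p[0] + p[1] * p[1] + p[2] * p[2]);
    VoxelSample s = { { 0.2f, 0.25f, 0.3f }, (d > 10.0f) ? 0.6f : 0.0f };
    return s;
}

// One cone: step along it, widening the sampled footprint with distance
// (higher mips), compositing front to back so nearby geometry occludes
// bounce light coming from farther away.
void TraceCone(const float origin[3], const float dir[3], float halfAngle,
               float maxDist, float outRGB[3])
{
    outRGB[0] = outRGB[1] = outRGB[2] = 0.0f;
    float alpha = 0.0f;
    float t = 0.5f; // offset to escape the emitting surface's own voxel
    while (t < maxDist && alpha < 0.95f)
    {
        const float radius = t * std::tan(halfAngle);
        const float mip = std::log2(std::fmax(1.0f, 2.0f * radius));
        const float p[3] = { origin[0] + dir[0] * t,
                             origin[1] + dir[1] * t,
                             origin[2] + dir[2] * t };
        const VoxelSample s = SampleVoxels(p, mip);
        const float w = (1.0f - alpha) * s.alpha; // front-to-back blend
        outRGB[0] += w * s.rgb[0];
        outRGB[1] += w * s.rgb[1];
        outRGB[2] += w * s.rgb[2];
        alpha += w;
        t += std::fmax(radius, 0.25f); // step grows with the cone footprint
    }
}
```

A handful of these cones per pixel (or per probe) gives you diffuse GI; a single tight cone along the reflection vector gives you glossy specular, which is where the diffuse-to-high-frequency scaling comes from.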

There is no added cost to art production using DRT with premade geometry cages. You can generate them automatically using LOD generation tools like Simplygon, you can reuse LODs you already created, or you can reuse collision geometry you already created.
All you have to do next is precompute once, and then you can modify lighting as you see fit.

You are welcome to pursue SVOGI and I wish you good luck, but honestly I think it is a dead end, unless you can show me a solution that will work on a 5 km x 5 km landscape full of objects.
All attempts that have been shown to date are confined to either very small spaces or pretty much empty maps.

And yes, all radiance transfer techniques do not work well for glossy reflections, but they solve a big subset of issues quite well, like light transmission, indirect shadowing, and diffuse reflections.

I’ve already got ideas for large terrain, and a demo I’m going to port to UE4’s Elemental demo (yay Pixar and their ambient/faked bounce GI!). But I’ve talked to artists using DRT-like solutions, and they already don’t like geometry cages, and you have to precompute every time you change the environment, which slows down level design immensely.

I do wish you luck as well; any realtime GI is good GI, as long as it has acceptable image quality (so many solutions in the past haven’t). I’m just always suspicious of a technique that seems good on paper but has been around for a while without a single game (that I know of) shipping it.