Global Illumination alternatives

This comes from this thread: [Request] decouple ambient cubemap feature from postprocess into separate entity - Feedback for Unreal Engine team - Unreal Engine Forums

Yesterday I posted a step-by-step description of what I am doing with diffusefromcaptures to make it work :wink:

Thanks, appreciate it buddy :smiley:

If the distance fields are stored in a volume texture (which I believe is 512x512x512), shouldn’t we be able to use that to store color information, thus allowing for voxel cone tracing? Looking at the code, it looks like they already use cone tracing to generate the soft shadows.

You guys might be interested in this commit in master: :wink:

https://github.com/EpicGames/UnrealEngine/commit/7709caa98566d0271f00ab03f1fee97ec92661e4

0o. Hopefully something usable will make it into 4.7, at least behind a cvar,
or into some promoted build.

Sounds good. He had mentioned in one of the streams that he figured it could be used for that; looks like it'll be a 4.7 thing.

That is very cool, but I have no idea what that means for the average user…

Dynamic global illumination for meshes that support distance fields.

I wonder what limitations it will have (aside from distance fields).

He said on the stream that you’d only get a single bounce from it, but that’s fine

Hmmm…I am actually not that happy about this :S
Distance Field features work nicely in theory, but we tried them in our game and they are so extremely limited and performance-heavy that it feels quite detached from customers to continue adding features based on this technology. I don't want to sound rude by any means, but this is like: yeah, I know that my car is incompatible with wheels and will never really drive, but I am still adding two more gears to shift and a stronger engine.

I think if Epic can't get rid of the limitations and improve performance drastically, this will probably be a "Totgeburt" (stillbirth), at least from a AAA developer perspective. It might work for portfolio pieces and other small stuff, or maybe traditional corridor shooters. But yeah… not really for other things, as there really are tons of issues.

That was like a month ago. In engine development that's the equivalent of the Stone Age :D.

@up yes. But these dynamic lighting features like DFAO and similar are really high-end features that require only minimal precomputation. In the case of the DF features you pay the cost of precomputing a distance field per mesh, but everything else is dynamic.
All things considered, it's still cheaper than voxel-based techniques where everything is dynamic (including the voxelization).

To compare them:

  1. Distance fields work on very thin geometry, since there is no leaking.
  2. Distance fields are cheaper, since the heavy step of creating an alternate representation of the geometry is precomputed (see the sphere tracing sketch after this list).
  3. They work on any geometry that has a distance field.
  4. Voxels are much heavier, since they need to be computed at runtime (in theory you can precompute them, but it takes lots of space to store them).
  5. Voxels usually cause light leaking, since, well, they can't be too small.
  6. Voxels work in fully interactive environments (including deformation, destruction, etc.), something distance fields can't inherently support.
  7. Both are high-end features, since computing indirect lighting at runtime is very heavy on performance.
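
To make point 2 concrete, here is a minimal sphere tracing sketch in plain C++ (a toy analytic SDF stands in for the precomputed volume texture; all names are made up, nothing here is engine code). Each distance sample bounds how far the ray can safely step, so empty space costs almost nothing:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Toy SDF standing in for a precomputed per-mesh distance field volume:
// a unit sphere at the origin.
static float sdfSample(Vec3 p) {
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
}

// Sphere tracing: every sample tells us the largest step we can take
// without passing through a surface, so the ray skips empty space in a
// handful of samples instead of marching at a fixed step size.
static bool sphereTrace(Vec3 origin, Vec3 dir, float maxDist, float* hitT) {
    float t = 0.0f;
    for (int i = 0; i < 64 && t < maxDist; ++i) {
        float d = sdfSample(add(origin, mul(dir, t)));
        if (d < 0.001f) { *hitT = t; return true; }  // close enough: hit
        t += d;  // safe step: no surface is closer than d
    }
    return false;  // ran out of steps or distance
}

int main() {
    float t;
    if (sphereTrace({0.0f, 0.0f, -5.0f}, {0.0f, 0.0f, 1.0f}, 100.0f, &t))
        std::printf("hit at t = %.3f\n", t);  // expect ~4.0
    return 0;
}
```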

DFAO is mostly heavy because of foliage. But in that commit an option was added to disable it for foliage. You can also consider disabling it for small props and terrain meshes, which do not contribute enough, and there are usually thousands of them.

I personally did not have any big performance problems with DFAO. I had many more issues with LPV.
It was leaking terribly (on a seemingly open city map!), and it tanked from 60 to around 35-40 fps, thanks to rendering the RSMs.

If this irradiance DF turns out as good as DFAO, I will be perfectly happy. And if in the future it supports a second diffuse bounce, I will send a handmade pie as thanks (;.

Either way, I agree there are still improvements to be made on the performance front. UE4 still performs worse than CE3 out of the box with fully dynamic lighting, especially on open maps with lots of meshes.
But I don't think much can be done using DX11. We will have to wait for DX12 to lift some of the limitations (like slow resource binding through the CPU).

Distance fields are supposed to be pretty fast; at the very least this should give better results than LPV and perform better as well. The only issue is that it doesn't work with deforming objects, though he was thinking of ways to help with that.

I really don't want to come across as overly critical or as a hater (actually I have some deep respect for the person who came up with this technique), but since we evaluated the features in a real production environment without any kind of success, I just think they are not well suited to be the dynamic replacement for lightmass (HA… that's it! I think the most important thing is to realize: nope, we don't want some fancy, partly working thing here, we want a fully dynamic lightmass replacement^^). If you guys want, I can share more in-depth thoughts on this, but I don't want to make this unnecessarily long^^

Honestly, what I really don't get is the following: why not implement the solution that the Snowdrop Engine uses (The Division)? I don't see any kind of reason against this solution, and to me it sounds like the perfect fit for this generation.
I know they haven't yet shared how they do it. I have a friend who works there and he was willing to tell me a little bit, but really not that much, since they have super strong NDAs. BUT, when you know how Ubisoft does their lighting (the AC Black Flag slides etc.) AND have a graphics programmer who understands this technique well enough to explain it to you AND you listen to what your friend at Massive has to say AND (this is the last one^^) you check out all the engine videos that were released for The Division, you can actually get quite a good understanding of what they are probably using :smiley:

So it's basically the same technique as in AC: Unity, but with some additional advancements. General diffuse GI is captured in spherical harmonics. They capture six different lighting states and blend between them (for TOD). As in Unity, they also capture indirect specular. The SHs capture the sun and any placed lights in the scene as well.

With The Division they seem to have introduced updating the SHs within a certain radius around the player, so you can have fully dynamic GI (you can see this in the video where the light hanging from the ceiling shakes around: when it lights up the red wall to the right, you get dynamic red bounce light onto the white ceiling). So I assume the TOD is half dynamic, but around the player the probes can be updated dynamically to allow some nice color-bleeding effects.
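
To make the blending part concrete, here is a minimal sketch of how per-probe SH interpolation between captured lighting states could look (hypothetical names and layout; this is my guess at the shape of it, not actual Snowdrop/Ubisoft code):

```cpp
#include <array>
#include <cmath>

// 2nd-order SH: 9 coefficients per color channel. One "lighting state"
// per captured time of day (six states, as described above).
using SH9 = std::array<float, 9>;
struct ProbeStates { std::array<SH9, 6> timeOfDay; };

// Blending between captured states is just a per-coefficient lerp, which
// is why TOD is nearly free at runtime once the states exist.
SH9 BlendTimeOfDay(const ProbeStates& probe, float dayFraction /* 0..1 */) {
    float f = dayFraction * 6.0f;        // position among the 6 keyframes
    int a = static_cast<int>(f) % 6;     // lower keyframe
    int b = (a + 1) % 6;                 // upper keyframe (wraps past midnight)
    float w = f - std::floor(f);         // blend weight
    SH9 out;
    for (int i = 0; i < 9; ++i)
        out[i] = probe.timeOfDay[a][i] * (1.0f - w) + probe.timeOfDay[b][i] * w;
    return out;
}
```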

In AC, the SHs get placed via the NavMesh (since you can walk on everything, this makes sense); however, if you don't have a NavMesh like this, just use volumes like the importance volumes and cast rays to check where they hit geometry. SHs that intersect walls get deleted or disregarded. Maybe you need super-low-poly proxies for interiors, I am not sure about that. But you could also distribute the SHs in cascades to have a super high density around the player, or use volumes with higher resolution for interiors.
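
Here is a minimal sketch of that volume-based placement idea (toy axis-aligned boxes stand in for wall geometry and for the engine's intersection queries; purely hypothetical, just to show the validation step):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
struct Box { Vec3 min, max; };  // toy stand-in for a wall / blocking mesh

static bool PointInBox(const Vec3& p, const Box& b) {
    return p.x >= b.min.x && p.x <= b.max.x &&
           p.y >= b.min.y && p.y <= b.max.y &&
           p.z >= b.min.z && p.z <= b.max.z;
}

// Fill an importance-volume-like region with a regular probe grid and drop
// every probe that sits inside geometry, per the scheme described above.
std::vector<Vec3> PlaceProbes(const Box& volume,
                              const std::vector<Box>& walls,
                              float spacing) {
    std::vector<Vec3> probes;
    for (float x = volume.min.x; x <= volume.max.x; x += spacing)
        for (float y = volume.min.y; y <= volume.max.y; y += spacing)
            for (float z = volume.min.z; z <= volume.max.z; z += spacing) {
                Vec3 p{x, y, z};
                bool buried = false;
                for (const Box& w : walls)
                    if (PointInBox(p, w)) { buried = true; break; }
                if (!buried)
                    probes.push_back(p);  // intersecting probes are disregarded
            }
    return probes;
}
```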

This is all pretty basic in terms of how I am able to describe it, but do some research on it… it's very interesting, and this looks like the most solid solution to me. Yes, you do have to precalculate the SHs, but according to the AC slides this takes just 8 minutes for the Havana map and can be optimized even further via GPU computing etc. Oh, and btw… the Black Flag GI runs on the Xbox 360 in only 1.2 ms!!!

So again… I think this sounds like the most usable option, without any weird limitations like: ohhh… but it doesn't work on foliage, oh, it doesn't work with skeletal meshes, oh, it also doesn't work with WPO, and did I mention non-uniformly scaled meshes yet? (Yes, I know they already have ideas to fix that one, but not the other things. And when they say "yeah, so use CSM for close-up", I just reply: why implement soft shadows if you cannot see them because you are using CSM instead? BTW, there is a cool technique called percentage-closer soft shadows. FC3 uses it on PC; it runs like a charm, looks good enough, and has none of these limitations.) C'mon, it's 2014 and there are some pretty neat things out there to be inspired by! :slight_smile:

I would really love to know how the Snowdrop Engine does its magic, but the most interesting thing is this: when I asked my friend if Snowdrop really looks this good, if it runs well, etc. (just all the stuff you ask because you can't believe this could really be true^^), he said: man, it's exactly like that! The engine looks like in the videos; that's how it looks in the level he is working on. Yes, it's fully dynamic! It runs at 40-60 fps constantly in the editor, with a level as crowded and detailed as seen in all the gameplay demos. He said: I don't know what kind of magic drives this thing… but it's just ******* amazing and a blast to work with.

I would love to see the lighting of Unreal 4 going in that direction, because honestly… it also looks better than lightmass in my opinion.

Argh…and there you have it…still wrote a whole book again :smiley: sorry^^

^Probably why

I think Epic would want a solution that works in more cases. A lot of the GI solutions out there only fit large open-world games that need a TOD and don't want to waste storage on huge amounts of lightmaps; in other cases, those solutions suffer.
For instance, in Assassin's Creed Unity there are issues with bleeding and indoor lighting.

I don't think distance fields are the dynamic GI solution they ultimately want, unless they can figure out how to expand them for that. It's just that they were doing some stuff with them, it happens to work for a bit of GI, and it ends up as a better solution than things like LPV.

There are reasons against it, like:

  1. Very long precomputation time. What is done in The Division is that each probe captures a cubemap (!). And then, to get a working time of day, you need to capture cubemaps at varying times and interpolate between them. It takes lots of time and works only with static geometry (moving geometry invalidates your cubemaps, and moving geometry can't contribute to them).
    The technique can be simplified by storing only precomputed spherical harmonics instead of cubemaps (in which case we need to precompute only once and get dynamic relighting; see the SH projection sketch after this list), but the other limitations still apply.
  2. It has very severe leaking. If you have a seamless indoor/outdoor environment, it's just not going to work without workarounds.
  3. It doesn't work with dynamic geometry. Dynamic geometry can only receive indirect lighting, but can't contribute to it.
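
Since point 1 mentions the SH-only simplification, here is a minimal sketch of that precompute step: projecting captured radiance into 2nd-order SH. The `radiance` callback stands in for sampling the probe's cubemap in a given direction; hypothetical code, not any engine's API.

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <functional>

using SH9 = std::array<float, 9>;

// Real SH basis functions up to l = 2, evaluated at unit direction (x, y, z).
static SH9 shBasis(float x, float y, float z) {
    return {
        0.282095f,
        0.488603f * y, 0.488603f * z, 0.488603f * x,
        1.092548f * x * y, 1.092548f * y * z,
        0.315392f * (3.0f * z * z - 1.0f),
        1.092548f * x * z,
        0.546274f * (x * x - y * y),
    };
}

// Project radiance over the sphere into 9 coefficients (one channel shown).
// Storing 9 floats per channel per probe is tiny compared to keeping whole
// cubemaps around, which is the appeal of the simplified variant.
SH9 ProjectToSH(const std::function<float(float, float, float)>& radiance,
                int samplesPerAxis = 32) {
    SH9 coeffs{};
    int n = 0;
    const float pi = 3.14159265f;
    for (int i = 0; i < samplesPerAxis; ++i)
        for (int j = 0; j < samplesPerAxis; ++j) {
            // Stratified uniform directions: cos(theta) uniform in [-1, 1],
            // phi uniform in [0, 2*pi).
            float cosT = -1.0f + 2.0f * (i + 0.5f) / samplesPerAxis;
            float sinT = std::sqrt(std::max(0.0f, 1.0f - cosT * cosT));
            float phi = 2.0f * pi * (j + 0.5f) / samplesPerAxis;
            float x = sinT * std::cos(phi), y = sinT * std::sin(phi), z = cosT;
            SH9 b = shBasis(x, y, z);
            float L = radiance(x, y, z);  // "read the cubemap here"
            for (int k = 0; k < 9; ++k) coeffs[k] += L * b[k];
            ++n;
        }
    for (int k = 0; k < 9; ++k)
        coeffs[k] *= 4.0f * pi / n;  // total solid angle / sample count
    return coeffs;
}
```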

Yes, I read the rest of your post, but the limitations I listed still apply. I'm not against precomputation, but if you have a fully dynamic game you can't precompute anything, or you are limited by what you had to precompute.

And I'm not entirely convinced about this ACU stuff. ACU doesn't have a dynamic time of day (which is a huge step backwards compared to the previous games), and the amount of precomputed data needed in this game seems to be quite ridiculous ;).

Epic is mostly developing these techniques to be used in Fortnite, which is fully dynamic, where everything can be placed/destroyed at runtime. Any kind of precomputation will not work there.

PCSS has the same issues as any other shadow mapping technique: it scales very poorly over many cascades and long distances. Distance field shadows were introduced not to get soft shadows, but to have shadows over distances like 50k units ;).

I think we should focus on techniques which allow for fully dynamic environments with fully dynamic lighting.

Hahaha… coming from this perspective I totally agree with you^^ I was mostly throwing in these ideas because not too many games require that complexity of GI solution. But I still have a gut feeling that things like cone tracing à la The Tomorrow Children will stay the exception, only for games that really require it (and those games will most probably have to suffer in some other areas to make the power available; not saying they will be bad^^). For a lot of the standard stuff, something like Enlighten or the Division/AC approach should be enough.

I know about the distance field stuff with the ray-traced shadows, and it indeed is very cool… but man… SOFT SHADOWS!! xD

Yeah… I don't know^^ I am basically just hoping to get, at some point, a scalable dynamic replacement for lightmass that maybe provides different approaches depending on what kind of game you are working on (does that sound too demanding? :D)

Cheers! :slight_smile:

EDIT: One last thing regarding "Epic is building stuff for Fortnite". I really don't want to sound angry or offensive (**** I am far away from it :smiley: ), I am just being analytical and objective (and I really love working with the engine, otherwise I wouldn't do what I do :D):
We are licensing Unreal Engine 4, not the Fortnite Engine! It is good when the development of a certain game pushes the tech forward, but it's not good if this also holds the tech back, because time and resources are spent
on developing a game-specific feature that's not the best gain for the engine as a product overall :wink:

Just to mention this perspective as well :slight_smile:

Hah, I agree that a solution for semi-dynamic levels would be cool (fully dynamic lighting, but for the most part static geometry).

Enlighten looks cool, but from what people say, setting up assets for it is a nightmare, and precomputing can take as long as lightmass or longer (!). Dunno how true that is, but…

The solution from ACU and The Division, aside from what I said earlier, is very game-specific. For example, to support time of day you need to interpolate between cubemaps, so you either support time of day by default (and then what if my day is 36h compressed to 1h instead of 24h compressed to 1h?) or you leave implementing the rendering details of the cubemap interpolation to the end user… which is less than ideal.
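
For what it's worth, the day-length half of that problem looks solvable by driving the interpolation with a normalized day fraction instead of hours; a hypothetical sketch (the deeper issue of how to expose this to end users remains):

```cpp
// Map game time to a pair of lighting keyframes using a normalized day
// fraction, so a 36h day and a 24h day behave identically. Hypothetical;
// not how Snowdrop or UE4 actually expose this.
struct TodBlend { int keyA, keyB; float weight; };

TodBlend MapGameTime(float gameSeconds, float secondsPerDay, int numKeyframes) {
    float dayFraction = gameSeconds / secondsPerDay;  // day length cancels out
    dayFraction -= static_cast<int>(dayFraction);     // wrap into [0, 1)
    float f = dayFraction * numKeyframes;
    int a = static_cast<int>(f);
    return { a, (a + 1) % numKeyframes, f - a };      // lerp weight between keys
}
```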

We could store only spherical harmonics in the probes (like it was done in Far Cry 3); then the quality will be lower, but it will work for arbitrary lighting conditions.

  1. They aren't cheaper, since you can't cone trace and must use multiple samples.
  2. Which doesn't include skinned meshes; voxels work on anything.
  3. Actually, voxel creation is incredibly cheap compared to the tracing.
  4. They can be a lot smaller than what Epic tried; sparse resources for fixed 3D textures are much cheaper in terms of memory.
  5. Yes.
  6. And yes.

I'm rather sad Epic gave up so quickly. There's easily more to do for voxel cone tracing; they just didn't get it right on their first try and then moved on to something else. While cone tracing through a 3D texture you can check the next mip level to see if it's empty, then skip ahead if it is, which can improve performance and is in fact incredibly similar to sphere tracing/distance fields to begin with.
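
Something like this, roughly (a toy C++ sketch of the skipping idea; `SampleAlpha` stands in for a `Texture3D` lookup on a max-reduced mip chain, and everything here is hypothetical):

```cpp
#include <algorithm>
#include <cmath>

// Toy stand-in for sampling a pre-mipped voxel alpha volume: a solid slab
// occupies z in [0.4, 0.6]. Coarser mips are assumed max-reduced, so a 0
// really means "this whole region is empty" (crudely mimicked by dilation).
static float SampleAlpha(float /*x*/, float /*y*/, float z, float mip) {
    float pad = 0.05f * mip;
    return (z > 0.4f - pad && z < 0.6f + pad) ? 1.0f : 0.0f;
}

// Cone trace with empty-space skipping: the cone radius picks the mip, and
// a peek one level coarser lets the ray jump over empty regions: the same
// structural trick as sphere tracing a distance field.
float TraceConeOcclusion(float ox, float oy, float oz,     // origin
                         float dx, float dy, float dz,     // unit direction
                         float coneHalfAngleTan, float maxDist, float voxelSize) {
    float occlusion = 0.0f;
    float t = voxelSize;
    while (t < maxDist && occlusion < 0.99f) {
        float radius = std::max(voxelSize, coneHalfAngleTan * t);
        float mip = std::log2(radius / voxelSize);
        float px = ox + dx * t, py = oy + dy * t, pz = oz + dz * t;
        if (SampleAlpha(px, py, pz, mip + 1.0f) == 0.0f) {
            t += radius * 2.0f;  // coarser mip is empty: skip the whole region
            continue;
        }
        occlusion += (1.0f - occlusion) * SampleAlpha(px, py, pz, mip);
        t += radius;  // ordinary cone step
    }
    return std::min(occlusion, 1.0f);
}
```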

Will voxel cone traced GI be the "one solution fits all" they wanted? Of course not; there's no way to get the triple bounce needed for minimum GI convergence on today's platforms. But it's better than this hacky distance field stuff they're putting out. Now I need to go back to doing voxel GI myself instead of hoping Epic will do it.

I can guess… volume tiled resources / volume sparse textures are coming… See you in Q1/Q2 2015 :stuck_out_tongue:

It’s weird to have people snooping your changelists =)