what happened to volumetric lighting / fog

This thread is talking about volumetric lighting effects, not dynamic lighting / global illumination.

Well, anyway, if I remember correctly there was another post too: Volumetric Light Shafts requires some attention ASAP. - Feedback for Unreal Engine team - Unreal Engine Forums :stuck_out_tongue:

Sorry, my bad. I always considered these two to be very closely related.
Can you point me to the “real thread” for real-time GI? I want to show my interest there.

Thanks.

I know you’re replying to someone talking about lightmaps, which isn’t the topic, but what you said about this thread being “about volumetric lighting effects, not dynamic lighting / global illumination” is kind of misleading on its own. I don’t think you realize it, but true volumetric lighting shares a lot of code with what you’d find in a real-time GI solution… It’s essentially GI passing through translucent media that can absorb/scatter energy from the rays… For volumetric lighting/fog/etc. you need to voxelize the scene, and for real-time GI you’d have to do the same. You then have to ray march through the voxels, and in each voxel you compute things like extinction, scattering, color absorption, attenuation, etc.

So yes, it’s all tied together, unless you’re talking about faking volumetric lighting effects; in which case, you can readily do that with the material editor + cones/sheets/etc., stenciling, screen buffers, post-processing, and voodoo magic.
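To make the voxel point concrete, here’s a minimal sketch of that kind of march, assuming a pre-voxelized density grid and a single sun light. All the type and function names (`VoxelGrid`, `SampleDensity`, `SampleShadow`) are made up for illustration; this isn’t anything from UE4:

```cpp
// Minimal ray-march sketch through a pre-voxelized density grid (illustrative only).
#include <cmath>

struct Vec3 { float x, y, z; };

struct VoxelGrid {
    // Fog/particle density at a world position (hypothetical helper).
    float SampleDensity(const Vec3& p) const;
    // 0..1 sun visibility at a world position, e.g. from a shadow map (hypothetical helper).
    float SampleShadow(const Vec3& p) const;
};

// Marches from the camera along a view ray, accumulating in-scattered light and
// attenuating by extinction (absorption + out-scattering) per step.
float MarchInScattering(const VoxelGrid& grid, Vec3 origin, Vec3 dir,
                        float maxDist, float stepSize,
                        float sigmaExtinction, float sigmaScattering)
{
    float transmittance = 1.0f;   // how much light still reaches the camera
    float inScattered   = 0.0f;   // accumulated scattered light

    for (float t = 0.0f; t < maxDist; t += stepSize) {
        Vec3 p { origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t };

        float density = grid.SampleDensity(p);
        float shadow  = grid.SampleShadow(p);

        // Light scattered toward the camera from this step (isotropic phase assumed).
        inScattered  += transmittance * shadow * sigmaScattering * density * stepSize;

        // Beer-Lambert extinction over this step.
        transmittance *= std::exp(-sigmaExtinction * density * stepSize);

        if (transmittance < 0.01f) break;  // ray has effectively run out of energy
    }
    return inScattered;
}
```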

True, but I think it’s good to stay on topic for this specific request. We could open another thread for a real-time GI solution request though.

Yeah right? I guess it will reach 2k before the end of the week, which is crazy.

I heard from Nvidia that their Volumetric Lighting would be integrated into UE4 (in a similar fashion to their current VXGI branch) and that they should launch it at GDC, if everything goes okay. So let’s hope that by the end of February we have some great news…

There are a lot of different ways to achieve volumetric lighting effects, but the key takeaway is that you must generate some kind of volume. One example of a technique Epic could employ would be to extrude directly from their existing shadow maps - there is no need to voxelise the scene, and many volumetric techniques don’t voxelise, as it’s simply too expensive unless you’re doing it already (for example, for dynamic GI).
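As a very rough illustration of what that extrusion means, something along these lines; the `ShadowMap` type and `LightClipToWorld` helper are placeholders, not any engine API:

```cpp
// Rough sketch: building boundary geometry for the lit volume by displacing a vertex
// grid to the depths stored in a shadow map (illustrative; helpers are assumptions).
#include <vector>

struct Vec3 { float x, y, z; };

struct ShadowMap {
    int width, height;
    // Depth in [0,1] light clip space at texel (x, y) (hypothetical accessor).
    float Depth(int x, int y) const;
};

// Hypothetical: transforms light clip-space coords (u, v, depth) to world space.
Vec3 LightClipToWorld(float u, float v, float depth);

// One vertex per texel (in practice a coarser grid, displaced in a vertex shader).
// Each vertex sits on the first occluder seen from the light, so the triangulated
// grid bounds the lit volume; marching only inside it is far cheaper than marching
// the whole view frustum.
std::vector<Vec3> BuildLitVolumeBoundary(const ShadowMap& sm)
{
    std::vector<Vec3> verts;
    verts.reserve(static_cast<size_t>(sm.width) * sm.height);

    for (int y = 0; y < sm.height; ++y) {
        for (int x = 0; x < sm.width; ++x) {
            float u = (x + 0.5f) / sm.width;
            float v = (y + 0.5f) / sm.height;
            float d = sm.Depth(x, y);            // distance to the first occluder
            verts.push_back(LightClipToWorld(u, v, d));
        }
    }
    return verts;  // triangulate as a regular grid and render it as the volume bounds
}
```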

We should pull galaxyman over to this thread… maybe he can tell us more. Like I wrote before, he has experience with the Nvidia branches.

This thread is about volumetric lighting/fog, but I don’t mind mixing it up with dynamic GI :cool::o, since that is even more wanted/important than all the volumetric effects.
We need all that good shi### in UE4 :cool::wink:

Yeah it is… especially when you consider that this thread has only been live for a few days.

Epic please!!! wanted features!!!

What you just described is not true volumetric lighting then… It’s pseudo-volumetric post-process lighting. The only way you’re going to create a true “volume” is to voxelize the scene. Like I’ve said before, if you want to fake it in-game, there are dozens of cheap ways to do so, none of which need Epic to implement new features. The problem with almost all of them is that they can break under certain circumstances and aren’t that accurate. Deferred rendering is nice; it just doesn’t play well with effects that need influence from outside of screen space. There’s a reason why an insane amount of research has been put into this sector of real-time rendering…

Extruding from the shadow maps generates geometry; you might render the final output as a post-process, but it is not strictly a screen-space effect (ergo, it can be influenced from outside of screen space). It’s a fast approximation that works reasonably well. I believe this is how the Nvidia GameWorks implementation works.

Yeah, we talked about that kind of solution earlier in the thread. It’s still not that accurate of a solution, though. It’s just combining that with a depth buffer and using some falloff formulas to do an overlay/multiply in post-process, kind of similar to how you would do the same kind of task in Photoshop. And while it can work off-screen, it will still only have influence on fog if it (as in the shadow geometry) is VISIBLE to the screen space. The second the geometry goes off the screen, it won’t have any influence on the atmospherics anymore. This could lead to a lot of detail pop-ins within things like atmospheric fog as you turn. One second the puff of fog has a certain color/look to it, turn a few degrees, and POOF! It pops in with the color changes. Hence why they are pushing for forward+ with VR engines.

EDIT: I forgot to mention that you can still get off-screen information; it would just require a forward pass on top of a deferred pass, which isn’t super efficient.
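To illustrate the kind of depth-buffer falloff composite described above (purely a sketch; the buffers and parameters here are hypothetical, not how any particular engine does it):

```cpp
// Tiny sketch of a cheap post-process fog composite: scene depth drives an exponential
// falloff, modulated by a screen-space light-shaft mask, then blended over the scene.
#include <algorithm>
#include <cmath>

struct Color { float r, g, b; };

Color CompositeFog(Color sceneColor, float sceneDepth, float shaftMask,
                   Color fogColor, float fogDensity)
{
    // Exponential falloff with distance (a common cheap approximation).
    float fogAmount = 1.0f - std::exp(-fogDensity * sceneDepth);

    // Anything feeding the shaft mask must be visible on screen,
    // which is exactly the limitation discussed above.
    fogAmount = std::clamp(fogAmount * shaftMask, 0.0f, 1.0f);

    return { sceneColor.r + (fogColor.r - sceneColor.r) * fogAmount,
             sceneColor.g + (fogColor.g - sceneColor.g) * fogAmount,
             sceneColor.b + (fogColor.b - sceneColor.b) * fogAmount };
}
```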

Could you point to even one general volumetric lighting system used in a real-time application that voxelizes the whole scene? There are multiple battle-tested algorithms that don’t do that.

2014/08/bwronski_volumetric_fog_siggraph2014.pdf
https://software.intel.com/en-us/articles/ivb-atmospheric-light-scattering

I have also coded a volumetric system that uses 3D noise, ray marching, a sun shadow map, and a terrain radiance map. No voxelisation needed.
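To give a rough idea, a heavily simplified sketch of that kind of loop might look like this (not my actual code; the helper names are made up):

```cpp
// Simplified sketch: ray march that samples procedural 3D noise for density and a sun
// shadow map for visibility; no scene voxelization involved. Helpers are stand-ins.
#include <cmath>

struct Vec3 { float x, y, z; };

float Noise3D(const Vec3& p);          // procedural fog density, 0..1
float SunShadow(const Vec3& p);        // sun visibility from the shadow map, 0..1
float TerrainRadiance(const Vec3& p);  // bounce-light approximation from the terrain

float FogAlongRay(Vec3 origin, Vec3 dir, float maxDist, float stepSize, float density)
{
    float transmittance = 1.0f;
    float light = 0.0f;

    for (float t = 0.0f; t < maxDist; t += stepSize) {
        Vec3 p { origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t };
        float d = Noise3D(p) * density;

        // Sun and terrain bounce contributions, attenuated by what is in front of them.
        light += transmittance * d * stepSize * (SunShadow(p) + TerrainRadiance(p));
        transmittance *= std::exp(-d * stepSize);
    }
    return light;
}
```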

Go to slides 11, 35, 39, 40, 52, 53 and 55 in your first link

Go to page 16 and read the brown box, especially the part talking about epipolar sampling (which plays into the third link), and pages 24, 25, and 42 in your second link.

The third link is a different approach, but you “Render the scene from the camera and from the light source.” So it sounds like deferred+forward rendering and you’re still ray marching. It gives decent results, but you can’t get results like what I linked on the first page (see page 16 of your second link)

Any time there is “ray marching” going on, it’s still similar to voxelizing, because the ray travels X distance, pauses, reads a sample of the surrounding area, performs a bunch of math and/or scatters rays out, travels X distance again, pauses again, reads that new data, performs more math with it and/or scatters rays out, and so on, until either the ray runs out of “energy” or it hits a solid surface that kills off the ray. Voxelizing is just a means of “LODing” the system down to make things more efficient. It helps a lot for things further off in the distance that don’t need as much color/shadow accuracy. So up close, the system might use 50 cm cubes; further away, it might use 100 cm cubes; and really far away, it might use 200 cm cubes or ignore the rays altogether.
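As a trivial sketch of that distance-based LODing idea (the 50/100/200 cm breakpoints and distances are just the example figures above, nothing more):

```cpp
// Illustrative only: coarser sample cells (and larger march steps) the further the
// ray gets from the camera. Distances are in centimeters, UE-style units assumed.
float SampleCellSize(float distanceFromCamera)
{
    if (distanceFromCamera < 1000.0f)  return 50.0f;    // near: 50 cm cells
    if (distanceFromCamera < 5000.0f)  return 100.0f;   // mid:  100 cm cells
    if (distanceFromCamera < 20000.0f) return 200.0f;   // far:  200 cm cells
    return -1.0f;  // beyond this, skip the ray entirely
}
```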

So in order to pull off good volumetric lighting, you’re going to need fog/clouds/particles/etc that can react realistically to the lighting, which means you’re going to have to raymarch and/or voxelize the scene. Basically, it’s all a part of the same package.

You are mixing terms here. These volumetric techniques do use voxels, but only for the participating media. They don’t have to go over all the scene geometry and voxelize it; they just approximate the density of non-solid objects. That can be some combination of height fog, fog volumes, noise, and voxelisation of particles. VXGI does actual voxelisation, and that is quite expensive, but none of these volumetric techniques need that.
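A small sketch of what “voxelizing only the participating media” can mean in practice: the per-cell density is an analytic combination of height fog, artist-placed fog volumes, and noise, with no scene geometry touched. The types and parameters here are made up for illustration:

```cpp
// Illustrative media density function; would be evaluated per cell of a small media
// volume each frame, or directly per ray-march sample.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

struct FogVolume { Vec3 boxMin, boxMax; float density; };

float MediaDensity(const Vec3& p, const std::vector<FogVolume>& volumes,
                   float heightFogDensity, float heightFalloff, float noise)
{
    // Exponential height fog contribution.
    float density = heightFogDensity * std::exp(-heightFalloff * p.z);

    // Artist-placed local fog volumes.
    for (const FogVolume& v : volumes) {
        if (p.x >= v.boxMin.x && p.x <= v.boxMax.x &&
            p.y >= v.boxMin.y && p.y <= v.boxMax.y &&
            p.z >= v.boxMin.z && p.z <= v.boxMax.z)
            density += v.density;
    }

    // Noise breakup (would come from a 3D texture or a procedural function).
    return density * noise;
}
```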

It would probably be done at the world level of things, not the individual component level. Picture a base 3D grid for the scene that doesn’t move. These cells would essentially be your voxel cells. You can move objects around in it and they will occupy certain cells within that grid. If the grid sees that geometry, from something like a barrel, is occupying the majority of a voxel cell, it could write it off as a “geo” cell, where it would use different/cheaper math on rays that travel into it, because it’s not worried about it partially absorbing the light and so on. In most cases the ray would hit the geometry and stop, or maybe scatter. For things like a smoke particle emitter, you’d probably want to voxelize it at the individual level, though (for quality purposes). The point is that you wouldn’t have to go through and manually voxelize every single individual object in a scene.
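A purely illustrative sketch of that fixed world-space grid with “geo” cells (nothing here is real engine code, just the data structure being described):

```cpp
// Fixed world-space grid: cells are classified as geometry (ray stops / cheap math)
// or media (full scatter/absorb math). Entirely illustrative.
#include <vector>

enum class CellType { Empty, Media, Geometry };

struct WorldGrid {
    int nx, ny, nz;
    float cellSize;                 // e.g. 100 cm cells, fixed in world space
    std::vector<CellType> cells;    // nx * ny * nz entries

    CellType& At(int x, int y, int z) {
        return cells[(z * ny + y) * nx + x];
    }
};

// Called when an object (say, a barrel) moves: cells it mostly fills become Geometry,
// so rays entering them can terminate or use cheaper math instead of scatter/absorb.
void MarkOccupiedCells(WorldGrid& grid, int x0, int y0, int z0, int x1, int y1, int z1)
{
    for (int z = z0; z <= z1; ++z)
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x)
                grid.At(x, y, z) = CellType::Geometry;
}
```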

Again, there are a lot of similarities between real-time volumetric lighting (as in a realistic version, not a cheap post-process approximation) and real-time GI. They both have to ray march, sample data, handle all of the light math like scattering/absorption/etc., and so on. The big difference is that when the ray hits a wall, the GI engine keeps going with the calculation for another N bounces. There’s a big reason why volumetric renderings in engines like Mental Ray or V-Ray take so long to render out… It’s an insane amount of computation. Yes, I’m comparing cinematic production renderers to a real-time game renderer, because they are still attempting to perform the same types of math, except one is using a faaaaaaaar higher level of precision than the other. People want to complain that things like VXGI or volumetric engines are so expensive, yet they demand to have real-time versions of them, without realizing the very mathematically complicated roots of what they’re trying to solve.

Volumetric systems use camera- (frustum-) oriented voxels, aka froxels. These have to be updated every frame because the camera moves. This is the reason why the scene can’t be voxelized just once, but every frame. I linked three actual volumetric rendering techniques that are used in games, and not a single one actually voxelizes the scene.
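For reference, a froxel lookup is basically just a screen tile plus a depth slice, something like this (a rough sketch; the exponential slice distribution is just one common choice, not a quote from the linked papers):

```cpp
// Froxel addressing sketch: the grid is aligned to the camera frustum (screen tile X/Y
// plus a depth slice), so it has to be repopulated every frame as the camera moves.
#include <cmath>

struct FroxelCoord { int x, y, slice; };

FroxelCoord ToFroxel(float screenU, float screenV, float viewDepth,
                     int gridX, int gridY, int gridZ,
                     float nearZ, float farZ)
{
    // Exponential depth slicing keeps near froxels small and far froxels large.
    float t = std::log(viewDepth / nearZ) / std::log(farZ / nearZ);
    int slice = static_cast<int>(t * gridZ);
    if (slice < 0) slice = 0;
    if (slice >= gridZ) slice = gridZ - 1;

    return { static_cast<int>(screenU * gridX),
             static_cast<int>(screenV * gridY),
             slice };
}
```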

VXGI can’t be optimized by anyone but Nvidia, as the source is locked down. I have faith that Epic’s engineers will invent the most optimal way of doing volumetric lighting that is good for performance.

Froxels are just the way of handling it in a deferred rendering engine, due to only having screen space to work with. Hence another big reason why there is a push for forward+ rendering engines. And no, you linked two that use voxelization/froxelization, where I pointed out every single page that referenced it, and one that used a very loose approximation that cannot handle varying densities of volumetrics or multiple light sources, aka a cheap hack with “okay” results if you’re going by 10-year-old standards. Go through and actually read what I said and linked…

And AE_3DFX, yes, Epic will solve it eventually, but they are first going to finish their forward+ engine and then begin working on better volumetrics and GI. The key point being that it has to go forward+ first. Only having a handful of screen-space buffers is just too limiting. There are only so many render hacks and post-process tricks you can use before you run into a brick wall, as soon as you need information that’s off the screen or as soon as you need transparent/translucent materials (which have to be rendered in a separate forward pass). Deferred rendering was amazing for a time, while hardware caught up to the needs of a better forward rendering engine, but it’s showing its age now and is limiting the ability to implement these kinds of effects.

Froxels don’t have anything to do with deferred or forward. They are just frustum-oriented voxels. The links I provided do not voxelize the scene, only participating media stuff like particles and fog.

Frustum implies screen space… And yes, they still voxelize the scene, but only store the relevant information that will be needed, which, again, I’ve already stated multiple times… Even if they are just ray marching, it’s still very similar to voxelizing a scene, except it doesn’t have the memory overhead of continuously storing the information for all the voxels. It still marches X distance, reads the information, does intense math/other operations, marches forward X more units, reads the information, etc. Sounds pretty similar to marching through voxels, because they are one and the same, except one has simplified sampling…

The whole point of voxelization is to simplify things for vertex lighting/shading. It’s like using the eyedropper in Photoshop: instead of having it set to 1x1 pixels, you can set it to something like 10x10 for an average color. Using the same Photoshop example, let’s say you have a 1000x1000 image that you need to run a heavy “per pixel” filter on: running it per pixel would mean it has to run 1,000,000 times. If you set it to take 10x10 pixel averages and then run the filter, it would only have to run 10,000 times, or 1% of the cycles it had to run before. This is very analogous to why games are pushing toward voxelization for VL/GI.
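Spelling out that arithmetic, just to be explicit about where the 1% comes from:

```cpp
// The Photoshop-analogy math: averaging 10x10 blocks before running an expensive
// per-element filter cuts the invocations from 1,000,000 to 10,000 -- 1% of the work.
#include <cstdio>

int main() {
    const long long width = 1000, height = 1000, block = 10;
    long long perPixelRuns = width * height;                        // 1,000,000
    long long perBlockRuns = (width / block) * (height / block);    // 10,000
    std::printf("%lld vs %lld (%.0f%% of the work)\n",
                perPixelRuns, perBlockRuns, 100.0 * perBlockRuns / perPixelRuns);
}
```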

Anyways, I’m done spamming this thread with all of the arguing about it. The point is, you will likely see more VL/GI stuff worked on and implemented after they finish up with the forward+ engine.