Just throwing it out there, I have an asset available that sort of “hacks” volumetric light shafts by stacking frustum-aligned planes with a lit-translucency shader model:
It’s not the fastest solution available, but it works.
I bought your volumetric lighting solution, just out of curiosity.
It actually looks good, but it feels and looks a lot different from “real” volumetric lighting solutions… like, for example, CryEngine V’s version of volumetric lighting.
Yep, and the performance is absolutely horrid. A solid raymarched solution can get down to about a millisecond on modern consoles with a decent number of light sources. Geometry extrusion costs more even on NVIDIA’s newest, most geometry-performant cards, and isn’t nearly as accurate/flexible. Raymarching gives you order-independent transparency as far as blending transparencies with the volumetrics is concerned, and the light scattering term can also be reused for lighting particles at virtually no cost. You can even use it for volumetric self-shadowing of particles.
That being said, since getting it performant requires temporal anti-aliasing, and a millisecond on consoles is far more on mobile platforms, it’s not the most cross-platform feature ever. Nor is it useful across all titles, of course.
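For anyone wondering what the core of such a raymarched solution looks like, here’s a minimal HLSL-style sketch. It assumes a single directional light and a uniform medium; ShadowVisibility() is a placeholder for a real shadow-map lookup, and the isotropic phase term stands in for a proper phase function:
[CODE]
// Placeholder: a real implementation samples the light's shadow map here.
float ShadowVisibility(float3 worldPos) { return 1.0; }

// Minimal single-light raymarch along the view ray, accumulating
// in-scattered light with Beer-Lambert absorption.
float3 RaymarchScattering(float3 camPos, float3 rayDir, float sceneDepth,
                          float3 lightColor, int numSteps)
{
    float stepLen = sceneDepth / numSteps;
    float density = 0.02;            // uniform medium, for simplicity
    float transmittance = 1.0;
    float3 scattered = float3(0, 0, 0);

    for (int i = 0; i < numSteps; ++i)
    {
        float3 p = camPos + rayDir * stepLen * (i + 0.5);
        float visibility = ShadowVisibility(p);

        // Absorption from the camera to this sample.
        transmittance *= exp(-density * stepLen);

        // Isotropic phase (1/4pi); swap in Henyey-Greenstein for sun shafts.
        scattered += lightColor * visibility * density * stepLen
                   * transmittance / (4.0 * 3.14159);
    }
    return scattered;
}
[/CODE]
That accumulated scattering term is also exactly what you’d reuse to light particles: they can sample the same result instead of recomputing it.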
Interesting panel about the rendering tech of INSIDE, including volumetric lighting. Which they talk about at 9m16s:
You are right about everything except geometry extrusion accuracy. It’s the most accurate shadowing system for volumetrics that there is: no undersampling artifacts. It also can’t be used with non-uniform-density participating media, though.
I meant for actual representation of participating media, not sample count
Just because a feature is not cross-platform capable doesn’t mean it should be ignored. Volumetric lighting works well enough on PC and consoles… so the solution is to wait another 3 or 4 years, or even longer, until mobile platforms are capable of rendering these features?
Sorry, that makes no sense to me.
That’s a good find! It works amazingly well for their game, but you have to keep their 2.5D style/direction in mind. They have supreme control over what you can and can’t see. I’d imagine they have pretty extensive load/unload control as well, which would allow them to keep RAM usage to a minimum. Due to the minimalistic models/textures, they can push that load down even further.
Some parts of this kind of solution might work for a traditional 3D game, but probably not all of it. A good example is when he’s going over the flashlight scene with all the dithering stuff. They do a lot of stenciling, ray simplification, ray marching, downsampling, dithering, and AA to pull the effect off. However, look at the amount of screen space it’s occupying: just eyeballing it, the majority of the effect only occupies around 10% of the screen. Granted, I’m sure there is a lot more going on across the overall screen space, but ~1 ms sounds fairly expensive for something that isn’t taking up that much real estate. Again, this gets into the game-direction aspect of things. It works great for them because they have the room to play with it. Would this work for a large, realistic first-person outdoor scene? Probably not…
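For what it’s worth, the dithering they describe usually boils down to offsetting each pixel’s raymarch start by a per-pixel noise value, so the banding you’d get from a low step count turns into high-frequency noise that TAA then smooths out. A small sketch, using the common interleaved gradient noise pattern (constants from Jimenez’s 2014 SIGGRAPH talk):
[CODE]
// Per-pixel dither used to jitter the raymarch start distance, trading
// banding (from few steps) for noise that TAA can resolve.
// Constants are the interleaved gradient noise from Jimenez 2014.
float InterleavedGradientNoise(float2 pixelCoord)
{
    const float3 magic = float3(0.06711056, 0.00583715, 52.9829189);
    return frac(magic.z * frac(dot(pixelCoord, magic.xy)));
}

// Usage inside a raymarch loop: shift each sample by a noise fraction
// of one step, e.g.
//   float jitter = InterleavedGradientNoise(svPosition.xy);
//   float3 p = camPos + rayDir * stepLen * (i + jitter);
[/CODE]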
I agree. Mobile is still meant for communication and simpler gaming, not console- or PC-quality gaming; there’s no point in holding the volumetrics feature back because of mobile.
Also, it looked to me like Gears of War 4 has volumetric lighting, and since that’s on console and it’s UE4 as well, it’s definitely something that should be done.
The usability of volumetric lighting on current-gen hardware, even in a broad-use, full-screen scenario, is unquestionable, purely by the fact that many games have been using it for years now.
Killzone: Shadow Fall is a prime example here:
https://www.guerrilla-games.com/read/taking-killzone-shadow-fall-image-quality-into-the-next-generation-1 starts at page 65.
Other examples are Lords of the Fallen:
and The Order 1886:
[QUOTE=The_Distiller;666654]
The usability of volumetric lighting on current-gen hardware, even in a broad-use, full-screen scenario, is unquestionable, purely by the fact that many games have been using it for years now.
Killzone: Shadow Fall is a prime example here:
https://www.guerrilla-games.com/read/taking-killzone-shadow-fall-image-quality-into-the-next-generation-1 starts at page 65.
Crazy what they get out of 8 samples:
https://youtube.com/watch?v=0MilN7jKK9c[/quote]
Yeah, they downsampled like crazy, but it’s still only a deferred (frustum-based) approximation. It looks pretty good in gradient fogs, even in the dithered 8-sample video, but I don’t think it can really handle varying densities that accurately. There’s nothing stopping people from doing a little googling and quickly implementing this stuff into their game.
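On the “downsampled like crazy” point: the usual trick is to compute the scattering at half or quarter resolution and composite it back with a depth-aware upsample, so the fog doesn’t bleed across silhouettes. A rough sketch of the upsample side; the texture names and layout here are assumptions, not any particular engine’s API:
[CODE]
// Nearest-depth upsample of a half-resolution scattering buffer.
// Texture/sampler names are placeholders.
Texture2D    HalfResScattering; // low-res volumetric result
Texture2D    HalfResDepth;      // depth the low-res pass was traced against
Texture2D    FullResDepth;
SamplerState PointClamp;

float3 UpsampleScattering(float2 uv, float2 halfResTexelSize)
{
    float fullDepth = FullResDepth.Sample(PointClamp, uv).r;

    float  bestDiff = 1e10;
    float3 best     = float3(0, 0, 0);

    // Pick the low-res neighbor whose depth best matches this full-res
    // pixel, which keeps fog from smearing across depth discontinuities.
    for (int y = -1; y <= 1; y += 2)
    {
        for (int x = -1; x <= 1; x += 2)
        {
            float2 tapUV = uv + float2(x, y) * 0.5 * halfResTexelSize;
            float  diff  = abs(HalfResDepth.Sample(PointClamp, tapUV).r - fullDepth);
            if (diff < bestDiff)
            {
                bestDiff = diff;
                best     = HalfResScattering.Sample(PointClamp, tapUV).rgb;
            }
        }
    }
    return best;
}
[/CODE]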
Has no one here tried NVIDIA’s solution for volumetric lighting in UE4? If I’m not wrong, they used it in Fallout 4?
IIRC there was a user who had implemented builds of NVIDIA Gameworks tools in UE4, but NVIDIA’s Volumetric Lighting was not included in the list of available GW tools:
I think it was the user Galaxyman (I don’t remember the exact name) who tried to implement the volumetric lighting branch into UE4.
I’ll try to send him a mail; maybe he’ll write back or give us some information…
The UE4 roadmap was updated today; sadly, Volumetric Lighting/Fog is now marked as backlog for 2017.
You sure it has been updated? I don’t see any new cards on it, just the old stuff in the (Jan - March) time frame.
Volumetric fog was based on BSP trees, bye bye.
Unreal4 is not a AAA engine.
According to this post, what’s on there is up to date as of today.
On the topic of volumetric lighting in particular though, looking over the GPU Gems entry on volumetric light scattering, it occurs to me that something similar to the method used there could achieve non-screen-space light shaft bloom in UE4 using a post-process material. Instead of drawing a ray from the light source’s screen position to a screen-space pixel position, you could obtain a ray direction from the directional light’s world-space forward vector and cast it out from a dictated “frustum start” location (or range of locations in screen space) to any given screen-space pixel. Then you could use the same method of checking for occluding pixels along those rays, and additively sampling the scene along them, just as it’s done in the shader example provided. So the post-process would still be illuminating screen-space pixels only, but it would be drawing the lines of illumination from world-space ray casts, without the need for the light source to be in the scene.
I could be totally wrong though, and it’d take me forever to try and sort out the actual math involved with this, but I’ll take a whack at it this evening.
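For the record, here’s a rough sketch of how that could look as shader code for a post-process pass. It’s just the GPU Gems additive-sampling idea re-aimed at the projection of a world-space light direction (its vanishing point) instead of an on-screen light position; SampleSceneColor/SampleSceneDepth are stand-ins for whatever scene-texture lookups the material exposes, and none of this is tested:
[CODE]
// Stand-ins for the post-process scene-texture lookups.
float3 SampleSceneColor(float2 uv);
float  SampleSceneDepth(float2 uv); // assumes non-reversed depth, 1.0 = far

float3 LightShafts(float2 uv, float3 lightDirWS, float4x4 worldToClip,
                   int numSamples, float density, float decay)
{
    // Project the (negated) light direction as a point at infinity to get
    // the light's screen-space vanishing point. This stays valid even
    // when the sun itself is off screen.
    float4 clipPos = mul(float4(-lightDirWS, 0.0), worldToClip);
    float2 lightUV = (clipPos.xy / clipPos.w) * float2(0.5, -0.5) + 0.5;

    // March each pixel toward that point, as in the GPU Gems shader.
    float2 deltaUV = (uv - lightUV) * (density / numSamples);
    float  weight  = 1.0;
    float3 accum   = float3(0, 0, 0);

    for (int i = 0; i < numSamples; ++i)
    {
        uv -= deltaUV;

        // Only far/sky pixels contribute; near geometry occludes the shaft.
        float occluder = SampleSceneDepth(uv) > 0.99 ? 1.0 : 0.0;
        accum += SampleSceneColor(uv) * occluder * weight;
        weight *= decay; // exponential falloff along the ray
    }
    return accum / numSamples;
}
[/CODE]
One thing I’d have to check is the occluder test: UE4 uses reversed-Z, so the depth comparison would likely need flipping there.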
Here’s a panel about NVIDIA’s solution: