Volumetric Fog - June 1 - Live from Epic HQ

WHAT

Our Sr. Programmer, Graphics joins the livestream once again, this time to talk about Volumetric Fog, a beautiful new rendering feature in 4.16. With Volumetric Fog, you can create incredible ambiance and mood in your environments. Varying densities are supported, so you can simulate clouds of dust or smoke flowing through light shafts, and any number of lights can affect the fog.

WHEN
Thursday, June 1st @ 2:00PM ET

WHERE
Twitch
Facebook
Youtube

WHO

  • Sr. Programmer, Graphics - @EpicShaders

Feel free to ask any questions on the topic in the thread below, and remember, while we try to give attention to all inquiries, it’s not always possible to answer questions as they come up. This is especially true for off-topic requests, as it’s rather likely that we don’t have the appropriate person around to answer. Thanks for understanding!

ARCHIVE:

Just some newbie questions:
Can Volumetric Fog be dynamic?
Is it a good idea to use Volumetric Fog simultaneously with dynamic GI to create dynamic god rays?

If you mean can the lights move, then the answer is yes.

Really looking forward to this session. I’ve really enjoyed playing with the volumetric fog since it first appeared on GitHub, but I’m sure there are aspects I don’t fully understand yet.

Is this kind of fog usable for VR too?

I’d like to know how we can project sun rays from every camera angle without using a card or a fog sheet.

Please also consider leaving a sample project to check out during or after the stream! Thanks!

Yes, hopefully they’ll cover this, since it’s already come up a few times as a forum question. I usually try to help by pointing to the Scattering Distribution setting, but I’m not sure that’s the only reason some people are having trouble with it.

Two questions:

  1. What are you working on now? (;
  2. Any plans to make it less physically accurate (i.e. visible rays with near-zero fog)?

Please discuss the performance cost of using Volumetric Fog in a VR scene.

A livestream to help people learn a brand new feature? Wooo! :smiley: COUNT ME IN!

Speaking of brand new features, what about a livestream on string tables? Pretty please? Obviously not string tables alone, maybe a general game localisation stream :smiley: PRETTY PLEASE! (including C++)

EDIT: iniside asking the right questions :stuck_out_tongue:

Thanks for joining. I think I covered all the questions posted here so far, let me know if you have more.

I mentioned that the Noise node is very expensive for Volume materials and should be avoided. You can see this yourself with ‘profilegpu’, compare the cost of volume material voxelization with and without a Noise node. Sometimes you can make your own simple and cheap noise by just combining a few sine / cos waves. But sometimes you really want a tiling volume texture containing noise values.
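For instance, here is a minimal sketch of that sine-combination trick as the body of a Custom node in a Volume material (HLSL). The input names WorldPos, Scale, and Time are assumptions you would wire up yourself, not a stock setup:

[CODE]
// Cheap pseudo-noise from a few nested sine waves, as a Custom node body.
// WorldPos, Scale, and Time are hypothetical node inputs.
float3 p = WorldPos * Scale;
float n = sin(p.x + 2.0 * sin(p.y + 1.7 * sin(p.z))); // nested sines break up axis alignment
n += 0.5 * sin(p.y * 2.3 + Time) * sin(p.z * 1.9);    // second, animated layer
return saturate(n * 0.25 + 0.5);                      // remap roughly into [0, 1]
[/CODE]

This won’t tile or match the Noise node’s look, but it voxelizes far more cheaply; ‘profilegpu’ will show the difference.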

There are tools for baking noise down to pseudo volume textures, which are much cheaper than the Noise node. They’re covered in this livestream: Realtime Simulation and Volume Modelling Plugin | Live Training | Unreal Engine - YouTube
The plugin containing the Material Functions to do the magic:
http://unreal-engine-public-share-cdn.unrealengine.com/ShaderBits-GDC-Pack.zip

Hi,

Thanks for being on the stream!

You mentioned this kind of fog lighting does not work well with moving lights like flashlights. What would you recommend for having reasonably accurate “beams” for this type of moving light? (Preferably something that can’t poke through walls; e.g. a simple translucent mesh to fake the effect isn’t quite enough when viewed from a third-person camera.)

Watched the livestream today. I didn’t think I wanted to use fog, and then there was that moment with the Ross car. “Epic Live Stream. A series of spontaneously creative fortunate events.”
Great information. Great stream.

The standard approach for rendering a single light’s inscattering is to render the backfaces of a cone around the light and ray march through the shadow map, adding inscattering for each unoccluded step. This is the approach I was going to use for volumetric fog before the current method (camera-aligned volume texture with reprojection) worked out. Unfortunately this requires code changes, because you can’t access the shadow map in the material editor out of the box. It also doesn’t integrate with other translucency very well - you have to sort per-object, so translucent hair that’s halfway through the flashlight beam will be either entirely in front of or entirely behind the flashlight inscattering. A sketch of the ray march follows below.
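To make that concrete, here is a hedged sketch of that per-light ray march in HLSL. ShadowDepthTexture, ShadowSampler, WorldToShadow, ConeEntryPoint, ConeDepth, NumSteps, and Phase are all illustrative assumptions; as noted, stock materials can’t access the shadow map, so this only works with engine changes:

[CODE]
// March along the view ray inside the light cone, adding inscattering at
// each step the shadow map reports as unoccluded. Names are illustrative.
float3 Inscattering = 0;
float3 RayDir   = normalize(WorldPosition - CameraPosition);
float  StepSize = ConeDepth / NumSteps;

for (int i = 0; i < NumSteps; ++i)
{
    float3 SamplePos = ConeEntryPoint + RayDir * StepSize * (i + 0.5);

    // Project the sample into shadow map space
    float4 ShadowPos = mul(float4(SamplePos, 1), WorldToShadow);
    ShadowPos.xyz /= ShadowPos.w;

    // 1 if the sample can see the light, 0 if it is occluded
    float Visible = ShadowDepthTexture.SampleCmpLevelZero(
        ShadowSampler, ShadowPos.xy, ShadowPos.z);

    Inscattering += Visible * LightColor * Phase * StepSize;
}
[/CODE]

Rendering the cone’s backfaces just bounds where this loop runs; the per-object sorting problem comes from compositing this accumulated result as a single translucent pass.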

[QUOTE=DanielW;718105]
There are tools for baking noise down to pseudo volume textures, which are much cheaper than the Noise node. They’re covered in this livestream: Realtime Simulation and Volume Modelling Plugin | Live Training | Unreal Engine - YouTube
[/QUOTE]

I saw that a while back, but mostly dismissed it because I don’t own a VR headset (and therefore can’t use the tool :frowning: ). Are there any plans to make a non-VR version of it? And if not, is there an alternative, maybe using a Maya sim, to get the volume texture somehow?

By the way, thanks for showing my composition using the volumetric fog feature in the Kite demo! :slight_smile:

The important part is baking an arbitrary 3D function (the Noise node output) into a pseudo volume texture, which can be sampled very efficiently in a Volume material; a sketch of that sampling is below. The painting is kinda tangential.
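For reference, sampling a pseudo volume texture (a 2D atlas storing the volume’s Z slices as tiles) looks roughly like the sketch below. The function name and parameters are my own assumptions, not the plugin’s actual Material Function signature:

[CODE]
// Sample a 2D texture holding NumTilesPerSide^2 Z-slices of a volume,
// blending between the two nearest slices. Hypothetical names throughout.
float PseudoVolumeSample(Texture2D Tex, SamplerState TexSampler,
                         float3 UVW, float NumTilesPerSide)
{
    float NumSlices = NumTilesPerSide * NumTilesPerSide;
    float Slice  = UVW.z * NumSlices - 0.5;
    float SliceA = clamp(floor(Slice), 0, NumSlices - 1);
    float SliceB = min(SliceA + 1, NumSlices - 1);

    // UV within a single tile, then each slice's tile offset in the atlas
    float2 TileUV  = UVW.xy / NumTilesPerSide;
    float2 OffsetA = float2(fmod(SliceA, NumTilesPerSide),
                            floor(SliceA / NumTilesPerSide)) / NumTilesPerSide;
    float2 OffsetB = float2(fmod(SliceB, NumTilesPerSide),
                            floor(SliceB / NumTilesPerSide)) / NumTilesPerSide;

    float A = Tex.SampleLevel(TexSampler, TileUV + OffsetA, 0).r;
    float B = Tex.SampleLevel(TexSampler, TileUV + OffsetB, 0).r;
    return lerp(A, B, saturate(Slice - SliceA)); // blend adjacent Z slices
}
[/CODE]

Two texture fetches and a lerp, versus dozens of instructions per voxel for the Noise node.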

Thanks! So basically this approach is not possible without a custom build of the engine? Out of curiosity, is there any intention to expose shadow maps to the material editor?

It requires code changes.

We don’t intend to expose shadow maps in the material editor at the moment - that would expose a lot of implementation details which are constantly changing. Every shadowing method requires separate handling - spot light shadowmaps are different than point light cubemaps, which are different than CSM, which are different than static shadowing, which are different than per-object shadowing, RTDF shadowing, capsule shadowing, etc etc.