[Twitch] Subsurface Scattering and Ray Traced Soft Shadows Demos - Oct. 16, 2014

Senior Graphics Architect Martin Mittring and Senior Graphics Programmer Daniel Wright join us to discuss two new rendering features in Version 4.5 - Screen Space Subsurface Scattering, and Ray-Traced Distance Field Soft Shadows. As mentioned in the 4.5 preview notes, ray-traced soft shadows greatly enhance lighting visuals and the overall lighting workflow, and the new subsurface profile shading model enables more realistic skin shading and other cool effects. Come hang out with us!

Thursday, Oct. 16th @ 2PM ET - Countdown


Chance Ivey - Community Manager
Martin Mittring - Sr. Graphics Architect
Daniel Wright - Sr. Graphics Programmer

Let us have your questions below!


EDIT: YouTube archive is up here.

I have a few questions:

  1. Forward Rendering - are there still plans to provide a forward rendering path for materials, or to migrate to it altogether? The current deferred path imposes a few severe limitations (for example, it is hard to create custom lighting models with the material editor alone).

  2. Foliage rendering - any plans for specialized foliage shaders? For grass (very simple and very fast), and for trees (more complex, but still cheaper than normal shaders)?

  3. I don't know if you can talk about it, but how is DirectX 12 integration into the engine progressing?

  4. Any plans for implementing SSDO with GI? I know screen-space GI is not really good, but it would be a nice complementary solution to the currently existing real-time GI.

  5. Optimization in real-time rendering? I won't point fingers, because we all know which tech is leading in that aspect.
    My question is: will you focus on fully real-time rendering after finishing the tasks at hand? Things like cached shadow maps (farther cascades updated less frequently).

And of course the immortal question, which I guess someone would ask anyway.

How is Dynamic GI progressing? :smiley:
Two things that I’m personally curious about.

How invasive is VXGI integration? Could it be provided as an easy-to-maintain feature like LPV is, or does it need much deeper integration into the engine?
This assumes NVIDIA is willing to give the integration away to all UE4 subscribers and that you can talk about it. If not, please just say no (;

Any progress on adding cascades to LPV ?

iniside pretty much summed it up.

Echoing iniside’s post and adding another, perhaps noobish question - how does the Ray-Traced Distance Field Soft Shadows system interact with LPVs?

When will you have tutorials on Dynamic GI?

Cool! I’ll definitely tune in.

I’ll just leave this here since it’s highly relevant to what will be discussed: Distance Fields in Unreal Engine 4 (Distance Field Ambient Occlusion & Ray Traced Distance Field Soft Shadows)

Neat! Can’t wait to check out the shadows!

Please include a brief overview of how SSS maps in UE4 differ from UDK inscatter/absorb maps.
I only just got the hang of those maps… don't make me go crazy a 2nd time ^^,

The reason I ask is because I've only seen one SSS map slot (via Google Images): no inscatter and no absorb.

Examples with purely SSS map breakdowns would be beneficial to more than one person, I'm sure.

[Question] Can you already share specific details on improved hair shading? Maybe which technique you plan to integrate.

Raytraced Soft Shadows work in VR mode, but DFAO doesn’t. Any ETA for a fix on that?

Edit: Also any ETA for a fix on the screenspace SSS effects in VR mode?

Oh nooo, where is Dana? :stuck_out_tongue: (joke)

Very cool, I'll be there. No questions at the moment.

What about blended root motion?

I noticed a new card was added yesterday, vote it up guys!: Trello

Which leads me to my question. What kind of dynamic GI solution are you looking at? Any details?

Could raytracing into a distance field be used for other effects in the future, such as reflective or refractive caustics? For example, could a car in a racing game create approximated reflections from the sun onto the road around it?

Are there limitations on the kinds of surfaces which can receive ray-traced area shadows? Can they cast onto skeletal meshes? What about translucent surfaces?

Are there any plans to support area shadows from objects which can’t currently be represented by a distance field? Could we, for example, create a small number of rigid geometric proxies linked to important bones which don’t render and only exist to create distance fields?

Do distance field area shadows have the same delayed, multi-frame accumulation as DFAO or are they always fresh with each frame?

Does the new subsurface scattering material have any way to simulate transmission from back-faces? If not, do you have any plans to support this in the future or would this require a forward rendering path?

Is there any active development of forward rendering? The road map has labeled it as “wishlist”; do you think this will ever become a priority?
Is there a chance we could see support for specific forward-rendered features such as anisotropic lighting and translucent specularity in the shorter term, with a return to all-purpose features such as the “light vector” node in the more distant future?
Is the temporal-AA-based anisotropic specular from Brian Karis’s SIGGRAPH presentation being considered as a viable solution for anisotropic lighting?

So many questions =)

Yes, for lit translucency. Some prototypes have been done on this but the big challenges have not been tackled yet. We are leveraging the benefits of deferred very strongly with 15+ features that rely on a GBuffer so it is a very difficult task to achieve parity in a forward pass.

Yep, we know we need to improve this.

We are optimizing all the time, including an almost 2x speedup of the rendering thread on Xbox One due to parallel rendering efforts. That will make its way to other platforms. Many optimizations have focused on VR and the extreme framerates and resolutions it requires.

It’s important to us, but has not gained traction yet.

Not at all at the moment. RTDF shadows provide direct shadowing, LPVs provide diffuse GI. Technically the distance fields could be cone traced to provide occlusion for LPV propagation, but it would be pointless because of the diffusion error when storing lighting in low resolution volume textures like LPV does.

We’ll try to show some stuff in action; it takes quite some time to set up these things, though.

Performance information is in the doc.

I don’t think we will spend time on that, dynamic lighting methods are not a good fit for VR where you need super high resolution + 90fps.

Yes, absolutely. The SDF representation of the scene allows us to trace a cone from anywhere to anywhere. The only limitation is that you can only get occlusion, not lighting. This means it can be used anywhere that you can separate the visibility and lighting terms of the rendering equation. That’s why it is used for direct shadowing and AO - those are occlusion only. For general reflections, a cone trace through the SDF gives soft occlusion, but you wouldn’t know what color your cone picked up. There are some possibilities here; it’s an area of research.
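To picture the occlusion-only trace described above, here is a minimal, generic sketch of the widely used distance-field soft-shadow march. Everything in it - the single-sphere scene, the `cone_k` penumbra constant, the function names - is invented for illustration; it is not UE4's actual implementation.

```python
import math

def sphere_sdf(p, center=(0.0, 2.0, 0.0), radius=1.0):
    # Hypothetical scene: signed distance from point p to one sphere.
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return math.sqrt(dx * dx + dy * dy + dz * dz) - radius

def soft_shadow(origin, direction, cone_k=8.0, t_max=20.0):
    """March a shadow ray through the distance field. The closest
    approach relative to the distance traveled approximates the cone's
    angular coverage, yielding an occlusion factor in [0, 1]
    (1 = fully lit, 0 = fully shadowed)."""
    shadow = 1.0
    t = 0.02  # small start offset to avoid self-intersection
    while t < t_max:
        p = tuple(origin[i] + direction[i] * t for i in range(3))
        d = sphere_sdf(p)
        if d < 1e-4:
            return 0.0  # the ray hit the occluder: full shadow
        # A near miss early along the ray darkens the penumbra more.
        shadow = min(shadow, cone_k * d / t)
        t += d  # sphere tracing: safe to step by the distance value
    return shadow
```

A ray aimed straight at the sphere returns 0.0, one that passes far from it returns 1.0, and grazing rays give intermediate values that form the soft penumbra. Note that, exactly as the answer says, the trace only reports how occluded the cone is - it cannot tell you what color the cone picked up.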

Any opaque pixel can receive them. In the future we could make translucent surfaces receive as well through the translucency lighting volume, but that’s not implemented yet.

That’s something we are very interested in and would have a lot of uses, for example area shadows from dynamic objects with baked sky shadowing.

They are sampled adaptively in both space and time, so yes, there is a delay as they converge. There are console variables to control this (all starting with *).
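A generic way to picture that multi-frame convergence is an exponential moving average that folds a fraction of each new frame's result into the stored history. This is only a hedged sketch of the general idea; the `alpha` blend weight and the names are made up, not the engine's actual accumulation scheme.

```python
def accumulate(history, current, alpha=0.1):
    # Each frame blends a fraction of the new sample into the running
    # history, so the stored value converges over several frames.
    return history + alpha * (current - history)

# A shadow term converging from fully lit (1.0) toward occluded (0.0):
value = 1.0
for frame in range(30):
    value = accumulate(value, 0.0)
# After ~30 frames the value has nearly converged to 0.
```

This is why the shadows take a few frames to settle after a light or occluder moves: each frame only contributes a fraction of the final answer, trading latency for stability and per-frame cost.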

Thanks for the questions, guys!

Are there any plans to improve the mesh distance field generation for thin geometry / small features in order to avoid holes? I’m well aware that you can increase the resolution, but in many cases it would be sufficient to approximate thicker geometry.
Maybe add an option to let the user pick an additional, simplified mesh to generate the distance field for instead?

Thanks for the responses.

This is getting off topic, but are we likely to see any options for more detailed shadowing on translucency? The translucency lighting volume is okay for big irregular volumes like hanging smoke, but not detailed enough for more homogeneous or opaque surfaces like sheer cloth, hair, thickly billowing smoke, or murky water. I don’t think you’d be able to see the effect of area shadows using the lighting volume in most cases.