[Twitch] Subsurface Scattering and Ray Traced Soft Shadows Demos - Oct. 16, 2014

@ - Any word on a fix for the screenspace SSS shaders in VR? As you can imagine, good SSS is a critical feature for VR games/demos with human(oid) characters, since the plastic-like skin is a real immersion killer.

That’s a shame about DFAO - I am currently using LPV and RTDF shadows and it works/performs splendidly, but is missing that extra oomph without the DFAO. Any chance it might get revisited after more VR optimizations are put in (such as passing both eye transforms up)?

Edit: Also any ideas why it doesn’t function at all? Is it just turned off because of the performance implications, or is that actually a bug? Basically when in stereo mode DFAO just turns completely off. If it’s just disabled when stereo is enabled, that would help me in testing to see if it’s even feasible to use it, since I could at least use what’s there and not have to contend with bugs right off the bat. I wouldn’t mind fixing it if it is a bug, either, and supplying a pull request. Knowing if it is or not without having to dig through parts of the engine I’m not familiar with would be really helpful though. Thanks!

Thanks for answers!

Could you expand on that a bit? I mean, do we really need parity between the Forward and Deferred paths? We could have both, with forward being just a simplified pass used for special cases like translucency, or materials requiring a custom lighting model (like foliage).

I dunno if that wouldn’t break apart totally (very inconsistent lighting results).

That’s an interesting idea. There would be some problems in making thin geometry appear thicker in the distance field representation, due to self-shadowing. A related improvement would be to split up an object into multiple volume textures if it gets too large, since volume textures don’t scale up well.

Another approach is to analyze when thin important details will be lost and automatically increase the resolution.
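To make the resolution idea concrete, here is a minimal sketch of picking a per-axis SDF resolution from an object's bounds and flagging objects that would be candidates for splitting. The helper name, parameters, and clamping policy are made up for illustration; this is not the engine's actual heuristic.

```cpp
#include <cassert>
#include <algorithm>
#include <cmath>

// Illustrative only: pick a per-axis voxel resolution for an object's signed
// distance field, proportional to its bounds at a target voxel density and
// clamped to what a single volume texture handles well. Objects that would
// need more than maxDim voxels on any axis are flagged for splitting into
// several volume textures.
struct SdfRes { int x, y, z; bool needsSplit; };

SdfRes ChooseSdfResolution(float sizeX, float sizeY, float sizeZ,
                           float voxelsPerUnit, int maxDim)
{
    auto axis = [&](float s) { return std::max(1, int(std::ceil(s * voxelsPerUnit))); };
    int rx = axis(sizeX), ry = axis(sizeY), rz = axis(sizeZ);
    bool split = rx > maxDim || ry > maxDim || rz > maxDim;
    return { std::min(rx, maxDim), std::min(ry, maxDim), std::min(rz, maxDim), split };
}
```

The same shape of heuristic could drive the "analyze when thin details will be lost" idea by bumping `voxelsPerUnit` for objects whose thinnest feature falls below a voxel.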

This is what forward plus translucency would hopefully solve.

I will let answer this

The first reason is that I just couldn’t handle any additional complexity when making the DFAO, it’s actually a small miracle it ended up working out. Take a look at the code to see what I mean =) There are a lot of passes and unique shaders doing complicated things chained together.

I did have to make some fixes which are already in for supporting DFAO on a view that doesn’t fill the render target, including with an x and y offset. So that should help for VR.

If you make it work with multiple views, we would be happy to integrate it. It’s not intentionally turned off for VR, it just hasn’t been implemented yet. I expect there will be a lot of problems though; some of the passes don’t clamp properly to the view, which causes artifacts you can see in editor viewports (history reprojection especially).

We don’t need full parity between the forward+ and deferred paths to have good lit translucency, but a lot of the features that make our opaque rendering compelling can’t be easily done in forward+, or at least not efficiently (SSR, blended reflection captures, per-object shadows).

Any plans on Ray-Traced Distance Field Soft Shadows from Skeletalmeshes in future?

Thanks. That is exactly my point.
I mean, most surfaces can be done using the deferred path, and where forward+ is concerned, we probably don’t need those fancy features (like reflections on trees).
I personally see the forward path being used for very specialized shaders, which can’t really be done nicely in the deferred path, and won’t really benefit from deferred’s advantages.
I mean, I could live without them personally (;.

Unless you guys are looking forward (no pun intended), to drop deferred in favour of forward+.

What are per-object shadows? Is it something along the lines of screen-space self-shadowing?

Thanks :slight_smile:

, thanks for the detailed reply. It seems like it’d be a lot of work. I’ll try and look into it, at least until I get stumped :smiley:

I have one more question.

There is new card Open World in Trello.

Are there any plans to provide precomputed occlusion that does not depend on lightmaps? For example, using spherical harmonics + probes. It would be nice to have, since high accuracy is not needed, but large-scale AO would be beneficial.

Using lightmaps for anything bigger than 1km across is not really an option. They just start to take way too much space.

I think we will solve that in a different way, with some analytical shapes on the skeletal meshes like ellipsoids. Otherwise we would have to regenerate SDFs every frame, which is very slow. For now though, note that you can enable ‘Use Inset Shadow’ on Movable components and they will use a per-object shadow to composite into the ray traced shadows - so skeletal meshes can cast shadows just fine, only they are not area shadows.
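As a rough illustration of the analytic-occluder idea, here is a hedged sketch of a soft-shadow term for a single sphere occluder. The function name, the falloff, and the `softness` parameter are all illustrative assumptions, not engine code; an ellipsoid version would work similarly after transforming into the ellipsoid's local space.

```cpp
#include <cassert>
#include <cmath>
#include <algorithm>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Illustrative only: approximate soft-shadow term for one sphere occluder.
// p: shaded point, lightDir: unit direction toward the light,
// center/radius: the sphere approximating part of a skeletal mesh.
// Returns 1 = fully lit, 0 = fully shadowed; 'softness' widens the penumbra
// with distance, mimicking an area-light shadow.
float SphereOccluderShadow(Vec3 p, Vec3 lightDir, Vec3 center, float radius, float softness)
{
    Vec3 toCenter = sub(center, p);
    float along = dot(toCenter, lightDir);      // distance along the shadow ray
    if (along <= 0.0f)
        return 1.0f;                            // occluder is behind the point
    float perpSq = dot(toCenter, toCenter) - along * along;
    float perp = std::sqrt(std::max(perpSq, 0.0f));
    // Fade as the ray grazes the sphere; farther occluders cast softer shadows.
    float t = (perp - radius) / (softness * along + radius);
    return std::clamp(t, 0.0f, 1.0f);
}
```

A shader would evaluate this per light against a handful of such shapes and take the minimum, which is far cheaper than regenerating an SDF per frame.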

This is actually what we had in UE3 with the D3D11 path that the Samaritan demo was made with, things like skin would use forward shading while the rest of the world used deferred. It was a nightmare of complexity, you had to implement every feature twice. There were constant bugs where the paths didn’t match. And back then we had a fraction of the features that UE4 has.

The unofficial plan is to make a good enough Forward+ path for translucency, and then if people want to use that for opaque too we’ll make a hidden enable for that. Use cases like VR could benefit where the constant overhead of writing out GBuffers is too high, or for games that want more material flexibility like custom lighting models per-material.

Per-object shadows is the Unreal terminology for shadow mapping applied to a single object. It’s used to support movable components with stationary lights, inset shadows, and translucent self-shadowing. The reason it’s hard to do efficiently with Forward+ is that there are an arbitrary number of them per light.

There’s already a prototype ‘r.DiffuseFromCaptures’ that provides precomputed GI without lightmaps, but it has a lot of limitations + leaking that you might expect from sparse probes. It could be extended easily to sky occlusion.

Agreed

@&: I’d be curious to know which high-end rendering features you look forward to implementing most personally?

Thanks.

Yeah, I tried using r.DiffuseFromCaptures, but it either didn’t work, didn’t work the way I expected, or worked and I couldn’t tell any difference.
I retract that. I just tried with the latest build and I can see the difference very clearly.
I can even say I see it way too clearly, it just boomed with light (;.

Static AO from probes would though! Could the system used for generating lightcache probes be leveraged for generating probes that capture and store SH, without using lightmaps?

Thanks for all the answers ! You have an amazing amount of in-depth knowledge on the engine. I can certainly see why Epic hired you in the first place!

Repost as this is much more relevant in this thread for potential answering:

Source: Unreal Engine 4.5 Preview! - Announcements - Epic Developer Community Forums

NOTE: Diff multi-line commits was answered and it turns out it was introduced in 4.4

When will this stream be available on YouTube? Missed it…

I typically try and get this uploaded within a day or so. In the meantime, feel free to hop over to twitch.tv/unrealengine. You should be able to watch there.

A direct link: Twitch :slight_smile:

Thanks, for some reason I couldn’t open recent broadcasts (I thought it was locked).

Neat! Hope someone will make a game or scene in this style…

Thank you for the direct link, I missed this event :frowning: and I don’t know why I didn’t find this video.
Thank you so much, now it works.

I will try to answer what falls into my area.
As said is the video I am working on forward rendering as a side project. We simply have too many tasks and it has low priority but because I personally believe it’s important I work on it when I can make the time. At the moment I have a CPU culling data structure setup, passed to the GPU there it traverses through a small subset of lights to compute lighting. As a test I implemented anisotropic lighting and it works quite well.
A few more this need to be done before I can check it in: Support more than 32 lights, optimize data structure upload or compute it on GPU, expose shading code (currently it’s HLSL in common.usf which can cause a recompile of all shaders whenever you change anything), support lit for translucency
Ideally it would also support reflection environments and shadows but that might not be the case for the first version. We currently reuse the shadowmaps (light space depth) and shadowmask (screen space mask) a few times during the frame but for forward lighting we need to keep them around. You see it’s a quite a lot of complications - this is what was mentioning.
If possible the system should even reduce the light count where possible (hair lit by 50 lights might look like hair with only a first few bright ones) - to save performance.
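For a rough picture of the CPU-side culling step described above, here is a small sketch of binning lights into screen tiles with a fixed per-tile cap. All names, the 16-pixel tile size, and the use of pre-projected screen rectangles are assumptions for illustration, not the actual engine code; it also shows where a 32-light limit like the one mentioned above comes from.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>
#include <algorithm>

// Illustrative only: a light reduced to a screen-space bounding rectangle
// (already projected on the CPU).
struct LightRect { int minX, minY, maxX, maxY; };

constexpr int kTileSize = 16;          // pixels per tile edge (assumed)
constexpr int kMaxLightsPerTile = 32;  // the fixed per-tile cap

// Bin lights into per-tile index lists: kMaxLightsPerTile slots per tile in
// tileLights, with the number of used slots per tile in counts. The GPU pass
// would then loop over counts[tile] lights per pixel instead of all lights.
void BinLights(const std::vector<LightRect>& lights,
               int width, int height,
               std::vector<uint16_t>& tileLights,
               std::vector<uint8_t>& counts)
{
    int tilesX = (width + kTileSize - 1) / kTileSize;
    int tilesY = (height + kTileSize - 1) / kTileSize;
    tileLights.assign(size_t(tilesX) * tilesY * kMaxLightsPerTile, 0);
    counts.assign(size_t(tilesX) * tilesY, 0);

    for (size_t i = 0; i < lights.size(); ++i) {
        const LightRect& r = lights[i];
        int tx0 = std::max(r.minX / kTileSize, 0);
        int ty0 = std::max(r.minY / kTileSize, 0);
        int tx1 = std::min(r.maxX / kTileSize, tilesX - 1);
        int ty1 = std::min(r.maxY / kTileSize, tilesY - 1);
        for (int ty = ty0; ty <= ty1; ++ty)
            for (int tx = tx0; tx <= tx1; ++tx) {
                size_t tile = size_t(ty) * tilesX + tx;
                if (counts[tile] < kMaxLightsPerTile)  // extra lights are simply dropped
                    tileLights[tile * kMaxLightsPerTile + counts[tile]++] = uint16_t(i);
            }
    }
}
```

Sorting lights by brightness before binning would make the cap behave like the "keep only the first few bright ones" reduction suggested for hair.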

We work on optimizations all the time - I just saved quite a bit on SubsurfaceProfile. Next I will optimize more in post processing. Recently Gil made good improvements on the multithreading side saving CPU performance.
optimized his distance field code - without that it wouldn’t perform. Performance is a wide field and we have a lot of subsystems. If you isolate your issue, you are likely to improve things by changing the content to be more efficient.

Cached shadow maps are possible, but it’s often not a good solution. It helps the best case (no camera movement) but hurts the worst case (we need to render more because a quick rotation can reveal an uncovered area) and causes hitches (fast distant shadow casters are not rendered each frame). For VR we don’t want to drop a frame, and there it would just cause more problems. Ideally we’d have the options for you to decide, but we can still optimize the shadow rendering in other areas.

VXGI is like a faster SVOGI (the method we had and removed for performance reasons). It’s faster because it’s less adaptive (using larger volume texture blocks instead of a tree of many 3x3x3 blocks).
SVOGI had to touch a lot of areas, and I would expect VXGI to be similarly invasive to make fast.
It might end up being similar to LPV.
It might not run as well on non-NVIDIA hardware - but I don’t know about that.
I personally don’t know how we want to proceed there.

Sorry no progress on cascades.

It differs:

But it should be much easier to set up. It has a bit less power in what it can express, but if it does human skin well and is performant, we’ll take that.
We could expose a few more properties, but GBuffer space is limited and we also want to compress it more for better performance.
I guess we’ll extend it once we see strong demand.