Improving translucency and refraction, from an artist's perspective.

Hey, first off, I’d like to thank Epic Games for UE, and especially for the open-door policy on UE4. I can imagine the open-source nature will be a great help to people implementing and tweaking features. That, however, is not much help to me as an artist, and in places UE4 is bordering on being LESS helpful to a stand-alone artist than the previous version (UDK) was.


The first thing I wanted to do with the new UE4 was create a new and improved all-purpose water shader. I made HUGE strides in UDK building a vertex-painted water surface with approximate refraction and appropriate colour-wavelength absorption. So I am underwhelmed by the translucency options in UE4. Opaque surfaces have never looked better, and I mean that: UE4’s PBR is top-class and blows me away, but translucency seems almost forgotten. Now, I know translucency is one of those things that challenge fully deferred engines. I know it’s still a work in progress, and that certain features will always outweigh others when dev time versus results is weighed, but that’s why I created this thread! I have some ideas that MAY help in making future improvements to translucency. I’ll try to bullet-point the basic ideas; any coders, PLEASE feel free to shoot me down if I have my head in the clouds.

Premise: translucency is not physically accurate right now, and is more limited than it was in UDK.

http://www.blogtrw.com/wp-content/uploads/beautiful-buildings-city-crystal-ball-iron-sea-keane-Favim.com-50185.jpg


First off, refraction through a sphere should invert the image on both axes, as in the example photo: the rays converge and cross inside the sphere, flipping the background both vertically and horizontally. In the engine, the image is merely skewed.

PBR does wonders for surface shading, but it should be adopted for sub-surface shading as well. To most this seems impossible: almost ALL implementations of ‘refraction’ in game engines (or renderers in general) lean either on screen-space distortion (like the UDK) or on ray-tracing (such as Mental Ray offline rendering). Neither option is ideal; we’re all developers here, so I’ll spare the explanation of WHY that is, we all know the pros and cons. The current implementation in UE4 is better, but it is still a heavily local effect. In reality, when moving toward or away from a refractive object, the refracted image changes based on the angles at which the light reaches your eyes, not on your distance from the object.

The problem with the 100% screen-space method used in UE4 right now is that when an object takes up considerable screen space, the refraction weakens and loses its realism (or the illusion thereof). This is because (I’m guessing) the refraction is screen-space local, meaning that a given refraction value translates directly into a fixed number of pixels of screen-space offset. Thus, as you approach a refractive object, the ratio of pixels to surface area increases, but the distance the refraction translates those pixels does not.
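To make the pixel-offset problem concrete, here is a minimal toy sketch of the physically based alternative, Snell’s law, in plain C++ (my own illustration, not engine code). The refracted direction it produces depends only on the incident angle and the index-of-refraction ratio, never on how many pixels the surface happens to cover:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  Mul(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
static Vec3  Add(Vec3 a, Vec3 b)  { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

// Snell's-law refraction of a unit incident direction I about unit normal N.
// eta = n1 / n2 (e.g. roughly 1.0 / 1.33 going from air into water).
Vec3 Refract(Vec3 I, Vec3 N, float eta)
{
    float cosI  = -Dot(N, I);
    float sinT2 = eta * eta * (1.0f - cosI * cosI);
    if (sinT2 > 1.0f)                  // total internal reflection:
        return { 0.0f, 0.0f, 0.0f };   // no refracted ray exists
    float cosT = std::sqrt(1.0f - sinT2);
    return Add(Mul(I, eta), Mul(N, eta * cosI - cosT));
}
```

A fixed screen-space pixel offset, by contrast, stays the same while the surface grows on screen, which is exactly the weakening effect described above.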

I propose two solutions to this issue that would employ pipelines and features that already exist in the engine to provide close-enough approximations of real-time PBR refraction.

Idea 1: Use the existing reflection environments for refraction beyond screen-space.
The reflection environments used for PBR reflections and accurate specular could be used in a similar way, only sampled along the light vector dictated by the refractive index.


In this example, when a refractive object is close to the camera, it should be refracting a lot of space outside of the camera’s FOV. In the engine, this is quite obviously not the case. The limitations of screen space show quite vividly here, not to mention that the issue described above is also present: the refraction ‘intensity’ drops off as the object takes up more of the FOV. If the reflection environment could be blended in at this point, it could have a substantial effect on the perceived realism. Remember that at very steep angles of refraction, even relatively small objects can noticeably pull in off-screen content if PBR refraction is adhered to.
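As a rough sketch of Idea 1 (the helper names here are hypothetical stand-ins for whatever the engine uses internally, not the actual API): sample the same capture cubemap that PBR specular already uses, but along the refracted view direction, and blend it in wherever the screen-space sample runs off the frame:

```cpp
struct Vec3 { float x, y, z; };              // as in the earlier sketch
Vec3 Refract(Vec3 I, Vec3 N, float eta);     // Snell's-law sketch above

// Hypothetical stand-ins for engine internals -- these names are mine:
Vec3  SampleReflectionCapture(Vec3 dir);     // reflection-capture cubemap lookup
float ScreenEdgeFade(float u, float v);      // 0 well inside the frame, 1 outside
Vec3  Lerp(Vec3 a, Vec3 b, float t);

// Idea 1 in miniature: the cubemap provides the refracted scene wherever
// the screen-space result has no data to offer.
Vec3 RefractedColor(Vec3 viewDir, Vec3 normal, float eta,
                    Vec3 screenSpaceSample, float sampleU, float sampleV)
{
    Vec3 refractDir = Refract(viewDir, normal, eta);
    Vec3 envSample  = SampleReflectionCapture(refractDir);

    // Fade toward the capture as the refracted sample leaves the screen.
    float offScreen = ScreenEdgeFade(sampleU, sampleV);
    return Lerp(screenSpaceSample, envSample, offScreen);
}
```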

Idea 2: Use the existing math for screen-space reflections as a way to better approximate screen-space refraction.

This could greatly increase the power of refractive surfaces. By letting the refraction trace and draw objects using the same method that creates the stunning reflections, instead of simply stretching the rendered scene, we would have the ability to create even more accurate PBR translucency. Near-surface objects would stretch and distort, and would occlude parts of the scene that they otherwise would not with basic distortion. This level of detail in crystal-like refractive surfaces could set a new standard in real-time rendering. SSR is already heavy enough, I know, and adding these calculations as well would at least double the rendering time for a PBR translucent surface. But it could have positive side effects, such as translucency undergoing the same ‘roughness’ calculation that reflections do (at no additional cost), allowing for texture-controlled blurred refraction.
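A minimal sketch of Idea 2, assuming the usual SSR ingredients (a depth buffer and a world-to-screen projection); again, the helper names are hypothetical, not the engine’s:

```cpp
struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };
Vec3 Add(Vec3 a, Vec3 b);                    // helpers from the first sketch
Vec3 Mul(Vec3 v, float s);

// Hypothetical stand-ins for existing SSR plumbing:
bool  ProjectToScreen(Vec3 worldPos, Vec2& outUV, float& outDepth);
float SampleSceneDepth(Vec2 uv);             // scene depth at a screen position

// Idea 2 in miniature: march the refracted ray against the depth buffer
// the way SSR marches the reflected one, instead of applying a flat
// screen-space offset.
bool TraceRefraction(Vec3 surfacePos, Vec3 refractDir, Vec2& outHitUV)
{
    const int   kSteps    = 32;
    const float kStepSize = 0.05f;           // world units per step (tunable)

    Vec3 p = surfacePos;
    for (int i = 0; i < kSteps; ++i)
    {
        p = Add(p, Mul(refractDir, kStepSize));

        Vec2  uv;
        float rayDepth;
        if (!ProjectToScreen(p, uv, rayDepth))
            return false;                    // ray left the screen; this is
                                             // where Idea 1's cubemap helps

        if (rayDepth > SampleSceneDepth(uv))
        {
            outHitUV = uv;                   // ray passed behind geometry: hit
            return true;
        }
    }
    return false;                            // no intersection within budget
}
```

Roughness could then reuse whatever pre-filtering the SSR path already does for blurry reflections, which is where the texture-controlled blurred refraction would come from.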


**Lastly: Bring back the custom lighting model**

I don’t know if I’m missing something, or if the feature truly has been removed, but I can’t find a way to link nodes together to make a custom translucency lighting model. Bringing this back would be a noteworthy improvement, as nearly ALL of my most impressive UDK shaders were made with custom lighting models and were otherwise impossible. With PBR, we could have access to some amazing new nodes: the various deferred layers, SSR, the reflection environments, the list goes on. Like I said, I may be missing something, but it seems like custom material lighting models are nowhere to be found for artists like me.

Now, I’m aware that this is all coming from an artist with LITTLE experience in coding and none in the realm of HLSL, but I hope this at least encourages discussion.

This is a great post! Thank you. I think your idea of using reflection captures and allowing blurry refraction is really interesting. The easiest change would probably be to trace the depth buffer like you mention. That would still be screen space and wouldn’t support rough refraction, but it would be a ton more accurate. If there are multiple layers, the ray won’t be able to continue its path correctly; it would have to accumulate the total refraction direction in screen space and then do the trace, which isn’t accurate but would be better than what we have now. Using reflection captures to take it one step further, with roughness and off-screen refraction, is really cool.
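(For what it’s worth, one possible reading of that accumulate-then-trace compromise, sketched with the hypothetical helpers from the earlier examples; this is a guess at the approach, not a description of any actual implementation:)

```cpp
struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };
Vec3 Refract(Vec3 I, Vec3 N, float eta);                      // sketched above
bool TraceRefraction(Vec3 origin, Vec3 dir, Vec2& outHitUV);  // sketched above

// Bend the view ray once per translucent interface it crosses, without
// moving the ray origin between layers, then run a single depth-buffer
// trace along the final direction -- inaccurate in exactly the way the
// reply describes, but better than a flat offset.
bool TraceThroughLayers(Vec3 firstSurfacePos, Vec3 viewDir,
                        const Vec3* layerNormals, const float* layerEtas,
                        int numLayers, Vec2& outHitUV)
{
    Vec3 dir = viewDir;
    for (int i = 0; i < numLayers; ++i)
        dir = Refract(dir, layerNormals[i], layerEtas[i]);
    return TraceRefraction(firstSurfacePos, dir, outHitUV);
}
```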

Custom lighting control is an issue with deferred shading; it isn’t possible without support for forward shading, which is something we are working on slowly. Some form of forward shading is required to support high-quality lighting on translucency.