Translucent Materials behind Other Translucent Materials Ignore Refraction

Hey there,

I’ve got the problem as detailed in this video:

In words, the problem is the following:

  • refraction works through translucent materials until another material with translucency and/or refraction enabled is seen through the first translucent object
  • for example: the ocean in the distance appears to render on top of where you would expect the refracted image of the objects below the hill to be. If you pitch the camera down so you don’t see the ocean, the effect looks as expected
  • another example: given the refraction of a translucent object, we’d expect other translucent objects to be rendered “bent away” from their actual position in world space (since that’s what refraction is). However, as we can see when I look through the crystal tree, other objects using the same Material “cut through” the refraction and are shown as if the Index of Refraction were 1.00

Things that do work after applying fixes / work-arounds (see below):

  • refraction looks mostly correct (some artefacting at oblique angles, but that can probably be improved further)
  • translucent materials when seen through other translucent materials are still translucent
  • objects get doubly refracted when seen through two refracting surfaces (even though the second surface isn’t where you’d expect it, given the first object in your way)

Things I’ve already tried, and work-arounds I’ve had to apply:

  • Originally I was using Substrate, but refraction still seems completely broken in that system: I was seeing things rendered twice through the translucent materials, and the duplicate image wasn’t even properly refracted
  • So this is using the UE4 Materials instead; or rather, a Substrate UE4 Unlit Shading conversion node that feeds into the Front Material (see the image of the Material setup, below)
  • Tried various Lighting Mode and Translucency Pass settings in the Material settings; tried Lumen, no Lumen, and no Global Illumination whatsoever; tried with and without Hardware Ray Tracing; set the Post Process Volume to Translucency Type: Raster (see also: UE5 severe problems when rendering translucent and reflective materials simultaneously - #20 by Durghan)
  • Tried multiple indices of refraction (close to glass, water, ice; the refraction in this material is closest to amethyst, which is what I’m trying to emulate here)

Any insights would be greatly appreciated!!

P.S.
I will also describe the Material setup in words as I’ve seen images on this forum break over time during migrations to different forum software; so for future Internet peoples…:

  • Substrate Material
  • Select root node
    • Blend Mode to TranslucentColoredTransmittance
    • Lighting Mode to Surface ForwardShading
    • Translucency Pass: After DOF
    • Refraction Method: Index of Refraction
  • DistanceFieldApproxAO node with default settings (BaseDistance=15.0; Radius=150.0; Num Steps=1; Step Scale=3.0) plugged into root node’s Ambient Occlusion
  • ScalarParameter IndexOfRefraction with default value 1.545 (IoR of amethyst crystal) plugged into root node’s Refraction (Index of Refraction)
  • Substrate UE4 Unlit Shading (UE4US) node plugged into root node’s Front Material
    • using Material Function: SMF_UE4Unlit
  • 3VectorParameter plugged into UE4US node’s Color (V3)
    • value: 0.4; 0.5; 1.0; 1.0
  • ScalarParameter CoverageWeight with value 0.6 into Opacity (S) of UE4US node

Here’s another great example of the problem: based on the “true perspective”, we would expect the crystal Quinn, who’s standing next to the white box, to appear in a similar position in the warped perspective of the cone, cylinder, and sphere. But she doesn’t; instead, she only shows up if we look straight through one of those objects, appearing where she would be if our lens had an index of refraction of 1.00.

Compare to what it looks like when I use Quinn’s default materials, above. She shows up in the refracted images within the crystal objects where we would expect her.

My best guess is that this has something to do with render passes, the depth buffer, and other graphics-pipeline details, which I only have a very limited understanding of.

I’ll update you if I find out any work-arounds (or of course: the fix!!).

P.S.
Incidentally, I notice another problem in this image: in the sphere, we can see a near clipping plane or something similar “cut into” the white box inside its refracted reflection of the scene. Not sure about that one just yet.

Unreal Engine’s transparency system has undergone significant changes since I last familiarized myself with it, so my knowledge is likely already outdated, but it definitely makes sense that you are having issues.

Unreal Engine primarily resolves refraction in screen space, so it would make sense that multiple layers cause issues: it only performs the complex lighting calculations for the first layer of refraction to begin with. I know that Lumen does support multiple bounces of correct reflection and refraction (at considerable expense), but I believe that only applies to reflections and not the GBuffer.

As for why the refractions are cut off: since the refraction method is screen-space, as soon as it has to gather data from outside the screen, it simply doesn’t have the depth buffer information to trace into. There’s no real fix for that, because it’s a product of the way refraction is calculated to begin with. I’d suggest using RT refraction, but it simply doesn’t work correctly anymore, alas.
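
To make this concrete, here’s a rough CPU-side sketch of what single-layer screen-space refraction boils down to (C++ with GLM for readability; this is not Unreal’s actual shader, and the buffer accessors and offset scale are made-up stand-ins):

```cpp
#include <glm/glm.hpp>

struct SceneBuffers {
    // Stand-ins for the scene color/depth targets. Crucially, these are
    // captured before (or without) the other translucent surfaces.
    glm::vec3 sampleColor(glm::vec2 uv) const { return glm::vec3(0.5f); } // stub
    float     sampleDepth(glm::vec2 uv) const { return 1.0f; }            // stub
};

glm::vec3 screenSpaceRefraction(const SceneBuffers& scene,
                                glm::vec2 pixelUV,   // this pixel's screen UV
                                glm::vec3 viewDir,   // unit, camera -> surface
                                glm::vec3 normal,    // unit surface normal
                                float ior,           // e.g. 1.545 for amethyst
                                float offsetScale)   // tuned screen-space scale
{
    // Bend the view ray once, at the first translucent surface only.
    glm::vec3 refracted = glm::refract(viewDir, normal, 1.0f / ior);

    // Project the bent direction to a 2D offset and re-sample the scene
    // color. Any translucent surface behind this one simply isn't in that
    // buffer, so it can never appear refracted.
    glm::vec2 uv = pixelUV + glm::vec2(refracted.x, refracted.y) * offsetScale;

    // Off-screen samples have no color/depth data to trace into; clamping
    // (or fading) is the usual band-aid, and the source of edge artifacts.
    uv = glm::clamp(uv, glm::vec2(0.0f), glm::vec2(1.0f));
    return scene.sampleColor(uv);
}
```

The key point is the single scene-color sample: there is exactly one buffer to bend into, so a second refractive layer has nothing to read from, and off-screen offsets clamp into exactly the artifacts you’re describing.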


Yep, all of that makes sense, thank you! It makes me wonder if this is something we’d have to try and write a custom solution for, or something we’ll have to give up on and design around.

I’ll keep messing with it (and shaders in general) to see if I can get the look and feel that I want without causing immersion-breaking effects like this.


true refractions may not work any time soon. the screenspace method has this “issue” that you would have to render back to front. so the farthest object refracts correctly, then the nearer objects refract the already-refracted look of what’s behind them. basically backbuffer slices/layers. this is expensive in terms of fillrate because you have to overdraw. and it doesn’t help that glass is already rendered late in the pipeline, after the opaque objects. hence why the player model is already in the backbuffer. no real way to fix that other than, well… drawing back to front.
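
conceptually the ordering itself is trivial, something like this (made-up types, not engine code; the real cost is the scene-color resolve between layers, not the sort):

```cpp
#include <algorithm>
#include <vector>
#include <glm/glm.hpp>

struct TranslucentDraw {
    glm::vec3 worldCenter; // bounds center used as the sort key
    // ... mesh, material, etc.
};

// farthest-first: each refracting surface then samples a backbuffer that
// already contains the (already refracted) surfaces behind it. the price
// is resolving the scene color between layers, i.e. fillrate/overdraw.
void sortBackToFront(std::vector<TranslucentDraw>& draws, glm::vec3 cameraPos)
{
    auto dist2 = [&](const glm::vec3& p) {
        glm::vec3 d = p - cameraPos;
        return glm::dot(d, d);
    };
    std::sort(draws.begin(), draws.end(),
              [&](const TranslucentDraw& a, const TranslucentDraw& b) {
                  return dist2(a.worldCenter) > dist2(b.worldCenter);
              });
}
```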


I’d have to believe there are workarounds, but my first question would be what your art direction/scene targets are, and what your budget for writing custom systems is. Are your requirements for a fully dynamic scene/lighting? Are you able to cache lighting? Can you make non-hero refractive effects cheaper?

The long tail problem of recursive reflection/refraction can be a serious performance drain, but it’s probably possible to make optimizations based on your content. All depends on what you’re willing to sacrifice.


@jblackwell @glitchered thank you for your posts! These are great insights that give me some serious “stuff” to think about.

The long and short of it is: I will be wanting crystal materials like this in the game, and they can take up a large amount of screen space. Given the worldbuilding of the game’s setting, there are simply moments where these absolutely silly-large crystals dominate the view. That’s the vision.

I was messing around with one such crystal and noticing weird ways that the light and my own player model reflected and refracted through it, which was the reason for this post.


From a narrative, “feel” and artistic intent point of view, I absolutely cannot compromise on having these massive crystals in the scene. From a “playability” and player experience point of view (especially on, say, older hardware), I’d have to cut back on my expectations.

Writing custom solutions like back-to-front rendering, or the other approaches you proposed, is probably perfectly doable in the couple of scenes or so where we’ll want these stupidly-large crystals, as long as we constrain the rest of the scene so there’s not too much going on and give the crystals all the “graphics budget” I’d want them to have, emotionally at least.

This will be a fun challenge to tackle with some engineers. Thank you for your thoughts!


The most common way to handle refraction besides screen space is cubemapping.

Here’s an example I made a while back. The biggest drawback to this method is that it is very expensive to update in real time. Static cubemapped refraction is very cheap, though: far cheaper than screen-space refraction, as it doesn’t rely on translucency at all. Because of this, the technique has been very common for ages. While it is a very old technique at this point, it recently had a surge of popularity due to being used in the liquid/bottle shading in Half Life Alyx.
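
The core of the technique is tiny. A minimal sketch, CPU-side with GLM for readability (the cubemap type and its lookup are stand-ins for a static, prefiltered capture):

```cpp
#include <glm/glm.hpp>

struct EnvironmentCubemap {
    // Stand-in for a static, prefiltered cubemap capture.
    glm::vec3 sample(glm::vec3 dir) const { return dir * 0.5f + 0.5f; } // stub
};

glm::vec3 cubemapRefraction(const EnvironmentCubemap& env,
                            glm::vec3 viewDir, // unit, camera -> surface
                            glm::vec3 normal,  // unit surface normal
                            float ior)
{
    // Bend the view vector once and look it up in the environment map.
    // No scene-color read, no translucency pass: the mesh can draw opaque.
    glm::vec3 dir = glm::refract(viewDir, normal, 1.0f / ior);

    // glm::refract returns a zero vector on total internal reflection,
    // so fall back to the reflected direction in that case.
    if (glm::dot(dir, dir) < 1e-6f)
        dir = glm::reflect(viewDir, normal);

    return env.sample(dir);
}
```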

What artists realized is that while people expect to see refraction, they don’t actually know what physically accurate refraction looks like. In other words, avoiding artifacts is more important than actually having physically correct bending.


If you can find any way to get normal map variation into your object’s materials, cubemap refractions will work very well. Like @BananableOffense said, most people don’t correctly intuit accurate refraction, just an approximation of the scene around them, which cubemaps nail if there’s even a little bit obscuring where the refraction originates. Also, the per-pixel cost is dirt cheap, so you could have a ton of these at once. I wonder if the custom LUT refraction approach Outer Wilds used might also be worth considering.

cubemaps are kinda “expensive”. i remember back then they used 3 memory samples to fetch a cube pixel. add a random value like a normal map vector and it will add a coordinate dependency. not as streamlined as you think.

i mean… gpus are fast now, but it’s still a bit of trouble down on the metal. cache thrashing. it could add up.

Nah sure it isn’t free, but I mean, this is how Nintendo made metal Mario on the N64. I’d argue the majority of glass in games uses opaque materials with cubemapping for fake transparency, or even more expensive tricks like parallax environment mapping. All of these techniques are still pretty much universally cheaper than translucency while integrating into the lighting much better.

UE5 transparency is pretty compromised unless you enable tons of extra cost features like high quality translucent reflections and order independent transparency.

I’m actually surprised when I see a true transparent shader in a game for glass instead of opaque fakery. Usually only because it is relevant for gameplay (such as windows you can shoot through).

God of War Ragnarok: Bottles are using fake transparency with cubemaps.
Half Life Alyx: cubemaps.
Dying Light 2: Despite being an RT heavy title, car and bus windows are opaque fake transparency with parallax. Even the glass on light fixtures is fake parallax offset shader tricks.
Almost every non-breakable window in every game ever: opaque, dirty, and maybe a bit of the parallax trickery that has been common since the early 2010s. BioShock Inf., CS2, Spiderman…
Last of Us: most water and glass are cubemaps except breakable windows.
I could go on.

I think something noteworthy is that all of these games received significant praise for their visuals, and those who noticed the tricks often felt impressed rather than deceived.

But I do look forward to the day pathtracing is the norm and we can leave all this nonsense behind for most cases.


Well-implemented cubemaps have been the bread and butter of excellent environment lighting for decades, and they’re remarkably flexible to boot. To your point, getting actual transparency out of UE is almost unaffordable in many cases: the forward-shaded direct lighting alone is pricey, never mind the sorting and depth peeling needed to allow for RT effects.

You’re making me think: is there any way you could fake a transparent effect in UE by having an opaque object that derives its reflection from Lumen traces, while the ‘refraction’ is just the Lumen probe lighting with some normal perturbation and blurring? It would draw as opaque, get accurate front-facing reflections, and still have a possibly usable simulation of refraction. I have no idea how you’d go about implementing that, however.

Quite a while ago a dev created a branch that exposed standard reflection probes to the material editor, which allowed them to be sampled for fake refraction. I imagine it’s possible to do the same thing with Lumen’s probes. They might be too blurry to do much with, but it would be fine for some cases.

Pretty sure Half Life Alyx also used parallax corrected cubemaps in particular. It seemed like it was using some kind of primitive representation of the rooms, not a simple, single-box intersection test like most implementations.

It should be easy to parallax correct cubemaps in UE5 since you can just trace the existing distance fields to find the intersection point.
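
For reference, the “simple, single box intersection” version is just a slab test against the probe’s proxy volume. A sketch (GLM math, world space, assuming the shaded point is inside the proxy box; a distance-field trace would replace the box intersection to get the scene-shaped variant):

```cpp
#include <glm/glm.hpp>

// Intersect the reflected/refracted ray with the probe's box proxy, then
// sample the cubemap with the direction from the capture point to the hit
// instead of the raw ray direction.
glm::vec3 parallaxCorrectedDir(glm::vec3 rayOrigin,  // shaded point (world)
                               glm::vec3 rayDir,     // unit reflect/refract dir
                               glm::vec3 boxMin,     // proxy box bounds (world)
                               glm::vec3 boxMax,
                               glm::vec3 capturePos) // cubemap capture center
{
    // Slab test: per-axis distances to the box planes; the nearest exit
    // plane is where the ray leaves the proxy volume.
    glm::vec3 invDir = 1.0f / rayDir;
    glm::vec3 t0 = (boxMin - rayOrigin) * invDir;
    glm::vec3 t1 = (boxMax - rayOrigin) * invDir;
    glm::vec3 tFar = glm::max(t0, t1);
    float tExit = glm::min(glm::min(tFar.x, tFar.y), tFar.z);

    glm::vec3 hit = rayOrigin + rayDir * tExit;
    return glm::normalize(hit - capturePos);
}
```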

I started messing around with that for an alternative reflection system in the past, but never finished it. The idea was to either dynamically re-light the scene with texture-space GI and DF shadows, or make an array of precomputed cubemaps for various times of day if that was too expensive.

It was based on the Mafia reflection system, except the idea was to avoid having to constantly refresh the cubes in real time by re-lighting static cubes dynamically. They used a round-robin update cycle, so each cubemap only updated once every 8 frames or so.

Not sure if it would’ve been better than lumen in performance or quality but it’s interesting enough I might revisit it eventually.

For any big planar surface they’d be essentially useless, but for things like bottles and shards of glass I think they’d be more than good enough. They don’t have nearly enough angular resolution to do much more, but I feel like it would be possible to pull something off with them.

To your point: https://www.youtube.com/watch?v=XZw5CXAJ2hg

Their implementation was so good I remember trying out HLA and thinking Valve had somehow snuck RT reflections onto a VR headset. It is so accurate and such an artful implementation I am blown away. If we had a better way to generate scene proxies for cubemaps in UE, I think they’d be far more viable than they are now.

So, get your PBR buffers, your normal, your depth, and relight that dynamically as opposed to constantly doing full scene captures? That is very clever! Although as the Mafia 3 presentation pointed out, certain scenes are extremely difficult for depth cubemap reflections to solve, such as anything with a lot of thin geo or occluders.

Please let me know how it goes. With the right implementation I feel like you could have something very performant, but given how tight lumen is already I’m curious what your numbers would be.

This is one of the main reasons I stopped for now, but I think it has potential, especially for VR, as HL:A demonstrated. Just too many projects, and Lumen was doing well enough for my current uses.

If it goes anywhere I’ll be sure to ping you.

i never played hla. but this looks like a lowres dynamic cubemap capture. there’s no way this is a parallax corrected static cubemap. for realtime capture you don’t need parallax correction.

either way… straying away from the refraction part of the question.

and… my thinking is… it’s hard. you could maybe have distance field refraction. the problem is you have to trace and refract through the translucent (or, as somebody suggested, an opaque) object’s distance field and hit something that is essentially occluded, which makes this very hard. and you’d still not get multibounce, i guess.

on the other end… iirc… i got some multibounce raytraced refraction semi-working in a wip daily build back mid last year. it was wip tho. not complete.

I’m extremely confident it is parallax corrected cubemaps, both because the technical artist behind the famous bottle shader used static reflection captures for the bottle refraction (if they had a dynamic capture, they would’ve used it instead), and because you can see the probe transition lerp when moving between areas. Also, the TV can be seen reflecting itself in its original position, something that would not happen with real-time capture.

But it is definitely a more advanced parallax corrected cubemap. Previous versions of source only had axis aligned bounding box corrected cubemaps, whereas this has a basic representation of the scene somehow. My guess is either it does capture depth only in real-time (possibly using a cheap proxy environment and not the real scene - think like the custom depth buffer in UE), or they have a distance field or some other crude assembly of primitives that can easily and cheaply be tested for intersections.

I think you could get opaque multibounce to work in theory.

You would need to fall back to something like the Lumen surface cache or a cubemap pretty often, though, because, like you point out, if the ray only bends a little the final hit is mostly gonna be behind the opaque occluder, but if it bends too much it’ll be off screen.

All you need is a way to tell if the hit was against a refractive material and its normal vector. Since distance fields are volumetric, hit detection would not be blocked by occlusion.

If there is no screen-space hit, the normal can easily be approximated by sampling the direction to the nearest surface, but determining whether that hit was also refractive is trickier. In some cases it could probably be inferred, but to be sure I guess you’d need a bit in both the GBuffer and the surface cache to test in screen and world space. Easier said than done…
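
For the normal-approximation part, a sketch of what I mean (sampleSDF is a hypothetical stand-in for a global distance field lookup, stubbed so the snippet is self-contained):

```cpp
#include <glm/glm.hpp>

// Hypothetical global distance field lookup. Stubbed here as a sphere of
// radius 100 at the origin so the sketch compiles and runs on its own.
float sampleSDF(glm::vec3 p) { return glm::length(p) - 100.0f; }

// Approximate the surface normal at (or near) p as the normalized SDF
// gradient via central differences. This works even when the point is
// occluded in screen space, since the distance field is volumetric.
glm::vec3 sdfNormal(glm::vec3 p, float eps = 1.0f) // eps in world units
{
    const glm::vec3 dx(eps, 0, 0), dy(0, eps, 0), dz(0, 0, eps);
    return glm::normalize(glm::vec3(
        sampleSDF(p + dx) - sampleSDF(p - dx),
        sampleSDF(p + dy) - sampleSDF(p - dy),
        sampleSDF(p + dz) - sampleSDF(p - dz)));
}
```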