Faking dynamic lighting in Bullet Train: info about the "World Position Behind Translucency" node?


Watching the livestream on the making of Bullet Train (part 2), I heard Ryan Brucks talking about a material node called “World Position Behind Translucency”, used to make a material that fakes dynamic lighting inside the train.
Later in the video, he showed a simplified material to demonstrate how it worked. That simplified material is a Surface Opaque material, so the “World Position Behind Translucency” node is not used in it.
I would like to reproduce this effect with a translucent material and searched for documentation on this node, but there seems to be no information about it in the UE4 documentation, and searching the web I found nothing.


  • Does anyone know how to use it or point towards samples about using it?
  • Does anyone know or could figure out how this “translucent version” of this material was done?


Bumping this, because I’m curious about the same thing. I was just going through some of the old Twitch stream archives on the Epic YouTube channel, stumbled across this, and found this thread by searching for “WorldPositionBehindTranslucency”, but not much else came up.

One thing that did appear was an old Unreal Answers thread that Ryan Brucks commented on back in July of 2014:

*"Your post reminded me that I should have also included some examples on how you can use the depth texture on a “WorldPositionBehindTranslucency” shader to create a pseudo decal using a regular mesh…

We have documentation written for these elements that should be out tomorrow or Thursday."*

It appears he didn’t get a chance to post the documentation, but I’m definitely interested in seeing it and a sample material setup as well. Love that Bullet Train making-of video though, lots of cool ideas and techniques for optimizing the visuals.

Hey guys,

Sorry, it has been a scramble up to GDC this year.

Which part of the video were you wondering about? There should be a youtube version floating around we can use.

For the final materials they were all either translucent or modulated. If the material you saw was opaque, perhaps it was showing just the fake projection math part and not the worldpos-behind-translucency?

Generally for World Position Behind Translucency, you just subtract the ObjectPosition of the mesh you use to render it, which gives you centered coordinates. Then divide by the bounding radius and run that through a ConstantBiasScale with bias 1 and scale 0.5 to bring it into the 0-1 range. From there you can ComponentMask RG (or whatever you want) to get 2D texture coordinates, or transform into a custom Z space using “transform to Z vector” for custom projections.
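As a hedged sketch, the node chain Ryan describes maps to something like this in plain Python (the function and variable names are illustrative stand-ins for the material nodes, not anything from the engine):

```python
def object_space_uv(world_pos, object_pos, bounding_radius):
    """Map a world position (sampled behind the translucent mesh) into the
    0-1 space of the mesh rendering the effect, as described above."""
    # Subtract ObjectPosition -> coordinates centered on the mesh
    centered = [w - o for w, o in zip(world_pos, object_pos)]
    # Divide by the bounding radius -> roughly -1..1 inside the bounds
    normalized = [c / bounding_radius for c in centered]
    # ConstantBiasScale (bias 1, scale 0.5) -> 0..1 range
    zero_one = [(n + 1.0) * 0.5 for n in normalized]
    # ComponentMask RG -> 2D texture coordinates
    return zero_one[0], zero_one[1]

# A point 50 units along X from a mesh centered at the origin, radius 100:
uv = object_space_uv((50.0, 0.0, 0.0), (0.0, 0.0, 0.0), 100.0)
print(uv)  # -> (0.75, 0.5)
```

The same values plugged into the graph should land the point three quarters of the way across the texture in U and centered in V.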

Fwiw, in that answerhub post the documentation I was referring to was about the Render to Texture Blueprint.

That’s it. It was the part where you explained how to stretch and bend the texture.

Not sure I get it. Do you have any basic material setup screenshot example around?

Going to bump this again. I have been doing a lot of work in Unreal for VR over this last year and would LOVE to incorporate this fake lighting method into my workflow, especially since our project is using forward rendering yet I want to create the illusion of dynamic lights! A screen shot of the bullet train fake dynamic light material would be amazing! I am going to try to dissect and recreate it from your explanation on Monday Ryan, and if I have any luck I will post some shots of my material here.

Hey Ryan! The stuff you go over in the video is just the UV manipulation, which is cool, but I am familiar with manipulating UVs already. The part that I do not understand, and cannot find any documentation on, is how to properly use the WorldPositionBehindTranslucency node you reference, which (from the look of it) lets you use an object as a projection space, like a decal but for fake light. Based on your description in this post, this is where I got to, but I cannot figure out how to get it to work, even after stripping the math out of the function and plugging it in manually. Also, the material tells me it is broken when I apply it, but I get no information in my stats as to what is broken or why, like I usually would. Any help here would be most appreciated.

Just for reference, I am trying to use this method for car headlights; the planes I have currently do not cut it, and given VR performance constraints they cannot be real dynamic lights.

I am pretty sure I talked about World Position Behind Translucency in the Twitch stream. At one point I was messing with the projection of the shader that does the lights coming in the window.

The formula for custom decal projection is like this, with accurate FOV control:
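The attachment with Ryan’s actual formula doesn’t appear in the thread, so as a stand-in, here is a hedged Python sketch of generic perspective projective texturing with FOV control (the function, its parameters, and the axis conventions are illustrative assumptions, not Ryan’s exact setup):

```python
import math

def decal_uv(world_pos, decal_pos, forward, right, up, fov_deg):
    """Project a world-space point into the 0-1 UV space of a perspective
    'decal' frustum. forward/right/up are the projector's orthonormal axes."""
    rel = [w - d for w, d in zip(world_pos, decal_pos)]
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    # Transform into the projector's local space
    x, y, z = dot(rel, right), dot(rel, up), dot(rel, forward)
    if z <= 0.0:
        return None  # point is behind the projector
    half = math.tan(math.radians(fov_deg) * 0.5)
    # Perspective divide by z (scaled by the FOV), then bias into 0-1
    u = (x / (z * half)) * 0.5 + 0.5
    v = (y / (z * half)) * 0.5 + 0.5
    return u, v

# A point straight ahead of the projector lands in the center of the decal:
print(decal_uv((0, 0, 10), (0, 0, 0), (0, 0, 1), (1, 0, 0), (0, 1, 0), 90))
# -> (0.5, 0.5)
```

Widening `fov_deg` shrinks how far a given lateral offset moves the UV, which is the “accurate FOV control” part: the projected footprint grows with the tangent of the half-angle, as it would for a real spotlight frustum.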

Awesome thank you! I also found this thread you were helping a fellow out with: Where to start to achieve this effect? - Rendering - Unreal Engine Forums

With that material reference, this is what I have now! I know how to attach the location and rotation to the vehicles in the Blueprint, so once I mod this material to have two headlights like I want, it should be easy peasy from there.

Here is where I am at now:

Thank you so much for your reply Ryan! The set up you just linked will be useful for some of the other things we are doing in the environment as well. I really appreciate you taking the time. :slight_smile:

Cool. The only part you probably want to add now is the N dot L. Should be able to get SceneTexture::WorldNormal and dot it with the specified lightvector to get nice falloff.

Interesting, we will have to explore that approach. My friend said our setup is a bit inefficient with the square root, but visually I am pretty pleased with where we are at.

Here is the current state of our fake headlights:

@RyanB, we used the SceneTexture::WorldNormal node and got the material to interact with the normals in the world, which looked SO GOOD! However, I am 99% sure we are going to stick with forward rendering for this project, which means that node does not work/does not get the normal pass it needs to work. :stuck_out_tongue:

That’s too bad. There are ways around it if you really want, though, such as deriving the normal from the scene depth using ddx and ddy. The function “derive tangent basis” has some clues how it would work.

Can you elaborate on that some more? How did you get the normal to affect the color in the emissive channel?

Yes, I would love to. I was talking with the team and we are going to take a crack at the bypass system to get the normal interaction working in forward rendering like Ryan suggested. However, we are currently loaded down getting a sprint completed for GDC this Friday, so we may not have time to invest in exploring this until after that. :stuck_out_tongue:

I will post it here when we do tackle it.

This shader is used to project on the depth pass, i.e. fake light. SceneTexture::WorldNormal is a material node (add a SceneTexture node and, in the Details panel, set it to WorldNormal). If you dot product the world normal with the direction of the fake light, it will light normals facing the same direction as the light; just run it through a OneMinus and multiply it into opacity to reverse it. That said, you can only do this on a translucent material (or a late-rendering object not writing to the depth buffer), as it has to render after the depth, world normal, etc. are already rendered into the buffer.
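A minimal Python sketch of the dot-product plus OneMinus opacity trick described above (the vectors stand in for the SceneTexture::WorldNormal sample and the fake light’s direction; the function name is illustrative):

```python
def fake_light_opacity(world_normal, light_direction):
    """Opacity for the fake light, per the description above:
    light_direction is the direction the light is shining."""
    # Dot the scene world normal with the light's direction vector;
    # normals pointing the SAME way as the light read high here...
    ndotl = sum(n * l for n, l in zip(world_normal, light_direction))
    # ...so a OneMinus flips it: surfaces that FACE the light get the
    # highest opacity. Clamp to keep the result in 0-1.
    return min(max(1.0 - ndotl, 0.0), 1.0)

# Light shining down -Z onto a surface whose normal faces +Z (toward the light):
assert fake_light_opacity((0, 0, 1), (0, 0, -1)) == 1.0
# Surface facing away from the light gets nothing:
assert fake_light_opacity((0, 0, -1), (0, 0, -1)) == 0.0
```

The clamp is an added assumption for safety; in the material graph the same role is usually played by a Saturate or by the opacity input’s implicit clamping.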

This works, but it shows off the geo a bit.

The reason for that is that positions across facets are linearly interpolated, so when you sample the derivatives they are flat, reflecting the linear interpolation between vertices. I am not aware of any way around that other than sampling the scene texture with some offsets and averaging the results, but it will be expensive to get high quality, especially on mobile.
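As a hedged illustration of the ddx/ddy idea Ryan mentioned, here is a CPU-side Python sketch that reconstructs a face normal from a position and its screen-space derivative analogues via a cross product (names are illustrative; in a shader the two difference vectors would come straight from ddx/ddy of the reconstructed world position):

```python
def normal_from_positions(p, p_dx, p_dy):
    """Reconstruct a face normal from a world position p and the positions
    one pixel to the right (p_dx) and one pixel down (p_dy)."""
    ax = [b - a for a, b in zip(p, p_dx)]   # ddx analogue
    ay = [b - a for a, b in zip(p, p_dy)]   # ddy analogue
    # cross(ax, ay) is perpendicular to the local surface
    n = [ax[1] * ay[2] - ax[2] * ay[1],
         ax[2] * ay[0] - ax[0] * ay[2],
         ax[0] * ay[1] - ax[1] * ay[0]]
    length = sum(c * c for c in n) ** 0.5
    return [c / length for c in n]

# Three points on the z = 5 plane give a normal along +Z:
print(normal_from_positions([0, 0, 5], [1, 0, 5], [0, 1, 5]))
# -> [0.0, 0.0, 1.0]
```

Because the derivatives are constant across each triangle, every pixel of a facet gets the same normal, which is exactly the faceted “shows off the geo” look being discussed.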

The better way to do this on mobile is to build the light into a material function, control it via a Material Parameter Collection (MPC), and place the material function into every material in your game. Then you get access to VertexNormal. If you use very few master shaders this is no problem; using few shaders is good content discipline in most cases anyway, IMO.

I’ve followed this thread, and it looks really cool!

Just one point of confusion: this seems the same as writing a shader that accepts point lights with no shadows. Is it really more efficient?