
WorldPosition from screen position?

Can anyone give me some math hints on how to calculate an absolute world position from a given screen position? I need this for a translucent water plane so I can use the Scene Depth texture.

Do you need this inside your material? Because if you do, there already is a WorldPosition node that you can use in the Material Editor, so there is no need to calculate it manually.

If you are doing this in a material, you can use Absolute World Position.

I am doing custom water refraction. The screen position I am using isn't the same as the current pixel's. UV + depth is enough for this calculation, but I am not 100% sure how to do it in UE4.

I see. This is a bit advanced for me.

But you should look at Post Process Materials. They should give you access to the depth buffer (or the custom depth buffer).

There is no need for a post process material; I can sample Scene Depth during translucency rendering. The problem is just transforming from clip space to world space.

If this is a post process material, just using AbsoluteWorldPosition does return the location of each pixel in world space.

If this is not a post process material, then you can just use “World Position behind Translucency”. But you mentioned you only need scene depth… if so, you can just use the DestDepth node.

I need the position from a different pixel than the current one. Let's just say that I have ScreenPos + Offset, and I need the world position of that pixel.

Would the SceneDepth material node do the trick?

It will be needed for calculating the actual world position, but what I need is the math for transforming UV + SceneDepth into a world position.

I guess I still don’t understand what your scenario is.

Are you doing this in the Material Editor?

Are you trying to map light refraction? E.g. bend light coming through a translucent material?

Sorry, I’m a noob! :slight_smile:

Yes. I am trying to manually calculate refraction while rendering the water plane, to avoid the built-in refraction artifacts. The actual use case isn't that relevant. I just need a function where I can give a UV, sample scene depth with it, and use that to calculate a world position. It's quite a generic method, and in my own engine I can do it quite easily: I store linear view-space z, from which I can calculate the view-space position, and that can then be transformed to world space. I just don't know how to do this with the Material Editor in UE4.
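In pseudo-HLSL, the generic method looks something like this (a sketch only, not UE4-specific; the UV-to-NDC convention and the TanHalfFov/InvViewMatrix inputs are illustrative assumptions):

    // Rebuild a view-space position from a screen UV and a stored linear view-space z.
    float3 ViewPositionFromLinearZ(float2 Uv, float LinearViewZ, float2 TanHalfFov)
    {
        // UV (0..1) -> NDC (-1..1); y flipped for a top-left UV origin (assumed convention).
        float2 Ndc = float2(2.0 * Uv.x - 1.0, 1.0 - 2.0 * Uv.y);
        // Scale by the frustum half-extents at z = 1, then push out to the stored depth.
        return float3(Ndc * TanHalfFov * LinearViewZ, LinearViewZ);
    }

    // World space then follows with one more transform (inverse view matrix assumed available):
    // float3 WorldPos = mul(float4(ViewPos, 1), InvViewMatrix).xyz;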

Ah, I didn’t realize you were going for offset samples. You can get there with a bit of vector math.

The math to get world position inside a PP material is like this:

WorldPosition = CameraPosition + CameraVector * SceneDepth / dot(CameraVector, CameraDirectionVector)
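As HLSL that node formula looks roughly like this (a sketch; it assumes CameraVector is the normalized ray from the camera through the pixel and CameraDirectionVector is the camera's forward axis):

    // SceneDepth is measured along the camera's forward axis, so dividing by the
    // cosine between the per-pixel ray and that axis converts it into a distance
    // along the ray itself.
    float3 WorldPositionFromDepth(float3 CameraPosition, float3 CameraVector,
                                  float3 CameraDirectionVector, float SceneDepth)
    {
        float RayDistance = SceneDepth / dot(CameraVector, CameraDirectionVector);
        return CameraPosition + CameraVector * RayDistance;
    }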

That is just how I solved it in the past with nodes. It isn't obvious to me how to apply an offset using the above formula, so I took a cursory glance at the code and learned something that may help you. MaterialTemplate.usf line 1268 (an old build here):



	// Derive ScreenPosition from WorldPosition to avoid using another interpolator
	Parameters.ScreenPosition = mul(PixelPosition, View.TranslatedWorldToClip);


So notice that ScreenPosition itself is simply a camera-local world position transformed from world to clip space. That makes sense, since clip space factors out the Z by dividing by it. So I would try to do the inverse of that: take the offset screen position and apply a clip-to-world transform. There are some material functions along those lines, but I have not played with them much. You may even be able to do it as a one-liner in a custom node. You may get lucky and TranslatedClipToWorld may already be implemented, so you don't need to build the inverse matrix manually. The ScreenPosition parameter is a float4, so it looks like depth is still in there. I have little experience with clip space, so I am not sure how the other two components are stored, but it should be doable with some light fiddling. In the Material Editor you could combine ScreenPosition and PixelDepth, feed both the same offset, and then do the transform.

Edit: the inverse is implemented, but with a slightly different name.

Here is how the TranslatedWorldToClip transform is defined:

ViewUniformShaderParameters.TranslatedWorldToClip = View->ViewMatrices.TranslatedViewProjectionMatrix;

A few lines down we find:

    ViewUniformShaderParameters.ClipToTranslatedWorld = View->ViewMatrices.InvTranslatedViewProjectionMatrix;

So try ClipToTranslatedWorld, since it's the inverse of the above.
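Putting that together, a custom node sketch might look like this (unverified; it assumes UE4's convention that clip-space w equals the view-space SceneDepth, that DeviceZ is the raw depth-buffer value at the offset sample rather than the linearized depth, and that View.PreViewTranslation shifts translated world space back to absolute world space):

    // Rebuild the full clip-space position for the offset sample, then invert it.
    // Ndc: offset screen position in normalized device coordinates (-1..1).
    float3 WorldFromScreen(float2 Ndc, float DeviceZ, float SceneDepth)
    {
        // clip = ndc * w with w = view-space depth, so all four components scale by SceneDepth.
        float4 ClipPos = float4(Ndc * SceneDepth, DeviceZ * SceneDepth, SceneDepth);
        float4 TranslatedWorld = mul(ClipPos, View.ClipToTranslatedWorld);
        // When ClipPos is exact, TranslatedWorld.w comes out as 1, so no divide is needed.
        return TranslatedWorld.xyz - View.PreViewTranslation; // translated world -> absolute world
    }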

BTW, are you planning to raymarch the depth to find the accurate refraction intersection? If so, sweet :slight_smile:

Nice, thank you. I will have to run to work now to test this. At first I am not going to raymarch, but it's always been my ultimate goal. Most importantly, I can calculate the right opacity and inscattering for the refracted ray, and I can fall back to the non-refracted ray when the refraction would end up above the water level. This will also enable blurring and/or chromatic aberration for the refraction.

[Image: ClipToWorld.png]

This is what I have now, but I am still missing something crucial.

Yeah, now it's working. After cleaning it up, it's quite simple.
[Image: the cleaned-up node setup]

Edit: It seems that I can skip the division by w after all.

Awesome. I never fully figured out exactly how the z and w are used in our clip-space projection matrix. I thought it was doing the divide by w for z to return a normalized depth, which would require knowing what the engine uses to normalize the far depth. I guess at that stage it is still using real depth values; interesting. Glad it is working. I do wonder, though, if there is a small chance that, to be absolutely correct, you do need to modify the depth.
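Thinking about it more, maybe this is why the divide drops out (my reasoning, not checked against the engine):

\[
(\mathbf{p},\, 1)\, M_{VP} = \text{clip}
\quad\Longrightarrow\quad
\text{clip}\, M_{VP}^{-1} = (\mathbf{p},\, 1)
\]

If the full four-component clip position is reconstructed exactly, inverting the view-projection lands back on a point whose w is already 1, so the perspective divide becomes a no-op.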

Have you tested it by projecting a world-aligned texture with no offset using that method?

I tested the method by outputting the distance between the World Position behind Translucency node and the position calculated with my method at non-offset UVs. The difference was only about 0.1 mm. Testing with an offset was a bit trickier, but it seemed to be correct and quite precise.
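For anyone wanting to reproduce that check, the comparison as a debug output looks something like this (a sketch; the function and parameter names are illustrative, with EnginePosition coming from the World Position behind Translucency node and ReconstructedPosition from the method above):

    // Debug visualization: reconstruction error at the unoffset sample.
    // Near-black output means the reconstruction matches the engine's value.
    float3 VisualizeReconstructionError(float3 EnginePosition, float3 ReconstructedPosition)
    {
        // Scale the error (engine units are cm) so sub-millimeter
        // differences still show up as a faint gray.
        return length(EnginePosition - ReconstructedPosition).xxx * 10.0;
    }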

Thank you so much.

Does anyone know how this could be done with nodes? I need to do exactly the same thing, but I don't want to use custom code in my shaders.