Check if pixel is occluded with depth mask

I’m trying to check if a pixel seen from the player camera is occluded from the view of a scene capture camera.

I’m rendering a custom depth texture from a SceneCapture actor that remains stationary in the world.
Now I would like to transform a visible pixel’s world location to the UV of the custom depth map.
If I have the UV position of the pixel to check, I can compare the real distance between the pixel and the capture component with the depth value of the depth texture.

How can I transform the world location of the pixel to the depth map’s UV?

Try looking for a material function ‘TransformToClipSpace’

Ah I think I may have misread your diagram. Transform to Clipspace will only work if the Depth Buffer is from the same view as the player camera. It would basically require doing the same math as a clip space transformation but using the vectors from your scenecapture, not the camera.

You could also simply try raytracing your depth map. It would potentially be a bit slower since it may require a few lookups but the math will be easier.

You should build the worldToLightUV matrix on the CPU and then send it to the material using a few vector parameters. This is exactly the same problem as shadow mapping, so there should be a lot of tutorials out there.
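A rough sketch of the CPU side (untested; the function name and the "WorldToLightRow" parameter names are just placeholders for your own setup):

// Push the rows of a world-to-light (view * projection) matrix into a dynamic
// material instance as four vector parameters.
void PushWorldToLightMatrix(UMaterialInstanceDynamic* MID, const FMatrix& WorldToLight)
{
	for (int32 Row = 0; Row < 4; ++Row)
	{
		MID->SetVectorParameterValue(
			FName(*FString::Printf(TEXT("WorldToLightRow%d"), Row)),
			FLinearColor(WorldToLight.M[Row][0], WorldToLight.M[Row][1],
			             WorldToLight.M[Row][2], WorldToLight.M[Row][3]));
	}
}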

Thanks Ryan and Jenny for pointing me in the right direction.

I tried something out, but it did not work correctly. Here is my thought process, as I am not sure where the error could be; hopefully you can spot something wrong:

First I created a child class of SceneCapture2D and added a GetPerspectiveViewMatrix() like this:


FMatrix ACustomSceneCapture2D::GetPerspectiveViewMatrix()
{
	// Hard-coded for now: half-FOV, render target width/height, near and far clip plane
	FMatrix NewPerspectiveMatrix = FPerspectiveMatrix(45, 512, 512, 10, 5000);
	// Look from the actor's location towards a point along its forward vector, with Z as up
	FMatrix NewViewMatrix = FLookAtMatrix(GetActorLocation(), GetActorForwardVector() + GetActorLocation(), FVector(0.0f, 0.0f, 1.0f));

	FMatrix PerspectiveViewMatrix = NewPerspectiveMatrix * NewViewMatrix;

	return PerspectiveViewMatrix;
}


For now I hardcoded the values for the construction of the perspective matrix. And here are my first questions:

1. Is the near clip plane of a SceneCapture2D the same value as the one we define under the Project Settings / Rendering tab?

2. Is the target point for the FLookAtMatrix constructor sufficient as ‘ActorForward + ActorLocation’?

Then I created a blueprint of my custom scene capture actor and filled four vector material parameters to use them in the material editor. In the material editor I multiply my PerspectiveViewMatrix by the position of the pixel:

3. Is my assumption correct that a Vector3 is not treated as a row vector?

4. Is my matrix multiplication wrong?

The X and Y components of my multiplication should give me values in clip space. Then I need to remap these values to the UV range of my custom depth texture:

My next step is now to convert the value of the depth texture back to my real depth value. I know that UE uses reversed-Z rendering, so I assumed I should subtract the texture value from 1 and then multiply it by my far plane value.

5. I assumed the depth is linearly distributed. Is that correct? How do I reconstruct the depth value?
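(From what I understand, the depth written by a standard perspective projection is not linear along the view direction; for a normal D3D-style 0-1 depth buffer I would expect something like the formula below, but I am not sure how UE's reversed Z changes it.)

ViewSpaceDepth = Near * Far / (Far - DeviceDepth * (Far - Near))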

Then I would compare the distance between the scene capture and the pixel with the depth value.

That didn’t work.

Hopefully someone has a clue. Thanks again for helping.

I’m not quite sure, but I think you are missing the part that divides by the depth (the perspective divide)? I believe what you have is like an isometric projection that will not take perspective/FOV into account.
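Written out as code, the full transform is roughly this (an untested sketch in C++ just to show the math the material nodes need to replicate; WorldToClip would be your capture's view matrix multiplied by its projection matrix):

// World position -> capture UV, using UE's row-vector convention.
FVector2D WorldToCaptureUV(const FMatrix& WorldToClip, const FVector& WorldPos)
{
	// World -> homogeneous clip space
	const FVector4 Clip = WorldToClip.TransformFVector4(FVector4(WorldPos, 1.0f));

	// Perspective divide: this is the step that accounts for the FOV
	const float InvW = 1.0f / Clip.W;
	const float NdcX = Clip.X * InvW; // -1..1 across the capture's view
	const float NdcY = Clip.Y * InvW;

	// NDC -> UV (V is flipped because texture V runs downwards)
	return FVector2D(NdcX * 0.5f + 0.5f, -NdcY * 0.5f + 0.5f);
}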

matrix multiplication looks ok to me but I could be wrong :wink:

Ok my matrix multiplication with the pixel position looks like this now:

Still not the right result.

I am reconstructing the depth value of the depth texture with this formula: (1 - DepthTexture.r) * FarClipValue. But this assumes the depth is linear. How should I get the correct depth value? I’m a bit confused by UE’s reversed-Z rendering.

What is that row3 vector? Also, it looks like you simply did a divide by whatever that vector was, since your V4 alpha was 1.

I think for me to be of more help I’d need to set up a similar test case and start entering and previewing values. Sadly I won’t have any time until next week, as I am under deadlines and already out of town Thursday to Sunday.

Are you using the function ‘Debug Float 3 Values’ or ‘Debug Scalar Values’? I would do that and replace WorldPosition with a VectorParameter, then set your render target to point along the cardinal axes at first, then read back what the transformed position was. Then, for example, just move the VectorParameter value such that the depth increases but not its orthographic position relative to the camera, and see how that affects the result. Once you do that, it makes guess-and-check much more effective here. And honestly, I would pretty much be doing guess-and-check to solve this.

Also, debug your matrix rows individually to make sure they are indeed what you set, orthogonal, etc.

The row 3 vector is the last row of my PerspectiveViewMatrix. Then I do a dot product between my pixel position (x, y, z, 1) and the last matrix row (a, b, c, d), i.e. x·a + y·b + z·c + d·1, which should give me a single float value. I got this information from here: xna - Converting world space coordinate to screen space coordinate and getting incorrect range of values - Game Development Stack Exchange

Thanks for the debug hints! Furthermore, I just noticed that UE4 also has a constructor for a reversed-Z perspective matrix (FReversedZPerspectiveMatrix). I’ll keep trying.

Thank you for the help so far, I wish you good luck with your deadlines!

Ok after some time I’m still stuck at this problem.

I looked in some files (CameraStackTypes.cpp, SceneCaptureRendering.cpp) to find a clue as to how the matrices are initialized. There I found something strange:


ViewInitOptions.ProjectionMatrix = FReversedZPerspectiveMatrix(
	FOV,
	FOV,
	XAxisMultiplier,
	YAxisMultiplier,
	GNearClippingPlane,
	GNearClippingPlane
	);

The near and far plane values are the same, GNearClippingPlane. Very strange.
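If I read the FReversedZPerspectiveMatrix constructor right, passing the same value for MinZ and MaxZ gives an infinite far plane, and the stored depth then relates to view-space depth roughly like this:

DeviceDepth = Near / ViewSpaceDepth      // i.e. ViewSpaceDepth = Near / DeviceDepth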

I tried this and other things but with no success.

Don’t suppose you ever managed to solve this? I am trying to attempt something similar. Basically I am using a render texture target to generate a mask on a surface using the player’s camera frustum and checking if the pixels in the object’s local space are inside the frustum of the camera (also converted to local space). I can find the world camera->pixel vector for each point in the texture and I want to compare the depth of that vector against a depth texture ALSO rendered from the player’s POV to toss out pixels that are occluded.

Basically imagine wanting to capture a camera flash into a texture and map it to the surface. I am stalled trying to get the world space (cameraPosition->wallPixel) vector into clip space for the scene depth capture so I can do a comparison.

Am I going about this the wrong way? The trick is that the render texture target obviously has no knowledge of the object or the camera in question, so I am having to find ways to provide those values. I can get close, but can’t figure out that last transform to clip space (also my matrix math understanding is shaky at best).

@rujuro

Unfortunately no, I needed to push it back and work on other things.
However I recently stumbled upon this answer: How do you make a scene capture 2d orthographic - Rendering - Epic Developer Community Forums

It looks like thomie_guns found out the correct construction of the view rotation matrix, maybe it will help you. If I find time I’ll take a closer look.

Do you guys just want to find out how to project a custom per-object shadow, then?

If so, I have set that up in the past, and here is an example using a swimming pool mesh and an ortho scene capture to capture depths and generate a custom shadow. The nice thing about this is you can blur the texture to make it nice and soft.

Oh wait, is this from the render-to-texture level using the normalized depth map (Artists Tools and Workflows for Rendering in Unreal Engine | Unreal Engine 5.2 Documentation)?

I guess this is what I wanted to do, but in realtime and with the ability to cast those shadows on single, independent meshes. Would it be possible with some modifications?

Yes, I redid that, since when I went to use it recently I found that the lighting transform was incorrect, only worked for some cardinal axis directions, and had some unnecessary parts.

It works in realtime but it will not be super fast since it uses a scene capture. But the blueprint above provides a framework to sync everything up based on the bounds of the desired object. I am about to run to a meeting but I can post all the info later tonight.

Edit: I have started writing this all up; it should be done later today, or perhaps tomorrow.

Hey guys,
I thought this was worthy of a longer post since there are a few steps involved.

http://shaderbits.com/blog/custom-per-object-shadowmaps-using-blueprints

The two most relevant bits are these images.

Blueprint setup of a light transform that has a Z up orientation:

Transform of WorldPosition into the created light transform space. First the depth is created from world positions in that space, then the depth texture is sampled using that projection mapped to the XY directions, and the comparison is made:
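In rough code form (an untested restatement of what the graph in that image does, just to spell out the steps; the names are placeholders and the exact axes depend on how the light basis was built in the blueprint above):

// World position -> depth-map UV plus a normalized depth to compare, for an
// orthographic capture. The material samples the depth texture at OutDepthUV and
// treats the pixel as shadowed when OutPixelDepth is greater than the stored
// value plus a small bias.
void WorldPosToShadowSample(const FMatrix& WorldToLight, const FVector& WorldPos,
                            float OrthoWidth, float MaxShadowDistance,
                            FVector2D& OutDepthUV, float& OutPixelDepth)
{
	// World position into the light's space
	const FVector LightSpace = WorldToLight.TransformPosition(WorldPos);

	// Light-space XY mapped into the 0-1 UV range of the depth render target
	OutDepthUV.X =  LightSpace.X / OrthoWidth + 0.5f;
	OutDepthUV.Y = -LightSpace.Y / OrthoWidth + 0.5f;

	// Normalized distance along the light direction for this pixel
	OutPixelDepth = LightSpace.Z / MaxShadowDistance;
}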

Thanks for an interesting blog post, @
I would never have noticed the tree shadow not moving if I wasn’t told.

Hey Ryan, I read your article and I’m still struggling a bit. I’m also trying to map a pixel’s world location to its UV location in a render target.

In my post process material, I have (passed from BP via MPC):

  • SceneCapture’s forward vector
  • A vector from the SceneCapture to the currently rendered pixel (simply subtracting SceneCapture position from Absolute World Position)

That should be enough. I thought if I get the horizontal angle and the vertical angle between those vectors, I could easily map them to the render target/depth map’s UV. I’ve had partial success:

  • With horizontal, I just projected the vectors onto the XY plane (Z=0), but AngleBetweenVectors only gives positive values (I don’t know if the pixel is to the left or to the right of the center)
  • With vertical, I can’t really project it to any plane, so I just checked the Z values and with ArcsineFast I think I got the correct angles

So now, if I know that my SceneCapture has an FOV of, let’s say, 40 degrees, I can check whether my horizontal and vertical angles are within that range (-20…20 degrees) and map them to the 0-1 UV space. Still, I’m having trouble with the horizontal angle, and I also see now that you suggest using an orthographic projection instead of perspective for the SceneCapture, and that you’re using matrices rather than trigonometry.
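To make it concrete, this is roughly what I am computing per pixel (written as C++ for clarity; the names are mine, and the sign of the horizontal angle is exactly the part I have not solved):

// Rough C++ version of my post process material math. CaptureForward comes from the
// SceneCapture via the MPC, ToPixel is (pixel world position - SceneCapture position),
// FOVDegrees is the capture's FOV. Assumes the capture looks roughly horizontally.
FVector2D AnglesToCaptureUV(const FVector& CaptureForward, const FVector& ToPixel, float FOVDegrees)
{
	const FVector Dir = ToPixel.GetSafeNormal();

	// Horizontal: project both vectors onto the XY plane and take the angle between them.
	// Acos only returns 0..180 degrees, so left/right of center is lost - this is my problem.
	const FVector FwdXY = FVector(CaptureForward.X, CaptureForward.Y, 0.0f).GetSafeNormal();
	const FVector DirXY = FVector(Dir.X, Dir.Y, 0.0f).GetSafeNormal();
	const float HorizAngle = FMath::RadiansToDegrees(FMath::Acos(FVector::DotProduct(FwdXY, DirXY)));

	// Vertical: the Z component of the normalized direction gives the elevation angle
	const float VertAngle = FMath::RadiansToDegrees(FMath::Asin(Dir.Z));

	// Map -HalfFOV..+HalfFOV to 0..1
	return FVector2D(HorizAngle / FOVDegrees + 0.5f, -VertAngle / FOVDegrees + 0.5f);
}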

Could you perhaps point me in the right direction? I’ll still try to modify your example for my purpose, but I’ve already wasted two days on this, so any help would be greatly appreciated. Thank you for your time :slight_smile: