Post-process stencil mask inheriting masked objects’ shape/displacement.

I’ve created a simple (for now) grid material that’s applied before tonemapping using a post-process custom stencil mask. I believe this is the preferred method in UE 5.3 for rendering certain items above others in world space.

The main issue I’m having is that the shape of any object occluded by the mask is being inherited by the material itself. The first image shows how the grid lines deform around the sphere, which is being masked by a plane that intersects it.

Secondly, I’d like the grid to be based on the local position of the mesh it’s applied to, but keep its spacing in world scale. I’ve tried using ActorPosition (Absolute) to adjust for this, but that node isn’t allowed in post-process materials.
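One workaround I’ve been sketching, though I’m not sure it’s the right approach, is to push the plane’s transform into the post-process material through a Material Parameter Collection and rebuild plane-local coordinates from AbsoluteWorldPosition in a Custom node. Roughly (all parameter names are placeholders):

```hlsl
// Sketch of a possible workaround (untested): feed the plane's transform in
// through a Material Parameter Collection, since ActorPosition isn't
// available here, and rebuild plane-local coordinates from the
// AbsoluteWorldPosition input. PlaneOrigin/PlaneRight/PlaneForward are
// placeholder MPC parameters, not engine names.
float2 PlaneLocalUV(float3 WorldPos, float3 PlaneOrigin, float3 PlaneRight, float3 PlaneForward)
{
    // Offset in world units, so grid spacing stays world-scaled while the
    // origin and rotation still track the mesh.
    float3 Offset = WorldPos - PlaneOrigin;
    return float2(dot(Offset, PlaneRight), dot(Offset, PlaneForward));
}
```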

The second image is the material graph in use; the third image is the material function used to draw the grid lines themselves.
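For reference, the grid function boils down to something like this HLSL (a sketch of the graph’s logic, not the literal nodes):

```hlsl
// Rough HLSL equivalent of the grid-line material function (a sketch of the
// logic, not the actual graph). UV is the plane-space coordinate; CellSize
// and LineWidth are assumed inputs.
float GridMask(float2 UV, float CellSize, float LineWidth)
{
    float2 Cell = UV / CellSize;
    // Distance to the nearest grid line on each axis, in cell units.
    float2 Dist = abs(frac(Cell - 0.5) - 0.5);
    // Screen-space derivatives keep the lines anti-aliased at any distance.
    float2 AA = fwidth(Cell);
    float2 Lines = 1.0 - smoothstep(LineWidth - AA, LineWidth + AA, Dist);
    return saturate(max(Lines.x, Lines.y));
}
```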

The world position/depth of the underlying plane is obscured by the other geometry and cannot be sampled from that view. You can use the virtual plane coordinates nodes in the material to generate a coordinate system for the grid that ignores all real geometry.

I tried with the VirtualPlaneCoordinates and ObjectAlignedVirtualPlaneCoordinates nodes, and while both do address the ‘warping’ issue present in my first image, their results are either view-aligned or otherwise not behaving correctly.

I wonder if I’m going about this implementation the wrong way. My intention is to be able to apply dynamic textures to objects, locally aligned and scaled, while simultaneously forcing those objects to render above or below certain other objects regardless of world position.

You can’t force opaque objects to draw in any order other than their depth-sorted order. The pixels behind the frontmost objects literally do not exist. (Translucent meshes don’t write to the depth buffer, which is why the opaque pixels behind them still get drawn.) This is critical for efficient rendering; otherwise every pixel would need to be shaded numerous times, leading to significant overdraw.
You can kind of fake it in specific cases, like the one I described, but that’s about it.

Those functions do work, and they are not intrinsically view-aligned; it depends on the vectors you supply them.
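To illustrate, the underlying idea is just a ray-plane intersection, and the basis vectors you feed in are what determine the alignment. A sketch of the math (not the engine’s actual implementation):

```hlsl
// Illustration of the idea behind the virtual-plane nodes (not the engine's
// actual code): intersect each pixel's camera ray with a mathematically
// defined plane, so real geometry is ignored entirely. PlaneU/PlaneV are
// the basis vectors that decide the resulting grid's alignment.
float2 VirtualPlaneUV(float3 CamPos, float3 WorldPos, float3 PlanePoint,
                      float3 PlaneNormal, float3 PlaneU, float3 PlaneV)
{
    float3 Dir = normalize(WorldPos - CamPos);
    float Denom = dot(Dir, PlaneNormal);
    // Guard against rays parallel to the plane.
    Denom = abs(Denom) < 1e-5 ? 1e-5 : Denom;
    float T = dot(PlanePoint - CamPos, PlaneNormal) / Denom;
    float3 Hit = CamPos + Dir * T;
    float3 Offset = Hit - PlanePoint;
    return float2(dot(Offset, PlaneU), dot(Offset, PlaneV));
}
```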

I’m still in the process of transferring my knowledge from Unity to Unreal, so thank you for your patience. I wholly agree that ‘not working’ is more a lack of implementation knowledge on my part than a non-functioning node.

I understand that the two engines are engineered differently and that there’s an overdraw hit, but is it really just not possible to override the depth sort order for an object? I thought that was one of the uses of writing the object to the CustomDepth buffer as a solid color and using a stencil check to mask it back in?

I have a workaround in mind that would render the scene from a secondary camera set to only see certain tagged objects, then overlay that as a masked 2D render texture, but I’m not sure how to implement that in UE yet (or how much performance it would cost in UE).
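(My assumption is this would be a SceneCaptureComponent2D restricted to a show-only list, written to a render target and composited back in a post-process material; the composite itself would presumably just be an alpha lerp, something like the sketch below.)

```hlsl
// Sketch of the composite step, assuming the secondary capture lands in a
// render target (Overlay) whose alpha is zero wherever nothing was drawn.
float3 CompositeOverlay(float3 SceneColor, float4 Overlay)
{
    // Masked 2D overlay: the capture wins wherever it drew something.
    return lerp(SceneColor, Overlay.rgb, Overlay.a);
}
```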

I’ve been reading through this thread, which indicates that it used to be possible back in the UE3 days under forward rendering. (My current project does indeed use forward rendering, since it’s a VR title.)

I used to achieve this type of effect in Unity by setting the material’s custom render queue property to a value above the one assigned to the transparency pass.

From a technical perspective it is generally possible with forward renderers. But since Unreal has been moving toward focusing on deferred rendering, its forward renderer is not in the most useful state.

The custom depth buffer is a second, selective depth buffer that exists in addition to the main Z pass, but because it stores only depth, none of the other pixel shading runs for it.
So while it could be used to project a single object in front, you would need to depth-project textures yourself, since no pixel shading happens there. And because there is only one extra depth pass, multiple objects in the custom depth pass will obscure each other just as they would in the normal depth pass.
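For example, the usual per-pixel check looks something like this sketch, with CustomDepth and SceneDepth assumed to be wired in from SceneTexture nodes:

```hlsl
// Sketch of the per-pixel test described above. CustomDepth and SceneDepth
// are assumed to come from SceneTexture:CustomDepth / SceneTexture:SceneDepth.
// Where nothing wrote custom depth, the buffer is assumed to hold a very
// large cleared value.
float OccludedMask(float CustomDepth, float SceneDepth)
{
    bool bWritten = CustomDepth < 1.0e7;      // a tagged object covers this pixel
    bool bBehind  = CustomDepth > SceneDepth; // ...but opaque geometry hides it
    // 1 where a tagged object is occluded; this is where you would have to
    // depth-project a texture yourself, since its real shading never ran.
    return (bWritten && bBehind) ? 1.0 : 0.0;
}
```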

There is also the pixel depth offset, which can push pixels backward in the depth buffer, but not forward. Even then, only the object in front will have its fragments shaded.

I’m guessing it would require editing the engine source code to re-add the ability to explicitly change the sorting of opaque meshes.

Can you tell me more about the visual effect you want to achieve?

We’re developing VR content, which, as I understand it, currently necessitates using forward rendering in UE to hit our performance target (90 fps).

The desired effect is three-part:
Part one wraps the user’s immediate play-space inside an unlit, near-black, ~10m-diameter cylinder. This cylinder is intended to occlude all assets/environmentals/post-processing which aren’t tagged for the space, even those within the 10m diameter (creating a dark purgatory/Niflheim feel). As this material has only color and no texture detail, the effect works well.

Part two uses a dynamic non-texture material to draw out detail with variable falloff on a plane positioned somewhere between the feet and chest of the user. This plane serves to anchor the user in the void and restore some stability. (Users didn’t like being entirely alone with the void.) The grid is essentially a placeholder/mapping space for the final projection design.

Part three allows a small collection of assets to render through the void. They can range from diegetic UI elements to other entities within the greater environment, but they should draw above/below the space and the anchoring plane as needed.

My first thought was to just move the user to this void as a separate space, but that has its own limitations when it comes to transitioning visually between the spaces. Additionally, it would conflict with being able to mask in key environmental items without also relocating them. Interestingly enough, the undesirable behavior in the first image does have a potential use case as an animated echolocation-type effect that can temporarily ‘trace’ the real world back into the void.

Custom depth may work for that case. But why not just toggle rendering/visibility of the objects altogether? One way is the bool for in-game visibility. Another is in the material, where you can use the masked blend mode to turn objects invisible. You can use a sphere mask in the shader to make sure the ground remains visible under the player’s feet.
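A minimal sketch of that sphere mask, similar in spirit to the engine’s built-in SphereMask node (PlayerPos would be fed in each frame, e.g. through a Material Parameter Collection; names are placeholders):

```hlsl
// Minimal sphere-mask sketch: 1 near the player, fading to 0 at Radius.
// WorldPos is the pixel's world position; PlayerPos is assumed to be fed in
// through a Material Parameter Collection each frame.
float GroundVisibility(float3 WorldPos, float3 PlayerPos, float Radius, float Hardness)
{
    float Dist = distance(WorldPos, PlayerPos);
    // Hardness near 1 gives a sharp edge; near 0, a long falloff.
    return saturate((Radius - Dist) / max(Radius * (1.0 - Hardness), 1e-4));
}
```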

I shied away from toggling visibility of scene assets, as that would require iterating through each loaded object’s values, and I’m not fond of how that scales as scenes grow more detailed and complex. Not to mention it would rule out transition effects on opaque assets when entering and exiting the void, as those items would simply pop in/out.

With regard to the masked material mode, do you mean adding a lerp to the asset’s material that transitions from its material color(s) to the void color(s), visually hiding it, with the lerp value read from the stencil buffer to crossfade without needing alpha transparency on every material?

I’ve been working through using bit testing with the Custom Depth Stencil Write Mask values to separate key items into layers, which gives me some amount of z-depth sorting (in the masks, at least). However, it still suffers from the missing pixel data caused by occlusion in earlier passes.
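The bit test itself is simple enough in a Custom node; a sketch of what I’m doing, with the stencil value assumed to come from a SceneTexture:CustomStencil node:

```hlsl
// Sketch of the layer bit test. CustomStencil is assumed to be wired in from
// a SceneTexture:CustomStencil node; each layer owns one bit of the 8-bit
// stencil via the mesh's Custom Depth Stencil Write Mask.
float StencilLayerMask(float CustomStencil, float LayerBit)
{
    uint Stencil = (uint)CustomStencil;
    uint Bit = (uint)LayerBit;
    // 1 where this layer's bit is set for the pixel.
    return ((Stencil & Bit) != 0) ? 1.0 : 0.0;
}
```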

Correct on the masking. You can dither if you need a smooth transition, but since it’s all in-shader you don’t pay any CPU cost for hiding them (aside from changing a single global float) like you would with a visibility toggle. This can be driven by a global Material Parameter Collection parameter.
And unlike custom depth, you’ll get full shading on any object still drawn.
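As a rough sketch only (the engine’s DitherTemporalAA material function is another option), the fade could look something like this in a Custom node, driven by that one global scalar; all names here are placeholders:

```hlsl
// Sketch of the dithered hide/show. Visibility is a single global scalar
// (e.g. from a Material Parameter Collection); SvPosition is the pixel
// position. The result feeds the masked material's opacity mask.
float DitheredOpacityMask(float Visibility, float2 SvPosition)
{
    // 4x4 ordered (Bayer) dither thresholds.
    const float Bayer[16] = {
         0.0/16.0,  8.0/16.0,  2.0/16.0, 10.0/16.0,
        12.0/16.0,  4.0/16.0, 14.0/16.0,  6.0/16.0,
         3.0/16.0, 11.0/16.0,  1.0/16.0,  9.0/16.0,
        15.0/16.0,  7.0/16.0, 13.0/16.0,  5.0/16.0
    };
    uint2 P = (uint2)SvPosition % 4;
    float Threshold = Bayer[P.y * 4 + P.x];
    // Pixels survive the opacity-mask clip while Visibility beats the threshold.
    return Visibility > Threshold ? 1.0 : 0.0;
}
```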