Creating a localized Pixelation Post-Processing Effect

Hello,
I’m really interested in trying to use a pixelation PP shader in a ‘localized’ manner.
That is, would it be possible to make a specific object render in a pixelated state, rather than the whole screen?

I originally saw this video, which spawned the question.
The final result from that:

Would this be done via post processing? Via materials? Could you make a material inform a post-process shader of ‘hey, render me pixelated’?
Can I use one of the parameters of the ‘SceneTexture’ node, or an alternative, to tell it which object should be rendered that way?

Just an idea I had that may be interesting. Does anyone know how to go about this, or have any suggestions?

After a bit more digging I learned about Custom Depth, and that I could use it to mask/filter objects on screen for precisely this kind of thing.
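In case it helps anyone following along, here is a minimal sketch of the non-material side of that setup in C++ (you can also just tick ‘Render CustomDepth Pass’ on the mesh in the Details panel; the project setting ‘Custom Depth-Stencil Pass’ has to be enabled, and set to ‘Enabled with Stencil’ if you want stencil values later):

```cpp
// Sketch: flag a mesh so it is written into the Custom Depth pass,
// e.g. somewhere in the owning actor's BeginPlay.
if (UStaticMeshComponent* Mesh = FindComponentByClass<UStaticMeshComponent>())
{
    Mesh->SetRenderCustomDepth(true);     // write this mesh into Custom Depth
    Mesh->SetCustomDepthStencilValue(1);  // optional: stencil value for filtering later
}
```

The post-process material can then mask the pixelation by comparing SceneTexture:CustomDepth against SceneTexture:SceneDepth.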

So with some playing around I got it to work like this:


What it looks like:


But that brought up another question for me.
Although I can control the strength of the ‘pixelation’ effect via the material instance, this affects all objects using Custom Depth that are within the post-processing volume.

So is there a way to individualize this? For example, by filtering between different Custom Depth Stencil values?
That is apparently something that can be set here:
[screenshot: the CustomDepth Stencil Value setting on the mesh component]

I could theoretically expose that as a parameter in the material, then add multiple instances to the Volume’s material array, but that would mean I would manually need to assign:

  1. a new Custom Depth Stencil value to each object I want individual strength control over,
    and
  2. a new material instance that then takes said value as its parameter input.

That sounds like a really inefficient way to do things, both in terms of performance and of simply working with it.
Is there a better way to approach this?
Could I perhaps drive this via a parameter inside the object that I want rendered at a different strength, and expose that to the material?
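One way I can think of (a sketch, not tested; the component name, the parameter names ‘Strength’ / ‘StencilValue’, and the exposed Volume reference are all made up) is a small component that each pixelated actor carries. At BeginPlay it tags the actor’s meshes with a stencil value, creates its own dynamic instance of the post-process material, sets the per-actor parameters, and adds that instance to the volume’s blendables. The material itself would have to compare SceneTexture:CustomDepthStencil against the ‘StencilValue’ parameter for the filtering to work.

```cpp
// PixelateComponent.h (sketch) - attach to any actor that should get its own pixelation strength.
#pragma once

#include "CoreMinimal.h"
#include "Components/ActorComponent.h"
#include "Components/PrimitiveComponent.h"
#include "Engine/PostProcessVolume.h"
#include "Materials/MaterialInstanceDynamic.h"
#include "PixelateComponent.generated.h"

UCLASS(ClassGroup=(Rendering), meta=(BlueprintSpawnableComponent))
class UPixelateComponent : public UActorComponent
{
    GENERATED_BODY()

public:
    // The pixelation post-process material (the one you would normally put in the volume's array).
    UPROPERTY(EditAnywhere)
    UMaterialInterface* PixelationMaterial = nullptr;

    // The post-process volume whose blendables the per-actor instance gets added to.
    UPROPERTY(EditAnywhere)
    APostProcessVolume* Volume = nullptr;

    // Per-actor settings.
    UPROPERTY(EditAnywhere)
    int32 StencilValue = 1;

    UPROPERTY(EditAnywhere)
    float Strength = 32.f;

protected:
    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        if (!PixelationMaterial || !Volume)
        {
            return;
        }

        // Tag all of the owner's primitives with this actor's stencil value.
        TArray<UPrimitiveComponent*> Prims;
        GetOwner()->GetComponents<UPrimitiveComponent>(Prims);
        for (UPrimitiveComponent* Prim : Prims)
        {
            Prim->SetRenderCustomDepth(true);
            Prim->SetCustomDepthStencilValue(StencilValue);
        }

        // One dynamic instance per actor, so every actor gets its own parameter values.
        PixelationMID = UMaterialInstanceDynamic::Create(PixelationMaterial, this);
        PixelationMID->SetScalarParameterValue(TEXT("Strength"), Strength);
        PixelationMID->SetScalarParameterValue(TEXT("StencilValue"), (float)StencilValue);

        // Add the instance to the volume at full weight.
        Volume->Settings.AddBlendable(PixelationMID, 1.f);
    }

    UPROPERTY()
    UMaterialInstanceDynamic* PixelationMID = nullptr;
};
```

The caveat is that, as far as I know, each blendable is its own full-screen pass of the material, so this probably does not scale well to a large number of individually pixelated objects.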

Another question would be: how do I keep the pixelation strength consistent regardless of how far away the camera is from the target object?

Currently, when moving towards or away from an object, the number of pixels it gets changes. I assume that is because the object is simply bigger or smaller on screen, so it ends up covering more or less screen space and therefore more or fewer pixels.


Additionally, in case anyone is interested, I noticed a strange ‘outline’ being created by the masking of the pixelation effect.
It turns out there is a simple way to get rid of it: in the material, simply change ‘Blendable Location’ to Before Tonemapping and it’s fixed.
[screenshot: the Blendable Location setting in the material]

After Tonemapping:


Before Tonemapping:

For the distance-consistency question above, I wanted to use CameraPosition/ActorPosition to scale the ‘Strength’ parameter based on distance, probably clamping it between a Min/MaxStrength.

Essentially, the strength would decrease as the camera moves away from the object, and vice versa.

But the problem is that because this is a ‘Material Domain - Post Process’ material, it can’t use Actor Position. I assume that is because a post-process material isn’t rendered in the context of any particular actor, so there is no actor position for it to read?
In any case, I can’t really think of a way to get the distance from the camera to the objects to calculate this.
Does anyone know of a way to do this?
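One way around that limitation is to measure the distance on the game thread and push it into the material as a scalar parameter every frame, since dynamic instance parameters can be updated per tick. A sketch, continuing the hypothetical component from earlier (MinDistance / MaxDistance / MinStrength / MaxStrength would be additional float UPROPERTYs, and the component needs PrimaryComponentTick.bCanEverTick = true):

```cpp
// Sketch: update the per-actor pixelation strength each frame based on camera distance.
// (needs #include "Kismet/GameplayStatics.h")
virtual void TickComponent(float DeltaTime, ELevelTick TickType,
                           FActorComponentTickFunction* ThisTickFunction) override
{
    Super::TickComponent(DeltaTime, TickType, ThisTickFunction);

    APlayerCameraManager* Camera = UGameplayStatics::GetPlayerCameraManager(this, 0);
    if (!Camera || !PixelationMID)
    {
        return;
    }

    const float Distance = FVector::Dist(Camera->GetCameraLocation(),
                                         GetOwner()->GetActorLocation());

    // Remap distance to [0..1] between MinDistance and MaxDistance, then into the strength range.
    const float Alpha = FMath::Clamp((Distance - MinDistance) /
                                     FMath::Max(MaxDistance - MinDistance, 1.f), 0.f, 1.f);
    const float NewStrength = FMath::Lerp(MaxStrength, MinStrength, Alpha);

    PixelationMID->SetScalarParameterValue(TEXT("Strength"), NewStrength);
}
```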

Well, the only way I have figured out to maintain a consistent look for the pixelated style is an ‘LOD-like’ approach: changing the pixelation strength programmatically, breaking it down into three strength steps based on distance.

The only issue I noticed with this approach is that when crossing one of the distance ‘thresholds’ there is a sharp cut-off line between the two strengths:

But that is primarily visible at lower pixelation strengths (higher pixel counts); otherwise it is very hard to notice.
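For what it’s worth, the hard cut at each threshold can be softened by blending between the neighbouring band strengths over a short transition zone instead of switching instantly. A sketch with made-up thresholds and strengths:

```cpp
// Sketch: stepped pixelation strength by distance, but with a soft blend around each threshold.
static float GetBandedStrength(float Distance)
{
    const float Thresholds[] = { 1000.f, 2500.f };   // band boundaries (cm) - example values
    const float Strengths[]  = { 64.f, 32.f, 16.f }; // strength per band - example values
    const float BlendWidth   = 200.f;                // width of the soft transition zone

    float Strength = Strengths[0];
    for (int32 i = 0; i < 2; ++i)
    {
        // Alpha ramps 0 -> 1 as the distance crosses the threshold over BlendWidth units.
        const float Alpha = FMath::Clamp((Distance - Thresholds[i]) / BlendWidth + 0.5f, 0.f, 1.f);
        Strength = FMath::Lerp(Strength, Strengths[i + 1], Alpha);
    }
    return Strength;
}
```

Feeding the result of that into the ‘Strength’ parameter each tick, instead of the hard three-step value, should get rid of the visible line.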

One thing I would like to find out, if it’s possible, is how to calculate the amount of space an object takes up on screen, and then use that to work out what fraction of the current pixel resolution the strength should be multiplied/divided by.
That would completely circumvent having to manually assign weights for different ranges; instead it could be boiled down to a single multiplier value that scales with distance.

But I’m not sure how one would do that yet, considering this is in post-processing space.
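A rough CPU-side option, assuming the actor’s bounding sphere is a good enough stand-in for the object: project the bounding radius against the camera FOV to get an approximate fraction of the screen the object covers, then scale the pixel-count parameter by that. A sketch (the function name is made up, and this is only an approximation):

```cpp
// Sketch: approximate fraction of the screen width an actor's bounding sphere covers.
// (assumes the usual engine includes: GameFramework/Actor.h, Camera/PlayerCameraManager.h)
static float GetApproxScreenFraction(const AActor* Actor, APlayerCameraManager* Camera)
{
    FVector Origin, BoxExtent;
    Actor->GetActorBounds(/*bOnlyCollidingComponents=*/ false, Origin, BoxExtent);
    const float Radius = BoxExtent.Size(); // crude bounding-sphere radius

    const float Distance   = FVector::Dist(Camera->GetCameraLocation(), Origin);
    const float HalfFOVRad = FMath::DegreesToRadians(Camera->GetFOVAngle() * 0.5f);

    // Half the screen width at this distance is Distance * tan(HalfFOV),
    // so Radius / (Distance * tan(HalfFOV)) is roughly the fraction of the full width covered.
    return FMath::Clamp(Radius / (FMath::Max(Distance, 1.f) * FMath::Tan(HalfFOVRad)), 0.f, 1.f);
}
```

Multiplying or dividing the pixel-count parameter by that fraction each tick would then give the single scaling value, rather than hand-tuned ranges.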