Defer to showing a pixel rather than not

My situation is this: I have an unlit, procedural material that has issues with shimmering at long distances when viewed off axis. Essentially, if the renderer has to decide whether or not to show a pixel, I want it to generally defer to showing it (i.e., at 30% show / 70% not show, I'd still want it to show the pixel). I can sort of do this with AA and render scale, but it costs performance and doesn't fully solve the problem; things still look shimmery. Here's an example of what it looks like now, and here's an example of what I'd ideally want it to look like.

100% render scale and no MSAA

200% render scale and 4x MSAA (no difference in visual quality above 4x MSAA)

Doing a 70/30 “random split” would just make it noisier, because getting temporal coherency when moving is super hard.

This is exactly the problem that anti-aliasing solves. Crank it up to 8x, it’ll look good. Keep it lower, it’ll look less good.

If that's not working for you, then you can make something like a 1024x256 texture that is white along the top and left edges by 16 pixels or so and black everywhere else, tile it 10x across your geometry, generate mipmaps, and turn the anisotropy up to 16x.
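The shader side of that is trivial; here it is sketched as plain HLSL (the names are placeholders, and the actual work is done by the mip chain and the sampler's anisotropy rather than the code):

```hlsl
// 1024x256: white 16px band along the top and left edges, black elsewhere.
Texture2D    LineTex;
// Anisotropic filtering, MaxAnisotropy = 16, wrap addressing.
SamplerState AnisoWrap;

float4 PS(float2 uv : TEXCOORD0) : SV_Target
{
    // Mips + 16x anisotropy prefilter the lines, so a distant pixel gets the
    // averaged line coverage instead of a harsh show/don't-show decision.
    return LineTex.Sample(AnisoWrap, uv * 10.0); // tiled 10x across the geometry
}
```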

it wouldn't be random; rather, it would ideally always defer to showing the lines
This is a fully procedural material; there are no textures, so I unfortunately can't just brute-force a resolution increase. I also can't increase the AA quality beyond about 4x, as there's no visible difference in material quality; MSAA mainly affects edges and does basically nothing for this material. In fact, the main driver of quality was the resolution scaling rather than any AA. Temporal AA does work somewhat, but it looks weird, so I avoid it. FXAA also looks mediocre at best, so I avoid it as well.

Essentially, if there were some way for me to manually tell the render pipeline what it should always show, that would be ideal. A post process, maybe? idk.

I mean, you can project the current pixel to screen space. That’s totally possible.
But I don’t think you’d actually get a good grid that way.

What you could do is project the currently rendered pixel into grid space, using screen-space derivatives, and calculate how large the patch is. Then you calculate the overlap percentage between the patch and the grid, and output a brightness that's directly proportional to the amount of coverage.
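Something like this in a Custom node, as a rough HLSL sketch (assuming `UV` is in grid cells with lines on integer coordinates, and `LineWidth` is the line width in cells; both would be node inputs, neither name is from an actual graph):

```hlsl
// How much of this pixel's footprint lands on the nearest grid line?
float2 p = max(fwidth(UV), 1e-6);      // patch size: pixel footprint in grid space
float2 d = abs(frac(UV - 0.5) - 0.5);  // distance to the nearest grid line
// Overlap of the patch [d - p/2, d + p/2] with the line band [-w/2, w/2]:
float2 overlap = max(0.0, min(d + 0.5 * p, 0.5 * LineWidth) - max(d - 0.5 * p, -0.5 * LineWidth));
float2 cover = overlap / p;            // coverage per axis, in [0, 1]
return max(cover.x, cover.y);          // brightness = line coverage
```

And since you want to err on the side of showing the line, you could bias the coverage upward afterwards, e.g. `saturate(cover * 2.0)`, rather than outputting it linearly.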

screen space derivatives?

DDX and DDY nodes in the material graph. They tell you how "big" the pixel is in mesh space.

Wrap this material on a sphere and you can see it get redder/greener the further away it is on screen (i.e., the more of the object is covered by a single pixel).
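That debug material boils down to something like this in a Custom node (HLSL sketch; `Scale` is just a made-up input to make the values visible):

```hlsl
// Red = how fast U changes across this pixel, green = how fast V changes.
// The further the surface, the more UV space one pixel covers, so the
// sphere gets redder/greener with distance.
float2 duv = abs(ddx(UV)) + abs(ddy(UV)); // same idea as fwidth(UV)
return float3(duv * Scale, 0.0);
```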

More info:

I wasn't quite able to get it working using your method, but I did get it working using world space + camera world position + a Distance node. Your method did help, though, by giving me a direction to work in, so thank you. I'm also using the Camera Vector node to make sure the math only applies when viewing the mesh off axis.

Here's the before and after. It does get fuzzy and kind of messy-looking at a distance, but it's better than before. Temporal AA works better with it as well.
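Roughly, the idea is something like this (Custom node HLSL sketch; this is a reconstruction, and every input name is a placeholder rather than the exact graph):

```hlsl
// Distance from the camera, gated by view angle, boosts the line mask
// toward "show the pixel" when far away and off axis.
float dist    = length(WorldPos - CameraPos);  // world space + camera position + Distance node
float grazing = 1.0 - saturate(abs(dot(CameraVec, SurfaceNormal))); // 0 head-on, 1 off axis
float boost   = 1.0 + BoostScale * dist * grazing;
return saturate(LineCoverage * boost);         // LineCoverage: the material's existing line mask
```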
