Inverse Projection Matrix - Custom HLSL Node

Thanks for sharing that link. I’ve tested mine in a couple of depth-aware algorithms too, and it seems to work as I expect. I wonder if we’re just using different entry points in the API to achieve the same result: we both build normalized device coordinates from the pixel’s screen position and depth, and from there we both transform from clip space to world space, just through different entry points.
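To make sure we’re talking about the same first step, here’s a rough sketch of how I’m building the NDC (input names `ScreenUV` and `SceneDepth` are just placeholders for whatever your custom node’s pins are called):

```hlsl
// Hedged sketch of the shared first step. Assumes a Material custom
// node with inputs ScreenUV (0..1 viewport UV) and SceneDepth
// (perspective scene depth in world units); both names are mine.

// Map UV to normalized device coordinates. Y is negated because UV
// runs top-down while NDC runs bottom-up.
float2 NDC = float2(2.0f, -2.0f) * ScreenUV + float2(-1.0f, 1.0f);

// Homogeneous position ready for a clip-to-world style transform.
float4 HomPos = float4(NDC * SceneDepth, SceneDepth, 1.0f);
```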

As far as I can tell from the digging I’ve done, View.ScreenToTranslatedWorld and ResolvedView.ClipToTranslatedWorld are functionally equivalent. My guess is there isn’t a single entry point because of how the engine exposes HLSL to the Material Editor, but I’m just speculating.
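For what it’s worth, this is how I picture the two paths lining up. It’s a sketch, not something I’ve verified against every engine version; I’m assuming `NDC` (float2, -1..1 range), `SceneDepth` (world-unit depth), and `DeviceZ` (raw depth-buffer value) are already in hand:

```hlsl
// Sketch: if the two matrices really are functionally equivalent,
// both paths should land on the same translated-world point.

// Path A: ScreenToTranslatedWorld, fed (NDC * Depth, Depth, 1),
// following the idiom in the engine's deferred shading code.
float3 A = mul(float4(NDC * SceneDepth, SceneDepth, 1.0f),
               View.ScreenToTranslatedWorld).xyz;

// Path B: ClipToTranslatedWorld, fed a clip-space position with
// w = 1, followed by a perspective divide.
float4 Hom = mul(float4(NDC, DeviceZ, 1.0f),
                 ResolvedView.ClipToTranslatedWorld);
float3 B = Hom.xyz / Hom.w;
```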

And I think my last two lines are doing the same thing as your DFFastSubtract() line, but I can’t find any documentation on it. I’m not sure about the .High component either; the high-precision part, maybe? I wonder if your implementation, using this function, is safer or more precise for Unreal’s Large World Coordinates. I don’t know.
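My unverified reading, going only by the DoubleFloat naming, is that LWC values are stored as a High + Low float pair, where High carries the most significant bits and Low the residual. If that’s right, the subtraction would look something like this (struct layout and function body are my guess, not the engine’s actual code):

```hlsl
// Guess at the idea behind the High/Low pair; names are illustrative.
struct DFScalar
{
    float High; // most significant part of the value
    float Low;  // residual: the full value is roughly High + Low
};

// A "fast subtract" would then peel off the large High part first,
// so the big components cancel before precision is lost, which is
// what my two manual subtract lines were doing.
float DFFastSubtractSketch(float A, DFScalar B)
{
    return (A - B.High) - B.Low;
}
```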