Hey y’all,
I have a post process material, and I want to access the world position of neighboring pixels. For context, I’m doing edge detection to draw lines along the edges of objects using normals and depth, and I’m hoping to use the world position to help determine concavity information (not sure if it’ll work out, but that’s the goal, anyway). However, my math is pretty rusty here, and I’m not getting the results I expect.
I’ve seen other posts showing the math for perspective projection, but I’m using an orthographic camera, which should be more straightforward, right? What am I missing here?
This is the result of rendering absolute world position:
Here is one of my first attempts, which uses the camera vector (similar to this perspective projection solution: Depth to World Position - #9 by Wontague). I'm a little unsure how I could look up neighbors with this one, but either way, I'm confused about why it doesn't work for ScreenPosition.
Here is another attempt that tries to first convert from screen space to view space, then to world space. Weirdly enough, I get the same result. I've actually gone through several other iterations, and all the ones I felt should work give me results similar to this. So I may be wrong, but at least I'm consistent?
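For what it's worth, here's the math I've been assuming for the screen → view → world step, sketched in Python outside the material graph so I could sanity-check it. All the names here are mine (this isn't engine code), and the axes are just an example setup:

```python
import numpy as np

def ortho_screen_to_world(uv, scene_depth, cam_pos, cam_right, cam_up, cam_fwd,
                          ortho_width, ortho_height):
    """Reconstruct world position for an orthographic camera.

    uv: screen UV in [0,1]^2, (0,0) at top-left.
    scene_depth: depth in world units (ortho depth is linear, no divide by w).
    cam_*: camera world position and orthonormal basis vectors.
    ortho_width/height: world-space extents of the ortho view volume.
    """
    view_x = (uv[0] - 0.5) * ortho_width    # centered horizontal offset
    view_y = (0.5 - uv[1]) * ortho_height   # flip Y: screen Y grows downward
    return (cam_pos
            + cam_right * view_x
            + cam_up * view_y
            + cam_fwd * scene_depth)

# Tiny sanity check: camera at the origin looking down +X.
cam_pos = np.array([0.0, 0.0, 0.0])
right   = np.array([0.0, 1.0, 0.0])
up      = np.array([0.0, 0.0, 1.0])
fwd     = np.array([1.0, 0.0, 0.0])

# Center pixel at depth 100 should land 100 units straight ahead.
p = ortho_screen_to_world((0.5, 0.5), 100.0, cam_pos, right, up, fwd,
                          512.0, 512.0)
print(p)
```

If this is right, then looking up a neighbor's world position would just be offsetting `uv` by one texel and sampling that pixel's depth instead, which is the part I want for the concavity test.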
Anyone know what I’m missing here? Or how to properly calculate world position from depth for neighboring pixels when using orthographic projection?