If I use ReconstructWorldPositionAndCameraDirectionFromDeviceZ to get the world position of a pixel on one hand, and on the other hand cast a ray from CreatePrimaryRay(UV) and then compute Ray.Origin + Payload.HitT * Ray.Direction, there is about a 0.5% difference between the two world positions. The difference is stably periodic from the origin, at least in X and Y, with a moiré pattern on top that shifts when the camera is rotated. It looks like a numerical-precision difference to me. Do you know which one is more accurate?
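For reference, here is a minimal CPU-side sketch of the two paths I am comparing, done in plain NumPy float32 to mimic shader precision. The camera setup, positions, and matrix conventions are my own for illustration, not the engine's code; the point is only that the depth-buffer path round-trips through a projection matrix and its inverse, while the ray path is a single fused multiply-add from the origin:

```python
import numpy as np

f32 = np.float32

def look_at(eye, target, up):
    # Right-handed view matrix, column-vector convention (my convention,
    # not necessarily UE's; only the round-trip matters here).
    f = target - eye; f = f / np.linalg.norm(f)
    r = np.cross(f, up); r = r / np.linalg.norm(r)
    u = np.cross(r, f)
    m = np.eye(4, dtype=f32)
    m[0, :3], m[1, :3], m[2, :3] = r, u, -f
    m[:3, 3] = -m[:3, :3] @ eye
    return m

def perspective(fovy, aspect, zn, zf):
    # Standard perspective projection (GL-style depth range).
    t = 1.0 / np.tan(fovy / 2)
    m = np.zeros((4, 4), dtype=f32)
    m[0, 0] = t / aspect
    m[1, 1] = t
    m[2, 2] = (zf + zn) / (zn - zf)
    m[2, 3] = 2 * zf * zn / (zn - zf)
    m[3, 2] = -1.0
    return m

eye = np.array([50000.0, 30000.0, 200.0], dtype=f32)  # camera far from the world origin
p   = np.array([50700.0, 30350.0, 180.0], dtype=f32)  # hit point a few hundred units away

view = look_at(eye, p, np.array([0, 0, 1], dtype=f32))
proj = perspective(np.pi / 3, 16 / 9, f32(10.0), f32(100000.0))
vp   = proj @ view

# Path A (like the depth-buffer reconstruction): project to NDC,
# then reconstruct the world position through the inverse view-projection.
clip = vp @ np.append(p, f32(1.0))
ndc  = (clip / clip[3]).astype(f32)
inv_vp = np.linalg.inv(vp)
h = inv_vp @ ndc
pos_depth = (h[:3] / h[3]).astype(f32)

# Path B (like Ray.Origin + HitT * Ray.Direction): one scaled add.
d = (p - eye).astype(f32)
hit_t = f32(np.linalg.norm(d))
direction = (d / hit_t).astype(f32)
pos_ray = (eye + hit_t * direction).astype(f32)

err_depth = np.linalg.norm(pos_depth - p)
err_ray   = np.linalg.norm(pos_ray - p)
print("depth-path error:", err_depth, "ray-path error:", err_ray)
```

The absolute errors depend on how far the camera sits from the world origin, which matches the periodic pattern I am seeing.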
Similarly, if I compare ConvertToDeviceZ(HitT) with SceneDepthTexture.Load(int3(pixelPos, 0)).r, there is a difference that is very small at the center of the screen but becomes more noticeable towards the corners. This one looks smooth, with no patterns. Again, which one is more accurate?
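In case it is relevant: my understanding is that if Ray.Direction is normalized, then HitT is a Euclidean distance along the ray, while scene depth is view-space Z, and the two only coincide at the screen center. A tiny sketch of that geometric factor (my own toy numbers, not engine code; please correct me if the engine already accounts for this):

```python
import math

def viewz_from_hit_t(hit_t, theta):
    # theta = angle between the primary ray and the camera forward axis.
    # View-space depth is the ray distance projected onto the forward axis.
    return hit_t * math.cos(theta)

center = viewz_from_hit_t(1000.0, 0.0)                 # on-axis: identical
corner = viewz_from_hit_t(1000.0, math.radians(30.0))  # 30 degrees off-axis
rel = (1000.0 - corner) / 1000.0                       # relative gap at the corner
print(center, corner, rel)
```

That factor is 1 at the center and shrinks smoothly towards the corners, which would match the smooth, radially growing difference, independent of any precision question.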
That assumes there are no tessellated materials; with tessellation there is a natural difference, since rays are traced against the untessellated meshes…