Reconstituting world-space depth differential information from directional cascade shadow maps.

Hello,

I’m working on a bit of code that does an approximate deep-shadow lookup using world-space distance information. I’ve got this working fine for all other shadow types, but am having trouble finding the right parameters to reconstruct this information for directional lights.

As an example of what I’m looking for: suppose I have two planes next to each other in the shadow map, spaced 300 cm apart. In the shadow map, their depths will look something like 0.32148 and 0.32950. I need the parameters necessary to take the delta (0.00802) and get back 300.
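
For a directional cascade the projection is orthographic, so the stored depth should just be a linear remap of light-space Z, meaning the reconstruction ought to be a single scale (any constant bias cancels in the delta). A minimal sketch of what I mean, with made-up names (the open question is which engine parameters actually supply that scale):

```cpp
// Minimal sketch, assuming the cascade stores linearly encoded depth, i.e.
// StoredDepth = (LightSpaceZ - MinZ) / DepthRange. All names here are made up.
float ReconstructWorldDelta(float StoredDepthA, float StoredDepthB, float DepthRangeWorldUnits)
{
	// Any constant bias in the encoding cancels when taking the difference,
	// so only the depth range (in world units) is needed.
	return (StoredDepthB - StoredDepthA) * DepthRangeWorldUnits;
}
```

With the numbers above, a depth range of roughly 37,400 cm would turn the 0.00802 delta back into ~300 cm; what I can’t pin down is which engine values represent that range for a cascade.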

I’ve tried looking at InvDeviceZToWorldZTransform (which is supposed to work for both perspective and orthographic projections), as well as using the DepthBias and InvMaxSubjectDepth information, but neither really gives me what I’m expecting. Similarly, I’ve had a look at most of the matrices available in FProjectedShadowInfo, but can’t seem to find anything that fits the bill.

Does anyone know what the proper way of calculating a world-space distance from the directional cascade map is?

Any advice is greatly appreciated.

Thanks,

J

The function you are looking for is ConvertFromDeviceZ.
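
From memory (double-check Common.ush in your engine version), it boils down to the math below; written as C++ here purely for illustration, with Tx..Tw standing for the four components of View.InvDeviceZToWorldZTransform:

```cpp
// Illustrative C++ rendition of the ConvertFromDeviceZ shader math. Tx..Tw are
// the components of View.InvDeviceZToWorldZTransform, which is built on the CPU
// from the projection matrix (see CreateInvDeviceZToWorldZTransform).
float ConvertFromDeviceZ(float DeviceZ, float Tx, float Ty, float Tz, float Tw)
{
	// The first two components drive the orthographic case and the last two the
	// perspective case, so one expression covers both projection types.
	return DeviceZ * Tx + Ty + 1.0f / (DeviceZ * Tz - Tw);
}
```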

I’ve tried ConvertFromDeviceZ, which relies on InvDeviceZToWorldZTransform, and that’s exactly the problem: the information in that FVector isn’t the right data for ConvertFromDeviceZ to return a correct value in my case. I should mention this is being done in an independent pass, so I have to bind this information to the shader manually.

So, I think I may have found a way to reconstitute this information, but I’m unsure how accurate it is (it seems to be in the right ballpark for what I’m doing).

When looking at the forward-lighting directional shadow reads, I noticed that a fixed scale of 4000 is multiplied both into the value calculated for the shadow depth and into the value read back from the shadow map.

I think this value is wrong, probably some old temp code or similar. However, I did find that if I substitute that scale with one built from (MaxSubjectZ - MinSubjectZ) / InvMaxSubjectDepth, the values I get from diffing my results are very near what they should be in world space.
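
Roughly, the substitution looks like this on the CPU side (Shadow is the FProjectedShadowInfo; the rest is my own naming, so treat it as a sketch rather than verified engine code):

```cpp
// Sketch of the substitution described above; Shadow is an FProjectedShadowInfo.
// DepthScale replaces the hardcoded 4000 from the forward-lighting shadow read
// and gets bound to my independent pass.
const float DepthScale = (Shadow.MaxSubjectZ - Shadow.MinSubjectZ) / Shadow.InvMaxSubjectDepth;

// In the shader, the world-space difference between two shadow-map samples
// is then roughly:
//   WorldDelta = (StoredDepthB - StoredDepthA) * DepthScale
```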

Posting it here for completeness; if anyone knows whether this is correct or not, please let me know. It does seem to do the trick, though.