I sure hope it does, because that's what I need. There is OpenCV, but it's limited. I could try to use machine learning to analyze geometry, but I don't know how to do that.
Can you render pixel-depth to a render-target or some other output?
In what context do you need this information? In a still image, a moving image, etc.?
I would like this as an off-screen recording from a camera in my game engine. If I can't get a color image from the same camera, I could superimpose two cameras on top of each other, right?
In this context, I'm unsure. I'd have to think about it and will post if I come up with an answer.
Getting anything from a camera offscreen would be doable, and I don't see why it couldn't be in color if you wanted it that way. The depth is the tricky part… Do you need the depth per pixel, or just the depth of the camera?
If it's more 'distance' you need to measure, what about a line-trace from the other camera to whatever you're targeting?
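To illustrate the line-trace idea, here's a minimal Python sketch of what a trace does under the hood: cast a ray and measure the distance to the first hit. I'm using a sphere as a stand-in target, and the function and parameter names are my own, not any engine API:

```python
import math

def line_trace_distance(origin, direction, sphere_center, sphere_radius):
    """Distance from a ray origin to the first hit on a sphere, or None on a miss.
    A stand-in for an engine line trace against a target object."""
    # Normalize the ray direction
    mag = math.sqrt(sum(d * d for d in direction))
    d = [c / mag for c in direction]
    # Vector from the ray origin to the sphere center
    oc = [sc - o for sc, o in zip(sphere_center, origin)]
    # Project it onto the ray to find the closest approach
    t = sum(a * b for a, b in zip(oc, d))
    closest_sq = sum(c * c for c in oc) - t * t
    if t < 0 or closest_sq > sphere_radius ** 2:
        return None  # target is behind the camera or the ray misses
    # Step back from the closest approach to the entry point on the surface
    return t - math.sqrt(sphere_radius ** 2 - closest_sq)

# Camera at the origin looking down +X at a unit sphere centered at (5, 0, 0):
# the ray enters the sphere at x = 4, so the trace distance is 4.0
dist = line_trace_distance((0, 0, 0), (1, 0, 0), (5, 0, 0), 1.0)
```

In an actual engine you'd just call the built-in trace, but the returned hit distance is the same idea.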
I know the custom-depth buffer can be written to, but I'm unsure exactly what gets written; I'm just not very experienced with it. I'm unsure whether it writes a stencil-style layer value, like 1, or the actual depth; you might want to play with it to test.
EDIT: are these the droids you are looking for? I believe what you want is a depth map?
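On the depth-map point: once you can read per-pixel depth into an array, turning it into a viewable image is just a normalization pass. A minimal sketch with NumPy, where the near/far clip values and the function name are my assumptions, not anything engine-specific:

```python
import numpy as np

def depth_to_grayscale(depth, near=0.1, far=100.0):
    """Map raw per-pixel depth values to an 8-bit grayscale depth map.
    Depths are clipped to [near, far] and scaled so near = white, far = black."""
    d = np.clip(depth, near, far)
    norm = (far - d) / (far - near)  # 1.0 at the near plane, 0.0 at the far plane
    return (norm * 255).astype(np.uint8)

# A tiny 2x2 "depth buffer": close pixels on one diagonal, far ones on the other
depth = np.array([[0.1, 50.0],
                  [100.0, 0.1]])
img = depth_to_grayscale(depth)
# Near pixels come out 255 (white), the far pixel 0 (black)
```

That grayscale array is what most people mean by a "depth map", and you could feed it straight into OpenCV from there.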