Hi, I have a SceneCapture2D with a render texture and a post-processing shader that generates a polar projection based on each pixel's distance to the camera. I then transform that projection back into a Cartesian one, effectively simulating a sonar image.
The problem is that each step requires a different image size. If I set the render texture to the correct output size, the SceneTexture is also reduced to that size, instead of capturing the whole scene at full resolution.
Is there a way to set this up, maybe by separating the three steps and using intermediate buffer render textures of the appropriate size, or something similar?
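
Something like the sketch below is what I'm imagining (UE C++, though it could equally be done in Blueprint). The `ASonarActor` class, the `SceneCapture` / `PolarMaterial` / `CartesianMaterial` members, the sizes and the `InputTex` parameter name are just placeholders, and it assumes the two projection materials could be reworked to sample a plain texture parameter instead of SceneTexture, so they're no longer tied to the capture resolution:

```cpp
// Rough sketch of a three-pass chain with separate render targets.
// SceneCapture, PolarMaterial and CartesianMaterial are assumed to be
// UPROPERTY members of this actor, set up in the editor.

#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"
#include "Kismet/KismetRenderingLibrary.h"
#include "Materials/MaterialInstanceDynamic.h"

void ASonarActor::UpdateSonar()
{
    // Pass 1: capture the scene at full resolution into its own render target.
    UTextureRenderTarget2D* CaptureRT =
        UKismetRenderingLibrary::CreateRenderTarget2D(this, 1024, 1024, RTF_RGBA16f);
    SceneCapture->TextureTarget = CaptureRT;
    SceneCapture->CaptureScene();

    // Pass 2: polar projection (e.g. range x bearing) at its own size,
    // reading the capture through a texture parameter rather than SceneTexture.
    UTextureRenderTarget2D* PolarRT =
        UKismetRenderingLibrary::CreateRenderTarget2D(this, 512, 256, RTF_RGBA16f);
    UMaterialInstanceDynamic* PolarMID = UMaterialInstanceDynamic::Create(PolarMaterial, this);
    PolarMID->SetTextureParameterValue(TEXT("InputTex"), CaptureRT);
    UKismetRenderingLibrary::DrawMaterialToRenderTarget(this, PolarRT, PolarMID);

    // Pass 3: transform back to Cartesian at the final sonar-image size.
    UTextureRenderTarget2D* SonarRT =
        UKismetRenderingLibrary::CreateRenderTarget2D(this, 800, 600, RTF_RGBA16f);
    UMaterialInstanceDynamic* CartMID = UMaterialInstanceDynamic::Create(CartesianMaterial, this);
    CartMID->SetTextureParameterValue(TEXT("InputTex"), PolarRT);
    UKismetRenderingLibrary::DrawMaterialToRenderTarget(this, SonarRT, CartMID);
}
```

Is that roughly the right approach, or is there a better way to decouple the capture size from the sizes of the later passes?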