I’d like to create fullscreen-compositing-style effects in VR: for instance, crossfading between two VR cameras, or rendering one VR camera on top of another with additive or alpha blending.

For flatscreen, I know the standard way to do this would be a SceneCaptureComponent2D capturing to a render target, with that render target then applied to the screen in a post-process material. For VR, though, I’d presumably need two SceneCaptureComponent2Ds rendering to separate render targets, one per eye, and I’d probably lose motion vectors for async reprojection.

Is there any way to literally take two VR cameras and composite them AFTER all the normal post-processing (and reprojection)? Could it be done if the render pipeline were ripped apart? Or is the double-render-target approach my best bet?
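For concreteness, here’s roughly what I mean by the flatscreen setup, as a minimal sketch: the actor class, MainCamera, CompositeMaterial, and the "CaptureTex" parameter name are all placeholders of mine, and the usual UCLASS/UPROPERTY boilerplate is omitted.

```cpp
#include "GameFramework/Actor.h"
#include "Camera/CameraComponent.h"
#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"
#include "Materials/MaterialInstanceDynamic.h"

// Hypothetical actor: MainCamera and CompositeMaterial would be assigned
// elsewhere (e.g. in the editor). Names are placeholders, not engine API.
class AMyCompositeActor : public AActor
{
public:
    UCameraComponent* MainCamera = nullptr;          // the camera the player actually sees through
    UMaterialInterface* CompositeMaterial = nullptr; // post-process material sampling a "CaptureTex" texture param

    void SetupCapture();
};

void AMyCompositeActor::SetupCapture()
{
    // Render target the "second camera" draws into.
    UTextureRenderTarget2D* RT = NewObject<UTextureRenderTarget2D>(this);
    RT->InitAutoFormat(1920, 1080);

    // Scene capture acting as the second camera, re-rendering every frame.
    USceneCaptureComponent2D* Capture = NewObject<USceneCaptureComponent2D>(this);
    Capture->RegisterComponent();
    Capture->TextureTarget = RT;
    Capture->bCaptureEveryFrame = true;
    Capture->CaptureSource = ESceneCaptureSource::SCS_FinalColorLDR;

    // Feed the capture into a post-process material blended onto the main camera,
    // where the crossfade / additive / alpha compositing happens.
    UMaterialInstanceDynamic* MID = UMaterialInstanceDynamic::Create(CompositeMaterial, this);
    MID->SetTextureParameterValue(TEXT("CaptureTex"), RT);
    MainCamera->PostProcessSettings.AddBlendable(MID, 1.0f);
}
```

(I’m using SCS_FinalColorLDR here because I’d want the capture after its own tonemapping/post-processing rather than raw scene color, but that choice is part of what I’m unsure about for the VR case.)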
Many thanks!