Inserting compute shaders between VR render and display

I am looking into simulating a passthrough VR experience using Unreal 5. In other words, pretending the image on the VR display comes from a “real world” camera on the front of the headset. I have worked a bit with the VR Template and understand the basics of how it works. I have two main questions:

  1. I would like to render the HMD left and right images at a position that is NOT the VR pawn camera root location (i.e., a few centimeters in front of the user's eyes). I understand this would cause user disorientation; emulating that effect is exactly my intent. (A rough sketch of what I mean is below, after this list.)

  2. I would like to intercept the rendered images/depth buffers BEFORE they are sent to the device and insert some of my own compute shaders. Ideally OpenCL, as I already have kernels implemented. I'm a bit lost as to where to get started and what the best approach would be (my current guess is sketched below, after this list). Is this even feasible? Where in the engine code does the per-eye rendering actually take place?
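To make (1) concrete, here is roughly the kind of pawn setup I have in mind after poking at the VR Template: keep the HMD-locked camera, but slide its parent component forward along the headset's gaze each tick. The class and property names are my own placeholders, I haven't verified that this actually offsets the stereo views the way I expect, and the project would need the "HeadMountedDisplay" module listed in its Build.cs.

```cpp
// PassthroughPawn.h -- placeholder name. A minimal pawn whose rendered
// viewpoint sits a few centimeters in front of the tracked eye position.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Pawn.h"
#include "Camera/CameraComponent.h"
#include "HeadMountedDisplayFunctionLibrary.h"
#include "PassthroughPawn.generated.h"

UCLASS()
class APassthroughPawn : public APawn
{
    GENERATED_BODY()

public:
    APassthroughPawn()
    {
        PrimaryActorTick.bCanEverTick = true;

        // Plain root so the tracking origin can be offset without moving the actor.
        RootComponent = CreateDefaultSubobject<USceneComponent>(TEXT("Root"));

        // VROrigin is the tracking-space origin; the camera follows the HMD under it.
        VROrigin = CreateDefaultSubobject<USceneComponent>(TEXT("VROrigin"));
        VROrigin->SetupAttachment(RootComponent);

        Camera = CreateDefaultSubobject<UCameraComponent>(TEXT("Camera"));
        Camera->SetupAttachment(VROrigin);
        Camera->bLockToHmd = true; // camera pose driven by the HMD each frame
    }

    virtual void Tick(float DeltaSeconds) override
    {
        Super::Tick(DeltaSeconds);

        // Read the current HMD pose and push the tracking origin forward along
        // the gaze direction, so both eyes render from OffsetCm in front of
        // where the user's eyes actually are.
        FRotator HmdRotation;
        FVector HmdPosition;
        UHeadMountedDisplayFunctionLibrary::GetOrientationAndPosition(HmdRotation, HmdPosition);

        VROrigin->SetRelativeLocation(HmdRotation.Vector() * OffsetCm);
    }

    // Forward offset in Unreal units (cm).
    UPROPERTY(EditAnywhere, Category = "Passthrough")
    float OffsetCm = 6.0f;

    UPROPERTY(VisibleAnywhere)
    TObjectPtr<USceneComponent> VROrigin;

    UPROPERTY(VisibleAnywhere)
    TObjectPtr<UCameraComponent> Camera;
};
```

Offsetting the camera's parent rather than the camera itself is deliberate, since bLockToHmd overwrites the camera's own relative transform with the HMD pose every frame. Part of my question is whether the per-eye view transforms are actually derived from this camera, or recomputed later by the XR plugin from raw tracking data.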
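For (2), my best guess so far is that the least invasive hook is a scene view extension (ISceneViewExtension / FSceneViewExtensionBase): as far as I can tell it gets render-thread callbacks per rendered view (so each eye in stereo) with access to scene color and depth through the render graph, before the XR plugin (e.g. OpenXRHMD, which implements the stereo rendering / render target manager interfaces) submits the frame to the compositor. Here is the shape of what I imagine; the class name is a placeholder, the compute dispatch is left as a stub, the exact virtual signatures may differ between 5.x versions, and I assume the module would need "RenderCore"/"Renderer"/"RHI" dependencies.

```cpp
// PassthroughViewExtension.h -- placeholder name. A scene view extension whose
// render-thread callbacks run once per rendered view (each eye in stereo),
// which looks like a place to insert GPU work before the image reaches the
// XR compositor. Signatures may need adjusting for a specific engine version.
#pragma once

#include "SceneViewExtension.h"

class FPassthroughViewExtension : public FSceneViewExtensionBase
{
public:
    FPassthroughViewExtension(const FAutoRegister& AutoRegister)
        : FSceneViewExtensionBase(AutoRegister)
    {
    }

    // Required ISceneViewExtension overrides; no-ops for this sketch.
    virtual void SetupViewFamily(FSceneViewFamily& InViewFamily) override {}
    virtual void SetupView(FSceneViewFamily& InViewFamily, FSceneView& InView) override {}
    virtual void BeginRenderViewFamily(FSceneViewFamily& InViewFamily) override {}

    // Render thread: called after the scene has been rendered but before
    // post-processing. Inputs exposes scene color/depth as render graph (RDG)
    // textures, so a compute pass could be queued on GraphBuilder here.
    virtual void PrePostProcessPass_RenderThread(
        FRDGBuilder& GraphBuilder,
        const FSceneView& View,
        const FPostProcessingInputs& Inputs) override
    {
        // Placeholder: this is where I would bind the eye's scene color/depth
        // and dispatch my own compute work (ported from the OpenCL kernels).
    }
};
```

Registration, as I understand it, is just creating the extension from some long-lived object (a game instance subsystem or a module's StartupModule) and holding onto the shared pointer:

```cpp
// Hypothetical host object; ViewExtension is a TSharedPtr kept alive for as
// long as the hook should run.
ViewExtension = FSceneViewExtensions::NewExtension<FPassthroughViewExtension>();
```

Does a view extension like this sound like the right layer, or would I need to go deeper (FSceneRenderer or the OpenXR plugin itself) to reach the final per-eye render targets?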

I am a beginner/intermediate-level Unreal 5 user coming from an engineering background (not game dev). I understand the basics of Blueprint and am a skilled C++ programmer, not afraid to get down into the weeds; I just need some tips on where to start.