Mixed Reality Status Update

I’ve had a simple mixed reality solution going for a while, based on the hack I submitted above plus some OBS/XSplit-related tomfoolery to sync the video feeds. It allows real-time compositing on a single machine, but because of limitations in the post process system I can’t replicate what the Unity guys are doing without major engine changes. The render-to-texture, post process, and window control areas of the engine seem disappointingly underdeveloped. Here’s a list of what’s holding me back from equivalent results:

  1. No alpha support for post process blendables. I know the plan is to replace the whole post process system at some point, but having this in the meantime would make render-to-texture support much more usable.
  2. No way to capture multiple render targets with different post process settings from a single USceneCaptureComponent. When the post process runs there is a huge amount of per-frame data available, but only one texture can come out of it. To produce the output the Unity guys are getting, and to do it efficiently, we need at least two. FCompositionGraphCaptureProtocol appears to already solve this, but the USceneCaptureComponent code isn’t using it. (A two-capture workaround is sketched after this list.)
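
In the meantime, the only way I can see to get two differently processed views is to pay for the scene render twice with two stock components. Below is a minimal sketch of that workaround; the actor and property names are mine, not anything in the engine, and each component’s TextureTarget is assumed to be assigned in the editor:

```cpp
// MixedRealityCaptureActor.h -- hypothetical names throughout.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Components/SceneCaptureComponent2D.h"
#include "MixedRealityCaptureActor.generated.h"

UCLASS()
class AMixedRealityCaptureActor : public AActor
{
	GENERATED_BODY()

public:
	AMixedRealityCaptureActor()
	{
		// Capture 1: the fully post-processed frame for compositing.
		ForegroundCapture = CreateDefaultSubobject<USceneCaptureComponent2D>(TEXT("ForegroundCapture"));
		RootComponent = ForegroundCapture;
		ForegroundCapture->CaptureSource = SCS_FinalColorLDR;
		ForegroundCapture->bCaptureEveryFrame = true;

		// Capture 2: raw scene colour, used to derive a matte, because a
		// single component can only write to one TextureTarget.
		MatteCapture = CreateDefaultSubobject<USceneCaptureComponent2D>(TEXT("MatteCapture"));
		MatteCapture->SetupAttachment(RootComponent);
		MatteCapture->CaptureSource = SCS_SceneColorHDR;
		MatteCapture->bCaptureEveryFrame = true;
	}

	// Each component's TextureTarget (a UTextureRenderTarget2D) is assumed
	// to be assigned in the editor.
	UPROPERTY(EditAnywhere, Category = "Mixed Reality")
	USceneCaptureComponent2D* ForegroundCapture;

	UPROPERTY(EditAnywhere, Category = "Mixed Reality")
	USceneCaptureComponent2D* MatteCapture;
};
```

Rendering the scene twice roughly doubles the capture cost, which is exactly why multiple outputs from a single capture pass would matter.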

My plan at this stage is to build a new USceneCaptureComponent based on the code in FCompositionGraphCaptureProtocol and ship it as a plugin. I’d love to know more about what Epic has planned, though; I hate reinventing the wheel.
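
For what it’s worth, this is the rough shape I have in mind for that component. It’s only a speculative skeleton: every name below is a placeholder, and the real work would be driving the post process graph the way FCompositionGraphCaptureProtocol does and resolving each pass into its own target:

```cpp
// CompositionCaptureComponent.h -- speculative skeleton, not working code.
#pragma once

#include "CoreMinimal.h"
#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"
#include "CompositionCaptureComponent.generated.h"

// One named output the capture should write: its own target and, ideally,
// its own post process settings -- the piece USceneCaptureComponent2D lacks.
USTRUCT()
struct FCompositionCaptureOutput
{
	GENERATED_BODY()

	// Pass to pull from (e.g. "SceneColor"), mirroring how
	// FCompositionGraphCaptureProtocol names its buffer visualization sources.
	UPROPERTY(EditAnywhere, Category = "Capture")
	FName SourceName;

	UPROPERTY(EditAnywhere, Category = "Capture")
	UTextureRenderTarget2D* Target = nullptr;
};

UCLASS(meta = (BlueprintSpawnableComponent))
class UCompositionCaptureComponent : public USceneCaptureComponent2D
{
	GENERATED_BODY()

public:
	// Multiple outputs from a single scene render, instead of the one
	// TextureTarget the stock component allows.
	UPROPERTY(EditAnywhere, Category = "Capture")
	TArray<FCompositionCaptureOutput> Outputs;

	virtual void UpdateSceneCaptureContents(FSceneInterface* Scene) override
	{
		// TODO: enqueue a render command that walks the post process graph
		// the way FCompositionGraphCaptureProtocol does and resolves each
		// requested pass into its Target, rather than taking the base
		// class's single-output path.
		Super::UpdateSceneCaptureContents(Scene);
	}
};
```

If the planned post process replacement lands first, most of this becomes throwaway work, which is why I’d like to hear about the roadmap before sinking time into it.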