In case anyone is wondering, with the latest ‘Spectator Screen’ changes it should now be possible to do proper mixed reality capture without the need for a custom plugin. I’m still playing with it and I want to write up something once I test it out, but here’s how I’m doing it so far:
- Have two SceneCapture2D Actors in the scene, each rendering to its own Render Target. One captures just the foreground (whatever is between the headset and the SceneCapture2D), and the other captures the whole scene. You can use something like the green screen shader that was mentioned in this thread for the foreground capture, and you can check out this thread to figure out how to set the transform of the SceneCapture2D Actors to match the 3rd Vive controller's transform. (Since Oculus also announced support for additional tracked controllers, the transform could just as well come from a 3rd Touch.) There's a rough C++ sketch of this rig after the list below.
- Set your spectator mode to ‘Texture Plus Eye’, with the eye taking up one quadrant and the texture taking up the whole view, and have the eye draw over the texture (second sketch below).
- Create a PostProcess Material that combines the two Render Targets into one image. I can post my material nodes once I’m done testing the whole thing, but basically: set the UTiling and VTiling of your TexCoord[0] to 2, sample the two Texture Samples with it, and combine them so that each one fills one half of the resulting image.
- Draw this material to a third render target on Tick and set that as the Spectator Screen texture (third sketch below).
- Either capture the individual quadrants in OBS and composite them so that the real camera footage is sandwiched between your foreground and background, or capture the whole window and do your editing and compositing in other software like After Effects or Premiere. I’d prefer the latter if you don’t have to stream your mixed reality footage, since you get a lot more control over the final result.
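For reference, here’s roughly how the capture rig from the first step could look in C++. This is an untested sketch, not my exact setup: the class name is made up, and the ‘Special_1’ motion source is an assumption that depends on your engine version and how your runtime exposes the 3rd tracked device (older engine versions use the Hand enum on the component instead). You’d also need the HeadMountedDisplay module in your Build.cs.

```cpp
// MixedRealityCaptureRig.h -- hypothetical actor, a sketch only.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "MotionControllerComponent.h"
#include "Components/SceneCaptureComponent2D.h"
#include "MixedRealityCaptureRig.generated.h"

UCLASS()
class AMixedRealityCaptureRig : public AActor
{
    GENERATED_BODY()

public:
    AMixedRealityCaptureRig()
    {
        PrimaryActorTick.bCanEverTick = true;

        // The motion controller component follows the 3rd tracked device.
        // "Special_1" is an assumption: the exact source name depends on
        // your engine version, runtime, and bindings.
        Controller = CreateDefaultSubobject<UMotionControllerComponent>(TEXT("Controller"));
        RootComponent = Controller;
        Controller->MotionSource = FName(TEXT("Special_1"));

        // Foreground capture: only renders what's between the headset and
        // this capture (e.g. via the green screen shader from the thread).
        ForegroundCapture = CreateDefaultSubobject<USceneCaptureComponent2D>(TEXT("ForegroundCapture"));
        ForegroundCapture->SetupAttachment(Controller);

        // Full scene capture, same transform as the foreground capture.
        FullSceneCapture = CreateDefaultSubobject<USceneCaptureComponent2D>(TEXT("FullSceneCapture"));
        FullSceneCapture->SetupAttachment(Controller);

        // Assign each capture's TextureTarget (the two Render Targets)
        // here or in the editor.
    }

    UPROPERTY(VisibleAnywhere) UMotionControllerComponent* Controller;
    UPROPERTY(VisibleAnywhere) USceneCaptureComponent2D* ForegroundCapture;
    UPROPERTY(VisibleAnywhere) USceneCaptureComponent2D* FullSceneCapture;
};
```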
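And the spectator screen setup from the second step, assuming the UHeadMountedDisplayFunctionLibrary that shipped with the Spectator Screen changes (the quadrant values are just an example layout):

```cpp
#include "HeadMountedDisplayFunctionLibrary.h"

void SetupSpectatorScreen()
{
    // Texture Plus Eye: a texture fills the spectator view, and the eye
    // render is composited into one rect of it.
    UHeadMountedDisplayFunctionLibrary::SetSpectatorScreenMode(ESpectatorScreenMode::TexturePlusEye);

    // Rects are in normalized 0-1 screen coordinates.
    UHeadMountedDisplayFunctionLibrary::SetSpectatorScreenModeTexturePlusEyeLayout(
        FVector2D(0.0f, 0.0f),  // eye rect min: top-left quadrant...
        FVector2D(0.5f, 0.5f),  // ...eye rect max
        FVector2D(0.0f, 0.0f),  // texture rect min: whole view...
        FVector2D(1.0f, 1.0f),  // ...texture rect max
        false,                  // bDrawEyeFirst = false, so the eye draws over the texture
        false);                 // bClearBlack
}
```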
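Finally, the per-frame combine from the last two steps. CombineMaterial and CombinedTarget are hypothetical UPROPERTY members on the same rig actor (a UMaterialInterface* for the post-process material that tiles TexCoord[0] by 2 and gives each render target one half of the image, and a UTextureRenderTarget2D* to hold the combined result):

```cpp
#include "Kismet/KismetRenderingLibrary.h"
#include "HeadMountedDisplayFunctionLibrary.h"

void AMixedRealityCaptureRig::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    // Render the combine material (foreground in one half, full scene in
    // the other) into a third render target...
    UKismetRenderingLibrary::DrawMaterialToRenderTarget(this, CombinedTarget, CombineMaterial);

    // ...and hand that to the spectator screen as the texture that the
    // eye view gets drawn over.
    UHeadMountedDisplayFunctionLibrary::SetSpectatorScreenTexture(CombinedTarget);
}
```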
That should be it! I don’t know if this is the most optimized way to do it, but in a simple test on my Oculus, with both scene captures running every frame, it was comfortably holding 90 fps. With the plugin that was mentioned in this thread over the past couple of months, by contrast, I had lower-res render targets with scene captures that weren’t even running every frame, and most of the time I was at 45 fps on the Vive.
I’ll try to test it out properly and write something up within the next few weeks if that’d be helpful. Below is an example screenshot I got using the steps above.
