Q about camera for green-screen mixed reality recording


I’m doing a 2 day hackathon (so fast replies super appreciated) and we have a green-screen set up ready to use.

We’re wondering if there’s a way to set up a tracked third controller to act as the camera that outputs to the desktop monitor, or otherwise do the mixed-reality camera as cheaply (frame-wise) as possible — the same as what’s seen in the mixed-reality video from Valve.

Does anyone have a walkthrough or starter tips on how to begin with this?

Thank you all so much for ANY help! :smiley:

I think the idea is that you mount a motion controller on a real camera and walk it around. Then you’d put an actor with a camera component at that motion controller’s position and feed the camera output to some compositing software that handles the green-screen part. The primary camera is still the game camera; the physical camera films the player, and the compositing software keys out the green background to place them in the game world. I’ve never done any of this stuff, so there’s a lot of hand-waving going on here, but that’s kinda what hackathons are good for :slight_smile: I’m curious to know how things work out, so let us know what you end up with.
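The keying step described above can be sketched in plain Python just to make the idea concrete — the function names and the simple “green dominance” threshold are my own assumptions, not how any real compositor works:

```python
# Minimal chroma-key sketch: composite a real-camera frame (shot against a
# green screen) over the game's render. Pixels are (R, G, B) tuples in 0-255;
# a pixel counts as "green screen" when green strongly dominates red and blue.

GREEN_DOMINANCE = 60  # hypothetical threshold; real keyers are far smarter

def is_green(pixel):
    r, g, b = pixel
    return g - max(r, b) > GREEN_DOMINANCE

def composite(game_frame, camera_frame):
    """Per pixel: keep the real-camera pixel unless it's green screen,
    in which case the game render shows through."""
    return [
        game_px if is_green(cam_px) else cam_px
        for game_px, cam_px in zip(game_frame, camera_frame)
    ]

# Tiny 2-pixel "frames": one player pixel, one green-screen pixel.
game = [(10, 20, 200), (10, 20, 200)]      # blue game world
camera = [(210, 160, 120), (20, 230, 30)]  # player pixel, green pixel
print(composite(game, camera))  # player kept, green replaced by game world
```

A real pipeline would do this on the GPU (or hand it to OBS or similar), but the per-pixel decision is the same shape.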

The way they do it in Unity is they attach a virtual camera to a third controller, and that controller is mounted on a real camera. Both the virtual camera and the real camera record. The virtual camera renders two separate layers (foreground objects and background objects). As I read it, they parent a plane along the vertical axis to the Vive headset, then render whatever is in front of the headset (plane) to one layer and whatever is behind it to another, and I assume they record those separately. Hope this helps.
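The foreground/background split described above boils down to a side-of-plane test. Here’s a small Python sketch of that idea — the vector names and the choice of plane normal (pointing from the headset toward the external camera) are my own assumptions about how the split would be set up:

```python
# Sketch of the foreground/background layer split: a plane is anchored at the
# headset and faces the external (mixed-reality) camera. Objects on the
# camera's side of the plane go to the "foreground" layer (drawn over the
# player); everything else goes to the "background" layer.

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def classify(obj_pos, headset_pos, camera_pos):
    normal = sub(camera_pos, headset_pos)  # plane normal toward the camera
    side = dot(sub(obj_pos, headset_pos), normal)
    return "foreground" if side > 0 else "background"

headset = (0.0, 0.0, 0.0)
camera = (0.0, 0.0, -3.0)  # external camera 3 m in front of the player

print(classify((0.0, 0.0, -1.0), headset, camera))  # between camera and player
print(classify((0.0, 0.0, 2.0), headset, camera))   # behind the player
```

In-engine you’d realize this with two render passes (or camera layers) instead of classifying objects by hand, but the geometry is the same.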

We’re on the same page here, OP. Slayemin is on the right track, but I think that approach is contingent on being able to export the second virtual camera’s view (the one aligned with the real camera) separately from the player’s headset view. Is it possible to render video from one camera while a second camera is displayed in the headset?

Lasyavez brings up a good point about the difference between foreground and background rendering. I guess we’ll cross that bridge when we come to it - let’s figure out the basics first :slight_smile: