Is it possible to use multiple or stereoscopic real-life cameras, with a green screen, to capture a subject as a 3D model in UE and render it in 3D?
Clearly it is possible to shoot a person in front of a green screen with a moving camera and, as long as you have the camera's tracking data, render the scene in UE with a virtual camera travelling the same path.
But is it possible to use multiple cameras (static or moving) or a stereoscopic pair, and have UE build a 3D model of the subject from the differences between the camera views? Various apps do something similar with depth-capable iPhones, for example.
If you had a 3D capture of a chroma-keyed subject, you could then apply different camera moves in post, and you could light it correctly for the virtual scene.
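For what it's worth, the core technique behind what I'm describing — recovering depth from the horizontal offset of the same subject between two camera views — is stereo disparity estimation. Here's a toy numpy sketch of the idea (brute-force block matching on a synthetic stereo pair; real pipelines use calibrated, rectified cameras and something like OpenCV's StereoSGBM, and the image sizes and shift here are made up for illustration):

```python
import numpy as np

def block_match_disparity(left, right, block=5, max_disp=8):
    """For each pixel in the left view, find the horizontal shift into the
    right view with the lowest sum-of-absolute-differences. Depth is then
    proportional to 1/disparity (given baseline and focal length)."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic stereo pair: a textured square shifted 4 px between the views,
# mimicking a near object seen by two horizontally offset cameras.
rng = np.random.default_rng(0)
tex = rng.random((10, 10)).astype(np.float32)
right = np.zeros((40, 60), dtype=np.float32)
right[15:25, 20:30] = tex
left = np.zeros_like(right)
left[15:25, 24:34] = tex  # same texture, 4 px to the right in the left view

disp = block_match_disparity(left, right)
print(disp[19, 28])  # disparity at the square's centre -> 4.0
```

The recovered disparity map is only a depth image from one viewpoint, though — turning multiple such depth maps (or multi-view photos) into a full mesh is the photogrammetry/volumetric-capture step, which is what I'm wondering whether UE can do natively.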