Hi!
Just want to share my idea for a very easy way to convert your sequences into 3D stereoscopic videos for VR / Apple Vision / 3D TVs.
All it takes for a video to become 3D is to render the sequence twice: with every camera shifted 3.15 cm to the left (-3.15 on the local Y axis, the “green” one) in the first pass, and 3.15 cm to the right (+3.15) in the second. That puts the two passes 6.3 cm apart, which is roughly the average human interpupillary distance.
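(If you like seeing the numbers spelled out, here's a tiny Python sketch of the offsets. The constant names are mine; the only Unreal fact it relies on is that 1 Unreal unit = 1 cm by default.)

```python
# Stereo offsets derived from the average human interpupillary distance (IPD).
# Unreal works in centimeters by default, so no unit conversion is needed.
IPD_CM = 6.3                 # average human IPD, ~6.3 cm
HALF_IPD_CM = IPD_CM / 2.0   # 3.15 cm per eye

# Relative offsets along the camera's local Y axis (the "green" one):
LEFT_EYE_OFFSET  = (0.0, -HALF_IPD_CM, 0.0)  # first pass
RIGHT_EYE_OFFSET = (0.0, +HALF_IPD_CM, 0.0)  # second pass
```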
But instead of creating a Blueprint camera rig with embedded cameras, setting camera parameters inside Blueprints, etc., we can simply render our existing sequences without ANY modifications to camera tracks or animations.
All we need is to shift the CAMERA_COMPONENT inside every camera used in the sequence. This component always sits at (0, 0, 0) relative to its actor, but who said it HAS to? So here’s the full process, step by step:
- We duplicate our sequence and call it “something_L”.
- In every camera that is used, we shift the CAMERA_COMPONENT (not the camera actor itself) by -3.15 cm on Y, using the Outliner, not the Sequencer editor (see the Python sketch after this list if you'd rather script it).
- Render to a folder called “L”.
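If you have many cameras, shifting each component by hand gets tedious. Here's a minimal editor-scripting sketch of the same idea, assuming the Python Editor Script Plugin is enabled; I haven't hardened it, and note it only touches cameras that actually live in the level (spawnable cameras that exist only inside the sequence would need Sequencer scripting instead). Flip the sign of the offset for the right-eye pass.

```python
import unreal

# Half of the ~6.3 cm interpupillary distance; Unreal units are cm by default.
# Use -3.15 for the left-eye ("_L") pass and +3.15 for the right-eye pass.
EYE_OFFSET_Y = -3.15

for actor in unreal.EditorLevelLibrary.get_all_level_actors():
    if not isinstance(actor, unreal.CineCameraActor):
        continue
    # Shift the camera COMPONENT, not the actor, so the camera's transform
    # track in the sequence stays untouched.
    component = actor.get_component_by_class(unreal.CineCameraComponent)
    loc = component.relative_location
    component.set_relative_location(
        unreal.Vector(loc.x, loc.y + EYE_OFFSET_Y, loc.z),
        sweep=False,
        teleport=True,
    )
    unreal.log("Shifted {} by {} cm on local Y".format(actor.get_name(), EYE_OFFSET_Y))
```

Run it from the editor's Python console on the “_L” duplicate, render, then flip the sign and repeat for the right eye.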
And that’s it. For the right eye, we simply repeat these steps, but shift the CAMERA_COMPONENT by +3.15 cm and render again (to a new folder called R, of course). Using this method, one can convert their existing animations to 3D with practically no effort at all. Go 3D!

After the render, you can use DaVinci Resolve to sync & convert your videos into a 3D stereoscopic format; I recommend Full SBS (ideal for VR & natively supported by YouTube). There are tutorials online on how to inject the stereo metadata & PAR parameters into an MKV container for YouTube.
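By the way, if all you need is the Full SBS packing step (and your L & R folders contain matching image sequences), ffmpeg's hstack filter can do it without Resolve. A sketch, driven from Python; the frame-name pattern, frame rate, and encoder settings are assumptions you'll want to adjust to your own render:

```python
# Pack the L and R image sequences into a Full SBS video with ffmpeg.
# Assumptions: frames named 0001.png, 0002.png, ... in folders L/ and R/,
# 30 fps, and ffmpeg available on PATH. hstack puts the left-eye frame on
# the left half of the output and the right-eye frame on the right half.
import subprocess

subprocess.run([
    "ffmpeg",
    "-framerate", "30", "-i", "L/%04d.png",   # left-eye sequence
    "-framerate", "30", "-i", "R/%04d.png",   # right-eye sequence
    "-filter_complex", "[0:v][1:v]hstack=inputs=2[v]",
    "-map", "[v]",
    "-c:v", "libx264", "-crf", "16", "-pix_fmt", "yuv420p",
    "full_sbs.mp4",
], check=True)
```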
Attention: for this to work with camera shake (its random movements are not repeated in the second render), you need to bake the camera shake out with the Bake Transform tool in Sequencer. Just select your camera, choose “Bake Transform” (with “All frames” selected), and a new transform track will appear with the camera shake baked in. After that, DELETE the old camera shake track and disable the old transform track. Naturally, you should do this only on a duplicate copy of your sequence, and only after all edits are done (no more editing camera movement after that).
Also, if there are random movements like trees bending in the wind (randomly animated), Niagara smoke clouds, or randomly generated debris floating in the water, then for this to work you need to duplicate your camera inside the sequence, shift the duplicates by ±3.15 cm, and render both eyes simultaneously using the “render all cameras” option in the level sequence renderer. The key here is to force both the left & right cameras to capture the same world state in a single pass; it is very demanding on hardware, and 4K / 8K renders, I’m afraid, would require an RTX 4090 or something like that.
Here’s an example 8K 3D stereoscopic video
WBR - Draco.