So after finishing my VR project I wanted to create a 360 video of it to show to people who don't have an Oculus Rift or HTC Vive headset. Using the Stereo Panoramic Movie Capture plugin by Kite & Lightning I've put together a great PNG sequence for the left and right eye, run it through After Effects, and everything works perfectly. However, audio has become a major problem. I exported the audio as a stereo .wav by recording internally, but after much research I've been discovering ambisonic audio and all its complexities. I've attempted to recreate the audio from the VR experience, but it's impossible to make the sound sources dynamic and movable, which renders that method useless. What I'm looking for is a way of getting the audio from the engine straight into a 4-channel ambisonic track. If anyone else has faced this issue, please share how you overcame it. Thanks!
Unfortunately not; I ended up just using the stereo track. My best guess would be that you would need a piece of hardware capable of recording four tracks (i.e. a mixing desk), internally record the audio onto an AmbiX track using software such as Adobe Audition, and then use metadata injection to attach the AmbiX ACN/SN3D file to the video.
There is still no direct way to do this, but I have come up with a rather simple way of capturing audio by running two sequencer passes: either through the Render Media output twice with audio (not the best way, since you have to render out the video first), or by creating a Blueprint that records the audio of the sequence twice.
You want to make sure your PC is set up for 5.1-channel output, even if you don't have center, LFE, or rear speakers. This is necessary for the audio record feature to create a 5.1 (6-channel) WAV file.
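As a quick sanity check, you can confirm the recorded file really came out with six channels before bringing it into your DAW. A minimal sketch using only Python's standard library (the file path is whatever your capture produced):

```python
import wave

def is_5_1_wav(path):
    """Return True if the WAV file at `path` has 6 channels
    (FL, FR, C, LFE, BL, BR).

    If this returns False, Windows was probably still set to
    stereo output when the sequence was recorded.
    """
    with wave.open(path, "rb") as w:
        return w.getnchannels() == 6
```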
Make sure you add -audiomixer to the command line of your editor .exe, or to the shortcut for your .uproject file.
Set your frame timing to a fixed frame rate ('Use Fixed Frame Rate') in the Project Settings before capturing video or audio.
Capture the audio either through Render Media, the Movie Render Queue, or a Blueprint that starts the sequencer and then calls Start Recording Output/Finish Recording Output. This will capture Front Left/Right and Back Left/Right at 30 degrees off center.
Rotate 90 degrees on the X (roll) axis and repeat the above. This will capture the Front Up/Down and Back Up/Down.
If you need to do a voice-over, or require a non-rotating audio source, create a new Sound Class with 'Center Channel Only' enabled, then assign this class to your audio source.
Now bring these into your DAW or NLE and break the channels up. The order of the 5.1 channels is Front Left, Front Right, Center, LFE, Back Left, Back Right. Use your 1st- or 2nd-order ambisonic encoder to place the first pass's channels at -30, 30, -150, 150 degrees azimuth and the second pass's channels at -30, 30, -150, 150 degrees elevation. The naming may differ between tools, but azimuth is left/right and elevation is up/down. Take your center channel and assign it only to the first output channel, which is 'W', the omni channel. This audio will sound as though it is coming from inside your head, so make sure this is what you want. The LFE channel in both sequences can be deleted.
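For reference, the math your encoder applies when you place a mono channel at a given azimuth/elevation is simple at 1st order. Here is a hedged sketch of the standard first-order AmbiX gains (ACN channel order W, Y, Z, X; SN3D normalization); sign conventions for azimuth vary between tools, so check your encoder's documentation:

```python
import math

def ambix_first_order_gains(azimuth_deg, elevation_deg):
    """First-order AmbiX panning gains for a mono source.

    Channel order is ACN (W, Y, Z, X) with SN3D normalization.
    Here positive azimuth is taken as 'to the left' and positive
    elevation as 'up'; some encoders flip these signs.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    return {
        "W": 1.0,                          # omnidirectional
        "Y": math.sin(az) * math.cos(el),  # left/right
        "Z": math.sin(el),                 # up/down
        "X": math.cos(az) * math.cos(el),  # front/back
    }

# The center-channel voice-over is the degenerate case: route it to W
# alone, i.e. gains of {"W": 1.0, "Y": 0.0, "Z": 0.0, "X": 0.0}.
```

For example, a channel placed at 30 degrees azimuth and 0 degrees elevation gets W = 1.0, Y = 0.5, Z = 0.0, X ≈ 0.87.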
Note: timing is based on frames, while audio syncing is much more granular, so there may be some timing shift that needs to be corrected in your DAW or NLE. Do this at the finest increment possible, since even phase delays between the two sequences can cause problems, especially with voice and other mono sources.
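If you'd rather measure that shift precisely than nudge it by ear, cross-correlating one channel from each pass gives you the offset in samples. A minimal sketch, assuming both passes are loaded as mono arrays at the same sample rate (NumPy is used for the dot products):

```python
import numpy as np

def find_offset(ref, other, max_lag=4800):
    """Estimate the sample offset of `other` relative to `ref`
    by brute-force cross-correlation within +/- max_lag samples.

    A positive result means `other` lags `ref` by that many
    samples, so slide it earlier to align the two passes.
    """
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = ref[:len(other) - lag], other[lag:]
        else:
            a, b = ref[-lag:], other[:len(ref) + lag]
        n = min(len(a), len(b))
        score = float(np.dot(a[:n], b[:n]))
        if score > best_score:
            best_score, best_lag = score, lag
    return best_lag
```

At 48 kHz the default max_lag of 4800 covers a +/- 100 ms search window, which is plenty for a few frames of drift.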
I plan to do a tutorial soon on my YouTube channel 'All Things 3D', as well as to create a plugin once I find out where the new ambisonic encoding is done in 4.26.