Hi @ ,
I have done a lot of scouring for information about encoding Ambisonics from Unreal and have come up with nothing other than this post and a few others asking the question with no response. As far as it being important to Unreal’s future, given the effort Unreal is putting into becoming a tool for video and filmmaking, it would be great if there were much better capability in this area. For example, it would be great to have simultaneous listener positions that you could import into a submix, apply mathematical functions to (standard math, trig, even polar-to-rectangular and vice versa), and then output those channels as a finished first- or second-order Ambisonic B-format (4- or 9-channel) WAV file for use in another DAW.
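For reference, the per-source math such a feature would ultimately boil down to is the standard first-order AmbiX (ACN/SN3D) encode, which is essentially the polar-to-rectangular conversion I mean. A rough Python sketch, purely for illustration (the function name and signature are my own, not anything that exists in Unreal):

```python
import numpy as np

def encode_first_order_ambix(mono, azimuth_deg, elevation_deg):
    # Hypothetical per-source encode: pan a mono signal placed at the given
    # azimuth/elevation into the four first-order AmbiX (ACN/SN3D) channels.
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = mono                               # omni
    y = mono * np.sin(az) * np.cos(el)     # left/right figure-eight
    z = mono * np.sin(el)                  # up/down figure-eight
    x = mono * np.cos(az) * np.cos(el)     # front/back figure-eight
    return np.stack([w, y, z, x], axis=1)  # AmbiX channel order: W, Y, Z, X
```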
I have been experimenting with three different listener rotations based on the camera: 0:0:0 degrees for Left/Right, 90:0:0 for Front/Back, and 0:90:0 for Up/Down, and rendering those three listener rotations to three stereo WAV files in the Sequencer. I then sum the three stereo captures and divide by three to create the omni (W) channel. I then invert each right channel and sum it with its left channel to create the three figure-eight outputs for Left/Right, Front/Back, and Up/Down, giving the four channels of a basic first-order AmbiX output (a sketch of the summation follows below). Currently I am doing this in Adobe Premiere, but pretty much any NLE or DAW with at least four output channels could be used.
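To make that summation concrete, here is a minimal Python sketch of the mixdown described above, assuming the three stereo captures were exported at the same sample rate and length. The file names are hypothetical placeholders, and no AmbiX metadata tag is written, just a plain four-channel WAV in AmbiX (ACN) channel order:

```python
import numpy as np
import soundfile as sf

# Three stereo captures from the Sequencer, one per listener rotation
lr, sr = sf.read("leftright.wav")   # camera rotation 0:0:0
fb, _  = sf.read("frontback.wav")   # camera rotation 90:0:0
ud, _  = sf.read("updown.wav")      # camera rotation 0:90:0

# W (omni): sum all three stereo captures and divide by three
w = (lr.sum(axis=1) + fb.sum(axis=1) + ud.sum(axis=1)) / 3.0

# Figure-eight channels: invert each right channel and sum with its left
y = lr[:, 0] - lr[:, 1]   # left/right
x = fb[:, 0] - fb[:, 1]   # front/back
z = ud[:, 0] - ud[:, 1]   # up/down

# First-order AmbiX uses ACN channel order: W, Y, Z, X
ambix = np.stack([w, y, z, x], axis=1)
sf.write("ambix_1st_order.wav", ambix, sr, subtype="FLOAT")
```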
Why is this important? Many here who are using Unreal to create VR content, especially now that 4.26 offers native spatial audio, have experienced how realistic it sounds inside the VR experience, but that realism is lost on the audio side when exporting through the Sequencer. That said, Unreal has not really done much to push 360 export either, but luckily others have come up with solutions using the cubemap capture tool, and by adding a few of my own tricks and source modifications to a custom 4.26 build I can export 16K x 8K 360 video in 12-bit HDR with RTX enabled. Sadly the audio does not live up to that video, and I have to build the spatial sound field in Adobe Premiere with help from a few Ambisonic VSTs. However, I am hoping you can point me in the right direction: right now I capture one listener-point WAV and just repeat the process two more times, so if it could be done in one pass, that would be great! In fact, I don’t mind if the process works the way it does now in the Sequencer to create a single stereo WAV, as long as it can produce four independent WAV files, or even better, a grouped four-channel WAV with the AmbiX metadata tag.
Here is a link to a test video I created for a VR experience titled “Hot Cocoa in VR” using the above technique: https://youtu.be/JtG6Gpiprkg
PS: I do believe that even if you are not watching these videos in a VR headset, just on your phone, having the video AND audio rotate as you move your phone is a far better experience. And frankly, not all experiences created with Unreal require user input or 6DOF, as I have found in creating relaxation/calming experiences for my wife’s psychotherapy practice. The one above was sent out during the Christmas holidays, and I entered it into the MegaJam but sadly ran into an upload snag that caused me to miss the deadline by 10 minutes. I was never interested in the prizes, so I am thankful that it is still listed in the MegaJam entries for 2020. In any case, I am in no way criticizing the work you guys have done, and I understand you have to put effort where a new feature will have the most impact. In fact, those who say they will just move over to Unity don’t realize how many features are created only to die on the vine or break in later updates; Unity has its frustrations too. I personally just like the look of the shaders in Unreal, and frankly, out of the box, it gives far more control without adding a new feature that may or may not work in VR.
