I’m part of a team working on an immersive installation using Unreal 4.19. The visuals will be projected onto walls in a gallery space, and we are hoping to use a surround speaker setup. We are trying to figure out a way to get the audio from Unreal to output in an Ambisonic format, so that we can decode the speaker signals later. Is there a way to output anything other than stereo/headphone audio?
Currently we support importing first-order Ambisonics (FOA) files, and via the Resonance or Oculus plugins we also support decoding Ambisonics to binaural.
We do not currently support arbitrary speaker decoding or native encoding of spatial sound sources to Ambisonics; however, this is on our radar and something we're interested in eventually supporting.
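For context, "encoding to Ambisonics" at first order just means projecting each mono source onto four spherical-harmonic channels based on its direction. This is standard Ambisonics math rather than anything Unreal-specific; a minimal sketch of an AmbiX-style (ACN channel order, SN3D normalization) encode, with a hypothetical `EncodeFoa` helper:

```cpp
#include <array>
#include <cmath>

// Encode one mono sample into first-order AmbiX (ACN order, SN3D
// normalization) given the source direction in radians.
// Azimuth 0 is straight ahead, positive toward the left;
// elevation 0 is the horizontal plane, positive upward.
std::array<float, 4> EncodeFoa(float Sample, float AzimuthRad, float ElevationRad)
{
    const float CosEl = std::cos(ElevationRad);
    return {
        Sample,                                 // W (ACN 0): omnidirectional
        Sample * std::sin(AzimuthRad) * CosEl,  // Y (ACN 1): left-right
        Sample * std::sin(ElevationRad),        // Z (ACN 2): up-down
        Sample * std::cos(AzimuthRad) * CosEl   // X (ACN 3): front-back
    };
}
```

A source directly in front (azimuth 0, elevation 0) lands entirely in W and X; the resulting four-channel stream can then be decoded to any speaker layout downstream.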
For traditional surround, the new Unreal Audio Engine supports up to conventional 7.1, so if you're just doing a regular (planar) surround speaker setup, this will yield better results than first-order Ambisonics.
I’m hoping to use Unreal Engine to create 360 videos to upload to YouTube, and was wondering if there had been any further developments with the potential ambisonics export feature?
New in 4.25 are Submix Endpoints, including Soundfield/Ambisonics Endpoints. Submix Endpoints can be thought of as exit points which connect to a device. I'm not sure what the state of outputting audio via these endpoints is, because I haven't personally tested it yet, but it could be an interesting opportunity to render Ambisonics-format audio out of the Engine.
I know this is an old post, but did you guys ever manage to make a guide about this? I believe Soundfield Submixes and Endpoints would help me with a very specific challenge (outputting Ambisonics to Reaper and then decoding it to a custom speaker setup using the IEM plugins), but I can't find any guide or helpful documentation that explains how to actually use them.
Thanks in advance.