Is there a maximum number of channels for a SoundfieldEndpointSubmix?

Hi,
I would like to implement a custom SoundfieldEndpoint. When I write my own SoundFieldFormat, is there a maximum number of channels that I will be able to output to the audio interface (like 8 (7.1) for a regular AudioEndpoint)?

I'd love to know more about your custom SoundfieldEndpoint. Have you made any progress? I have been looking at the source code, and UnrealAudioTypes.h shows these values:

namespace ESpeaker
{
    /** Values that represent speaker types. */
    enum Type
    {
        FRONT_LEFT,
        FRONT_RIGHT,
        FRONT_CENTER,
        LOW_FREQUENCY,
        BACK_LEFT,
        BACK_RIGHT,
        FRONT_LEFT_OF_CENTER,
        FRONT_RIGHT_OF_CENTER,
        BACK_CENTER,
        SIDE_LEFT,
        SIDE_RIGHT,
        TOP_CENTER,
        TOP_FRONT_LEFT,
        TOP_FRONT_CENTER,
        TOP_FRONT_RIGHT,
        TOP_BACK_LEFT,
        TOP_BACK_CENTER,
        TOP_BACK_RIGHT,
        UNUSED,
        SPEAKER_TYPE_COUNT
    };
}

I would love to be able to push a 17-channel WAV file, or even separate WAV files for each channel, into a 2nd- or 3rd-order Ambisonic encoder. Currently I obtain nine channels by running two Sequencer passes with the camera/listener rotated 90 degrees on its X (roll) axis. This gives me four channels of up/down when using a 5.1 format. Of course I could have enabled 7.1 and obtained four more direct channels to help improve the dead spots, but for now my goal was to bring spatial audio from Unreal into Adobe Premiere, which only supports 5.1 (6 channels). I plan to switch to Reaper and DaVinci Resolve Studio soon. The problem with doing it this way is the slight timing deviation between passes: I have to realign the two WAV files (each with 6 audio tracks) manually to make sure phasing is eliminated for omni sources that are not handled discretely by assigning their audio files to a new custom ‘Center Channel’ class.

So, what I am doing at the moment has gotten a little complicated. I’ll try to break it down:

I spatialize audio using my own spatializer plugin. This plugin attaches a few extra samples containing the source location to the current buffer whenever its audio callback is called. The location is literally just additional samples appended to the sound buffer (quite hacky), and it is read back later by the SoundfieldEncoderStream.
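To sketch the idea in plain C++ (the container and the AppendPosition/ExtractPosition names here are just placeholders, not the actual plugin code):

#include <vector>

struct FPosition { float X = 0.f, Y = 0.f, Z = 0.f; };

// Producer side (the spatializer plugin callback): append the emitter position
// as three extra samples after the audio.
void AppendPosition(std::vector<float>& InterleavedBuffer, const FPosition& Pos)
{
    InterleavedBuffer.push_back(Pos.X);
    InterleavedBuffer.push_back(Pos.Y);
    InterleavedBuffer.push_back(Pos.Z);
}

// Consumer side (the SoundfieldEncoderStream): strip the trailing samples and
// recover the position before encoding. Assumes the buffer still carries them.
FPosition ExtractPosition(std::vector<float>& InterleavedBuffer)
{
    const size_t N = InterleavedBuffer.size();
    FPosition Pos{ InterleavedBuffer[N - 3], InterleavedBuffer[N - 2], InterleavedBuffer[N - 1] };
    InterleavedBuffer.resize(N - 3);
    return Pos;
}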

Then I send this to a custom SoundfieldEndpoint. This means I had to implement a custom Soundfield Format, which includes implementing ISoundfieldEncoderStream, ISoundfieldEncodingSettingsProxy, ISoundfieldEndpointSettingsProxy, ISoundfieldMixerStream, ISoundfieldEndpoint, ISoundfieldTranscodeStream and ISoundfieldAudioPacket.
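To give an idea of the scale of each piece, here is a minimal sketch of a packet class, assuming the Serialize/Duplicate/Reset pure virtuals declared in ISoundfieldFormat.h (check them against your engine version); the buffer member and channel count are placeholders:

#include "ISoundfieldFormat.h"

class FMyAmbisonicsPacket : public ISoundfieldAudioPacket
{
public:
    // Interleaved higher-order ambisonics samples; 9 channels would be 2nd-order AmbiX.
    TArray<float> InterleavedBuffer;
    int32 NumChannels = 9;

    virtual void Serialize(FArchive& Ar) override
    {
        Ar << NumChannels;
        Ar << InterleavedBuffer;
    }

    virtual TUniquePtr<ISoundfieldAudioPacket> Duplicate() const override
    {
        return TUniquePtr<ISoundfieldAudioPacket>(new FMyAmbisonicsPacket(*this));
    }

    virtual void Reset() override
    {
        if (InterleavedBuffer.Num() > 0)
        {
            FMemory::Memzero(InterleavedBuffer.GetData(), InterleavedBuffer.Num() * sizeof(float));
        }
    }
};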
When you register your own SoundfieldEndpoint with your own SoundfieldPacket, you can do whatever you want with it. The problem is that you have to do it yourself. I use RtAudio (already available in the UE source) to output as many channels as I want. I output all of this using VBAP, but you could use different methods.
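The RtAudio side looks roughly like this; the channel count, sample rate and the silence-filling callback are placeholders for whatever your endpoint actually feeds it, and error handling (exceptions vs. return codes) differs between RtAudio versions:

#include "RtAudio.h"
#include <cstring>

static constexpr unsigned int NumOutputChannels = 16; // as many as the interface exposes

static int AudioCallback(void* OutputBuffer, void* /*InputBuffer*/, unsigned int NumFrames,
                         double /*StreamTime*/, RtAudioStreamStatus /*Status*/, void* /*UserData*/)
{
    float* Out = static_cast<float*>(OutputBuffer);
    // Here you would pull NumFrames * NumOutputChannels interleaved samples from
    // whatever queue your SoundfieldEndpoint fills; this sketch just writes silence.
    std::memset(Out, 0, NumFrames * NumOutputChannels * sizeof(float));
    return 0;
}

void StartOutput()
{
    static RtAudio Dac; // static so the stream outlives this function in the sketch

    RtAudio::StreamParameters Params;
    Params.deviceId = Dac.getDefaultOutputDevice();
    Params.nChannels = NumOutputChannels;
    Params.firstChannel = 0;

    unsigned int BufferFrames = 512;
    Dac.openStream(&Params, nullptr, RTAUDIO_FLOAT32, 48000, &BufferFrames, &AudioCallback, nullptr);
    Dac.startStream();
}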

I don’t think this is a viable method if one just wants to output some audio on many speakers. It’s quite complicated and you have to keep two audio callbacks (SoundfieldEndpoint and RtAudio) in sync.
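By “keeping them in sync” I mean something along these lines: a single-producer/single-consumer ring buffer between the SoundfieldEndpoint callback (producer) and the RtAudio callback (consumer). This is only an illustration, not my actual code; a real version needs underrun/overrun handling and a capacity matched to both callback sizes.

#include <atomic>
#include <cstddef>
#include <vector>

class FSampleRingBuffer
{
public:
    explicit FSampleRingBuffer(size_t InCapacity)
        : Buffer(InCapacity), ReadPos(0), WritePos(0) {}

    // Called from the SoundfieldEndpoint callback (producer).
    size_t Push(const float* Samples, size_t Count)
    {
        size_t Written = 0;
        const size_t Read = ReadPos.load(std::memory_order_acquire);
        size_t Write = WritePos.load(std::memory_order_relaxed);
        while (Written < Count && (Write + 1) % Buffer.size() != Read)
        {
            Buffer[Write] = Samples[Written++];
            Write = (Write + 1) % Buffer.size();
        }
        WritePos.store(Write, std::memory_order_release);
        return Written; // samples actually queued
    }

    // Called from the RtAudio callback (consumer).
    size_t Pop(float* OutSamples, size_t Count)
    {
        size_t ReadCount = 0;
        const size_t Write = WritePos.load(std::memory_order_acquire);
        size_t Read = ReadPos.load(std::memory_order_relaxed);
        while (ReadCount < Count && Read != Write)
        {
            OutSamples[ReadCount++] = Buffer[Read];
            Read = (Read + 1) % Buffer.size();
        }
        ReadPos.store(Read, std::memory_order_release);
        return ReadCount; // samples actually delivered
    }

private:
    std::vector<float> Buffer;
    std::atomic<size_t> ReadPos;
    std::atomic<size_t> WritePos;
};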

Thanks for the reply. So is your output now an AmbiX-formatted output of four channels for 1st-order B-format, or even a 2nd-order, nine-channel one?