Soundfield Submix Endpoint: What is the clock basis for the incoming data streams?

Hi everyone,

I am working on a soundfield submix endpoint that allows higher-order (N = 4) ambisonics to be played via ASIO sound cards.

I think I have understood most of the functionality involved. However, one aspect is still not clear to me: the audio data enters the soundfield submix endpoint via the encoder, the transcoder, or the mixer. Which clock drives the incoming data? I assumed I could control the clock via the callbacks, but neither in polling mode nor in callback mode does the engine seem to wait for any of the functions I implemented. Is it the default audio device in Unreal that triggers new buffers to arrive in my endpoint? If so, it may be necessary to synchronize the default audio device and the ASIO device :frowning:

Thank you for any assistance in advance and best regards


Hi there, I am doing something similar, but I output audio using VBAP instead of HOA.
I write my samples to a RingBuffer in the SoundfieldEndpoint’s OnAudioCallback. I then read from that RingBuffer in an RtAudio callback (RtAudio is already available in the UE source code). This means I have to keep the two callbacks in sync, which is not ideal, but it works.
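The pattern described above (one audio callback writes, the other reads) can be sketched as a lock-free single-producer/single-consumer ring buffer. This is a minimal illustrative sketch, not the poster's actual code; class and method names are made up here.

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

// Minimal single-producer/single-consumer ring buffer for float samples.
// One audio callback (the producer) pushes, the other (the consumer) pops;
// atomic read/write positions coordinate the two threads without locks.
class SpscRingBuffer
{
public:
    explicit SpscRingBuffer(size_t CapacityInSamples)
        : Buffer(CapacityInSamples + 1), ReadPos(0), WritePos(0) {}

    // Returns false (and writes nothing) if there is not enough free space.
    bool Push(const float* Samples, size_t NumSamples)
    {
        const size_t Read  = ReadPos.load(std::memory_order_acquire);
        size_t Pos = WritePos.load(std::memory_order_relaxed);
        if (FreeSpace(Read, Pos) < NumSamples)
            return false;
        for (size_t i = 0; i < NumSamples; ++i)
        {
            Buffer[Pos] = Samples[i];
            Pos = (Pos + 1) % Buffer.size();
        }
        WritePos.store(Pos, std::memory_order_release);
        return true;
    }

    // Returns false (and reads nothing) if fewer than NumSamples are queued.
    bool Pop(float* OutSamples, size_t NumSamples)
    {
        const size_t Write = WritePos.load(std::memory_order_acquire);
        size_t Pos = ReadPos.load(std::memory_order_relaxed);
        if (Queued(Pos, Write) < NumSamples)
            return false;
        for (size_t i = 0; i < NumSamples; ++i)
        {
            OutSamples[i] = Buffer[Pos];
            Pos = (Pos + 1) % Buffer.size();
        }
        ReadPos.store(Pos, std::memory_order_release);
        return true;
    }

private:
    size_t Queued(size_t Read, size_t Write) const
    {
        return (Write + Buffer.size() - Read) % Buffer.size();
    }
    size_t FreeSpace(size_t Read, size_t Write) const
    {
        return Buffer.size() - 1 - Queued(Read, Write);
    }

    std::vector<float> Buffer;
    std::atomic<size_t> ReadPos;
    std::atomic<size_t> WritePos;
};
```

Because each index is written by exactly one thread, this stays correct without locking, which matters inside real-time audio callbacks.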

Hi GrobiThee,

In the very last stage I use VBAP followed by delay compensation and speaker equalization. The actual rendering is part of another audio engine connected out of process, which is why the submix endpoint was the preferred choice. In your case, does RtAudio also encapsulate ASIO, so that the clock is driven by your ASIO device? Is OnAudioCallback in sync with your RtAudio callback because it is the only audio device in the system? If not, what actually drives OnAudioCallback in your case? There must be another clock source…

Hi everyone,

My higher-order ambisonics playback works now. The encoder and the mixer are implemented inside Unreal. My Unreal plugin then transfers the order-4 ambisonics signal (25 channels) to another rendering engine via inter-process communication (a socket), where it is prepared for playback on an ASIO device with many channels. My audio engine then performs the VBAP, the speaker equalization, and the delay alignment. It really works very well :-).
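Shipping 25-channel float buffers over a socket needs some framing so the receiver can validate each packet. The wire format below (header layout, struct and function names) is a hypothetical sketch, not the poster's actual protocol; the actual socket send/receive is omitted.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical wire format for one HOA buffer: a fixed header
// (channel count, frames per channel, running sample counter)
// followed by interleaved float32 samples.
struct AmbiFrameHeader
{
    uint32_t NumChannels;   // 25 for 4th-order ambisonics
    uint32_t NumFrames;     // samples per channel in this packet
    uint64_t SampleCounter; // running counter, useful for drift monitoring
};

// Serialize header + interleaved samples into one contiguous packet.
std::vector<uint8_t> PackAmbiFrame(const AmbiFrameHeader& Header,
                                   const std::vector<float>& Interleaved)
{
    std::vector<uint8_t> Packet(sizeof(AmbiFrameHeader) +
                                Interleaved.size() * sizeof(float));
    std::memcpy(Packet.data(), &Header, sizeof(Header));
    std::memcpy(Packet.data() + sizeof(Header), Interleaved.data(),
                Interleaved.size() * sizeof(float));
    return Packet;
}

// Validate and deserialize a packet; returns false on a size mismatch.
bool UnpackAmbiFrame(const std::vector<uint8_t>& Packet,
                     AmbiFrameHeader& OutHeader,
                     std::vector<float>& OutInterleaved)
{
    if (Packet.size() < sizeof(AmbiFrameHeader))
        return false;
    std::memcpy(&OutHeader, Packet.data(), sizeof(OutHeader));
    const size_t PayloadBytes = Packet.size() - sizeof(OutHeader);
    const size_t Expected = size_t(OutHeader.NumChannels) *
                            OutHeader.NumFrames * sizeof(float);
    if (PayloadBytes != Expected)
        return false;
    OutInterleaved.resize(PayloadBytes / sizeof(float));
    std::memcpy(OutInterleaved.data(), Packet.data() + sizeof(OutHeader),
                PayloadBytes);
    return true;
}
```

The running sample counter in the header gives the receiver a cheap way to detect dropped packets and to observe long-term drift between the two clock domains.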

While finishing, I had the following findings:

  1. The clock for audio output in the submix endpoint is derived from the default audio device. Since both my ASIO device and the default system audio device have fairly accurate clocks, clock drift is not really a problem. To absorb the buffering jitter on both sides, a jitter buffer with a depth of 4 buffers (1024 samples per channel each) soaks up all remaining inaccuracies.
  2. If a source is routed to my soundfield submix endpoint (encoder) in Unreal, its relative position is not reported with real position data until an attenuation instance is attached: with attenuation off, a stereo source reports azimuth angles of 90 and 270 degrees, and a mono source an azimuth of 0. Once an attenuation is attached, all angles are reported according to the current scene, but in radians. Reporting azimuth in degrees with attenuation off looks like a bug in Unreal.
  3. The elevation and azimuth angles are reported in a rather unusual convention. I am sure there are good reasons for this kind of directivity computation, but I had to derive new "real" azimuth and elevation values to match my spatial audio rendering engine.
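The jitter buffer from finding 1 can be sketched as a block FIFO that only starts draining once a target depth has accumulated. This is an illustrative sketch under the assumptions stated in the finding (4 blocks of 1024 samples per channel), not the actual implementation; all names are made up.

```cpp
#include <cstddef>
#include <deque>
#include <vector>

// Block-based jitter buffer: queue whole audio blocks and only begin
// draining once a target depth (e.g. 4 blocks) has accumulated, so
// buffering jitter on either side never underruns the output.
class BlockJitterBuffer
{
public:
    explicit BlockJitterBuffer(size_t TargetDepthInBlocks)
        : TargetDepth(TargetDepthInBlocks), bPrimed(false) {}

    void PushBlock(std::vector<float> Block)
    {
        Queue.push_back(std::move(Block));
        if (Queue.size() >= TargetDepth)
            bPrimed = true; // enough backlog to absorb jitter
    }

    // Returns false (caller should output silence) until the buffer is
    // primed, or whenever it has run empty.
    bool PopBlock(std::vector<float>& OutBlock)
    {
        if (!bPrimed || Queue.empty())
            return false;
        OutBlock = std::move(Queue.front());
        Queue.pop_front();
        return true;
    }

    size_t Depth() const { return Queue.size(); }

private:
    std::deque<std::vector<float>> Queue;
    size_t TargetDepth;
    bool bPrimed;
};
```

With a depth of 4 blocks of 1024 samples at 48 kHz, this adds roughly 85 ms of latency in exchange for immunity to callback-timing jitter on both sides.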
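A practical workaround for the unit inconsistency in finding 2 is to normalize every reported azimuth to radians based on whether an attenuation instance is attached. This helper is a hypothetical sketch of that workaround (the function and parameter names are not Unreal API), valid only as long as the observed behavior holds.

```cpp
// Heuristically normalize a reported azimuth to radians, based on the
// observation that Unreal reports degrees when no attenuation instance
// is attached and radians when one is. Illustrative workaround only.
inline float AzimuthToRadians(float ReportedAzimuth, bool bHasAttenuation)
{
    // With attenuation attached, values already arrive in radians.
    if (bHasAttenuation)
        return ReportedAzimuth;
    // Without attenuation, values arrive in degrees (e.g. 90/270 for stereo).
    constexpr float DegToRad = 3.14159265358979323846f / 180.0f;
    return ReportedAzimuth * DegToRad;
}
```

If the underlying bug is ever fixed, the degree branch becomes wrong, so a workaround like this should be guarded by a check against the engine version in real code.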

If anyone is interested in the details of the soundfield submix endpoint, you are welcome to contact me.

Best regards