Render sound to dedicated output channels of an audio device

Hi, is there a way to route or render sounds and submixes to dedicated output channels of a sound card / audio device?
Since the update to 4.25 an “EndpointSubmix” appears in the editor, which suggests that this might be possible soon.
In particular, this is for a possible application in a simulation laboratory with several loudspeakers, where automatic downmixing to stereo, 5.1, etc. is not desired.

Hi @davidmunich There have been multiple posts of people asking this same question in different forms since at least April of last year. Unfortunately nobody seems to know how, or it has been impossible up to this point. “EndpointSubmix” seems to be a rough solution for many different use cases if you can figure out how to set it up. At the moment there are no examples or tutorials on that.
If you figure it out, please post an update here. It’d be very helpful for everyone looking to solve this.

Thanks Jay!

In the engine the EndpointSubmix is described as:
“Sound Submix class meant for sending audio to an external endpoint, such as controller haptics or an additional audio device.”

So according to that it seems that this would be the way to go.
Since it is exposed to the user in the editor I was assuming that it is a public feature and someone would know how to use it.

Hopefully that someone will give us an update here soon :slight_smile:

There are probably a bunch of ways to approach this. In cases of complex installations, like say a large art installation, it may be more prudent to roll your own Spatialization Plugin which can more concisely distribute sound sources to the appropriate speaker positions. The Spatialization plugin allows a fire-and-forget mode which bypasses the normal Submix rendering pathway, for Spatialization systems that require the original source buffers and source data to do their thing.

On the other hand, if you’re just trying to special case certain speakers in your system, then you could create a Submix Effect that simply routes its non-spatialized input to specific speakers.

Both of these cases are outside of the traditional rendering pathway, and while they’re not impossible, they will require some programming skills.
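To make the first option a bit more concrete: at its core, a roll-your-own system is mostly gain math that places each source onto your physical speaker layout. Below is a rough, generic sketch of constant-power pairwise panning across an arbitrary horizontal speaker ring. The function name, the sorted-azimuth table and the interleaved buffer layout are my own assumptions for illustration, not anything the engine gives you.

```cpp
#include <cmath>
#include <vector>

// Sketch: pan a mono source across an arbitrary ring of loudspeakers by
// crossfading between the two speakers that bracket its azimuth.
// SpeakerAzimuths: one entry per output channel, in radians, sorted ascending in [0, 2*pi).
void PanMonoToRing(const float* MonoIn, float* InterleavedOut, int NumFrames,
                   const std::vector<float>& SpeakerAzimuths, float SourceAzimuth)
{
    const int NumSpeakers = static_cast<int>(SpeakerAzimuths.size());
    if (NumSpeakers < 2) { return; } // sketch only handles a real ring

    const float TwoPi = 6.283185307f;

    // Wrap the source azimuth into [0, 2*pi) to match the speaker table.
    float Az = std::fmod(SourceAzimuth, TwoPi);
    if (Az < 0.0f) { Az += TwoPi; }

    // Find the pair of adjacent speakers that brackets the source azimuth.
    int A = NumSpeakers - 1; // defaults to the wrap-around segment (last -> first)
    for (int i = 0; i < NumSpeakers - 1; ++i)
    {
        if (Az >= SpeakerAzimuths[i] && Az < SpeakerAzimuths[i + 1]) { A = i; break; }
    }
    const int B = (A + 1) % NumSpeakers;

    // Fractional position between the two speakers, then constant-power gains.
    float Span = SpeakerAzimuths[B] - SpeakerAzimuths[A];
    if (Span <= 0.0f) { Span += TwoPi; } // wrap-around segment
    float Offset = Az - SpeakerAzimuths[A];
    if (Offset < 0.0f) { Offset += TwoPi; }
    const float T = Offset / Span;
    const float GainA = std::cos(T * 1.570796327f);
    const float GainB = std::sin(T * 1.570796327f);

    // Write the panned source into an interleaved multichannel output buffer.
    for (int Frame = 0; Frame < NumFrames; ++Frame)
    {
        for (int Ch = 0; Ch < NumSpeakers; ++Ch)
        {
            const float Gain = (Ch == A) ? GainA : (Ch == B) ? GainB : 0.0f;
            InterleavedOut[Frame * NumSpeakers + Ch] = MonoIn[Frame] * Gain;
        }
    }
}
```

A real spatialization plugin would run something like this per source in its process callback, driven by the emitter position the engine passes in.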

@dan.reynolds Minus_Kelvin also suggested the Submix Effect approach on the AnswerHub: https://answers.unrealengine.com/que…namically.html

I’m not sure what you mean by “simply routes”. As far as I understand, you need to initialize a platform-specific audio device API and render the Submix Effect’s buffer to the audio device using that API.

First I tried it with WASAPI and the IMMDevice API, but I couldn’t figure out a proper way to sync the Submix Effect buffer with the IMMDevice buffer.
Then I tried the XAudio2 API, but it failed to render any sound on the new device. I reckon it might have had something to do with the XAudio2 hardware already being initialized by Unreal Engine’s own implementation.
Then I found out about the “new” Windows Runtime AudioGraph class, which I’m going to try at some point.

The main problem I think I have is syncing up the buffers without any sort of ring buffer. I’m trying to make it work with the Submix Effect pushing its buffer, whereas most if not all of those APIs pull on their own accord, providing a callback for you to fill up.
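For reference, the usual way to bridge that push/pull mismatch is a small lock-free ring buffer in between: the Submix Effect pushes into it from its render callback, and the device API’s callback pops whatever it asks for, zero-filling on underrun. A minimal single-producer/single-consumer sketch (names are mine):

```cpp
#include <algorithm>
#include <atomic>
#include <cstddef>
#include <vector>

// Minimal single-producer / single-consumer float ring buffer (sketch).
// The submix effect pushes samples, the device API render callback pops them.
// Uses monotonically increasing counters; capacity is assumed tiny compared to
// the point where the 64-bit counters would wrap.
class FSampleRingBuffer
{
public:
    explicit FSampleRingBuffer(size_t InCapacity)
        : Buffer(InCapacity), ReadPos(0), WritePos(0) {}

    // Producer side (submix effect). Returns how many samples were actually written.
    size_t Push(const float* Data, size_t Num)
    {
        const size_t Read  = ReadPos.load(std::memory_order_acquire);
        const size_t Write = WritePos.load(std::memory_order_relaxed);
        const size_t Free  = Buffer.size() - (Write - Read);
        const size_t ToWrite = std::min(Num, Free);
        for (size_t i = 0; i < ToWrite; ++i)
        {
            Buffer[(Write + i) % Buffer.size()] = Data[i];
        }
        WritePos.store(Write + ToWrite, std::memory_order_release);
        return ToWrite;
    }

    // Consumer side (device callback). Fills the remainder with silence on underrun.
    size_t Pop(float* Out, size_t Num)
    {
        const size_t Write = WritePos.load(std::memory_order_acquire);
        const size_t Read  = ReadPos.load(std::memory_order_relaxed);
        const size_t Available = Write - Read;
        const size_t ToRead = std::min(Num, Available);
        for (size_t i = 0; i < ToRead; ++i)
        {
            Out[i] = Buffer[(Read + i) % Buffer.size()];
        }
        std::fill(Out + ToRead, Out + Num, 0.0f); // underrun -> silence
        ReadPos.store(Read + ToRead, std::memory_order_release);
        return ToRead;
    }

private:
    std::vector<float> Buffer;
    std::atomic<size_t> ReadPos;
    std::atomic<size_t> WritePos;
};
```

Sizing it to a few device quanta keeps the pull side from starving while still keeping latency low.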

In Jay’s link I’d like to highlight this response from Minus_Kelvin (who I assume is Aaron McLeran) to clarify the future role of the EndpointSubmix:
“In 4.25 we have the concept of an “endpoint submix” which is where we are going to implement this, hopefully in 4.26, as a fundamental feature of submixes”

So if I understood it correctly, the EndpointSubmix is currently no more than a placeholder, and around the 4.26 release it should become possible to assign dedicated output channels and different audio devices through it.

As far as I understand from the available information, from digging through the code, and from trying to make it work myself, the Endpoint Submix provides an interface for the user to select options, based on which your implementation routes the audio stream (or does something else with it).

  • In code you specify the options, which may be the number of channels, a send level, the name of the device, etc.
  • These options are exposed to the Editor and to Blueprints. I think you can then expose them to the end user in a built project as well.
  • In code you implement the routing to endpoint devices based on those options. At the moment you have to deal with a low-level platform-specific API (WASAPI, XAudio2, CoreAudio, etc.) in order to render the audio stream to a device, or to implement some sort of haptic feedback based on the audio and the selected options (a rough sketch of that shape follows below).
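To give an idea of the shape this tends to take, here is a purely illustrative sketch. None of these class or method names are the engine’s actual API; they just mirror the three bullets above: a settings object that can be exposed to the user, and an endpoint object that owns the platform-specific rendering.

```cpp
#include <cstdint>
#include <memory>
#include <string>

// Hypothetical names only -- not the engine's classes.

// Options you would expose to the editor / end user.
struct FMyEndpointSettings
{
    std::string DeviceName;   // which physical device or output block to open
    int32_t     NumChannels;  // how many channels the endpoint expects
    float       SendLevel;    // gain applied before handing audio to the device
};

// The object that owns the platform API (WASAPI, XAudio2, CoreAudio, ...)
// and receives the submix's rendered audio every callback.
class IMyAudioEndpoint
{
public:
    virtual ~IMyAudioEndpoint() = default;

    // Called on the audio render thread with one interleaved buffer per callback.
    virtual void OnAudioBuffer(const float* Interleaved, int32_t NumFrames, int32_t NumChannels) = 0;
};

// Factory that turns the user-selected settings into a live endpoint instance.
class IMyAudioEndpointFactory
{
public:
    virtual ~IMyAudioEndpointFactory() = default;
    virtual std::unique_ptr<IMyAudioEndpoint> CreateEndpoint(const FMyEndpointSettings& Settings) = 0;
};
```

The settings part is what ends up visible as options, while the endpoint instance is where the low-level device code lives.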

At the moment it looks like an abstract interface for many different use cases, if you’re able to extend it to your needs.
Even though functionality to switch the output audio device has been implemented in the engine for a long time, it only covers the case when the default system device changes, so the engine can hot-swap to the new default device.

Endpoint Submix does seem like a placeholder, or rather a foundation for a new set of features that has to be laid before building the walls. You can achieve similar results with a Submix Effect.

I think I’m not 100% clear on what you’re creating. Some kind of sound installation?

If you have a 7.1 audio device, the audio output per speaker is a discrete audio channel. Just because it’s a 7.1 WDM doesn’t mean you have to treat it like anything but 8 audio channels. They can go to whatever speaker you want outside your computer.

Submix Effects are pretty easy to create, and it’d be simple to create submixes that mix their audio into a single channel. Knowing your output hardware configuration, you could anticipate 8 channels, etc., literally giving you 8 submixes, one for each channel, all feeding into the Master Submix.

Then any sound you wanted to play out of a specific speaker, you could just route that sound to the corresponding Submix.
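For what it’s worth, the mixing inside such a per-channel submix effect boils down to something like the helper below. It’s only a sketch with made-up names, not engine code; in the submix effect’s process callback (OnProcessAudio, if I remember correctly) you’d do the equivalent with the interleaved buffers it hands you.

```cpp
#include <cstdint>

// Sketch: collapse an interleaved submix buffer to mono and send it to one
// chosen output channel, leaving all other channels silent.
// Hypothetical helper for illustration, not an engine API.
void RouteToSingleChannel(const float* In, float* Out,
                          int32_t NumFrames, int32_t NumInChannels,
                          int32_t NumOutChannels, int32_t TargetChannel)
{
    for (int32_t Frame = 0; Frame < NumFrames; ++Frame)
    {
        // Sum all input channels of this frame down to mono.
        float Mono = 0.0f;
        for (int32_t Ch = 0; Ch < NumInChannels; ++Ch)
        {
            Mono += In[Frame * NumInChannels + Ch];
        }

        // Write the mono sum to the target channel, silence everywhere else.
        for (int32_t Ch = 0; Ch < NumOutChannels; ++Ch)
        {
            Out[Frame * NumOutChannels + Ch] = (Ch == TargetChannel) ? Mono : 0.0f;
        }
    }
}
```

With eight of those submixes, each hard-wired to a different TargetChannel, any sound routed to submix N comes out of speaker N.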

That’s what I did. I used WinRT to render to a specific set of outputs, feeding its ring buffer from a Submix Effect. It turned out a bit finicky; when switching devices it crackles a bit. I’m guessing my ring buffer handling is a little messed up.

But anyway, I suggest taking a look at the WinRT API if you’re on Windows. The available documentation is mostly for C#, but it applies almost one-to-one to C++.
https://docs.microsoft.com/en-us/win…nd-winrt-apis/
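In case it saves someone time, the AudioGraph setup I ended up with looks roughly like the C++/WinRT sketch below. Error handling, device enumeration and the actual frame filling from the ring buffer are left out, and the details are from memory, so treat it as approximate rather than gospel.

```cpp
#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Devices.Enumeration.h>
#include <winrt/Windows.Media.Audio.h>
#include <winrt/Windows.Media.Render.h>

using namespace winrt::Windows::Media::Audio;
using namespace winrt::Windows::Media::Render;
using namespace winrt::Windows::Devices::Enumeration;

// Sketch: build an AudioGraph that renders to a specific (non-default) output device.
winrt::Windows::Foundation::IAsyncAction CreateGraphForDevice(DeviceInformation device)
{
    AudioGraphSettings settings(AudioRenderCategory::Media);
    settings.PrimaryRenderDevice(device); // the endpoint we actually want to use

    CreateAudioGraphResult result = co_await AudioGraph::CreateAsync(settings);
    if (result.Status() != AudioGraphCreationStatus::Success)
    {
        co_return;
    }
    AudioGraph graph = result.Graph();

    // Output node on the chosen device, plus a frame input node we feed ourselves.
    auto outputResult = co_await graph.CreateDeviceOutputNodeAsync();
    AudioDeviceOutputNode outputNode = outputResult.DeviceOutputNode();

    AudioFrameInputNode inputNode = graph.CreateFrameInputNode();
    inputNode.AddOutgoingConnection(outputNode);

    // QuantumStarted fires whenever the graph wants more audio: pop the ring
    // buffer there, wrap the samples in an AudioFrame and call node.AddFrame(frame).
    inputNode.QuantumStarted(
        [](AudioFrameInputNode const& /*node*/, FrameInputNodeQuantumStartedEventArgs const& /*args*/)
        {
            // ... fill args.RequiredSamples() frames from the ring buffer here ...
        });

    graph.Start();
    // In real code, keep graph and the nodes alive somewhere; this sketch lets them go out of scope.
}
```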

Hi everyone,

the release notes for the 4.26 preview make me hopeful that the problem described here has finally been addressed:

Audio Updates:

  • Dynamic Speaker Map Control for 2D Sounds. With this feature, 2D (non-spatialized) sounds can be controlled from a Blueprint to determine how source channels map to output channels, specifically speaker mapping.

Has anyone already discovered the feature in the editor and can tell me where I can activate it to test it?

Hi @davidmunich,

I haven’t tried it, but it sounds like you’re able to specify which speaker your 2D sound channels map to, meaning whether a channel goes to L, R, Ls, Rs, etc., not necessarily to a specific output of your audio device. But I might be wrong here.

This seems to be correct, but as @dan.reynolds has mentioned, these are just eight audio channels using the standard Windows API speaker assignment. The cool thing is you can record these directly to a WAV file with eight channels. In fact my motherboard has 7.1 that I can activate without having speakers. If you don’t have an audio card (most modern motherboards support 7.1 audio), you can use another technique with a small driver/app called ‘Virtual Audio Cable’ and assign eight virtual channels that Unreal will recognize as a Windows 7.1 audio device, because ‘Virtual Audio Cable’ also uses the Windows Audio API labeling convention.

I was hoping to fool Unreal into believing that, if I created 12 virtual audio output channels, it would recognize them as 7.1.4 for height channel speaker placement, allowing for a Dolby Atmos-like experience. Sadly, the Windows 10 API does not support more than eight channels, and I’m not sure whether Unreal recognizes the channel assignment if you have a Dolby Atmos receiver installed for 5.1.4, 7.1.2 or 7.1.4; I’m not willing to spend $1000 to find out at this moment.

If Dan or anyone at Unreal can confirm that height channels exist when a Dolby Atmos receiver is hooked up, that would be great, since I currently have to create two passes in Sequencer to derive height information. I do this by rotating the camera 90 degrees (roll axis) and mapping these speakers’ ± height positions into my AmbiX encoder. So I am really working with six channels X/Z (azimuth) and six channels Y (elevation), plus two channels for the 0/0/0 axis for center/omni and LFE (with a low-pass filter applied). I do this because I don’t want to recreate my soundfield twice when I create stereo 360 videos for YouTube/Facebook. It works great except for the slight timing issue of having two passes that I must manually line up in Adobe Premiere or a DAW (Reaper).

If it is confirmed that Unreal will produce height channels in the WAV file, I will purchase an AV receiver just for the capability to produce discrete channels of the 3D audio field. Ultimately I would love someone with better knowledge of the audio APIs and C++ to create a second-order Ambisonics WAV file.

Update: I purchased a refurbished LG SN7R 5.1.2 Channel Soundbar with Dolby Atmos to test the possibility of Unreal sending out proper channel assignment. If it works, I will at least have two L/R height channels in the WAV out file.

Update 2/22: I received the LG SN7R and can now see Dolby Atmos, and I do render 8 channels in the WAV file, but sadly the files appear as standard 7.1 audio tracks instead of including two height channels. I compared this to a standard 7.1 output WAV file with the same track outputs. So it appears Unreal cannot recognize a Dolby Atmos HDMI device as anything other than standard 7.1 surround channels.

Hi mebalzer,

thank you for picking up this topic.
Regarding my initial question - did you find a way to map sounds to dedicated speakers?
I’d appreciate a hint on how this can be implemented in the editor.

Thanks
David

I think there is news about it in Unreal 5.1:

https://portal.productboard.com/epicgames/1-unreal-engine-public-roadmap/c/858-multichannel-audio-output?utm_medium=social&utm_source=portal_share

Hi, so after investing a whole morning into getting 8 individual output channels from a 7.1 MetaSound, I believe I found a solution. You need to tell the OS what to do with a multichannel stream from Unreal. In my case I have a MOTU 1248 AVB and I am on Windows, so this applies to Windows only at the moment. The work must be done outside of Unreal.

The trick is to go into the system sound settings and set the output device per application. I have set MOTU 1-24 as the output for Unreal, and now Unreal sends 8 channels to the first 8 outputs available on the interface. Luckily MOTU has very good matrix routing software, and from there I can map the channels to the physical outputs. Note that I have not found any way of mapping outputs in a Submix; it will default to whatever “master” submix you use in your project settings.


Hey there!

A bit late to the party, but if you’re still around, I’m stuck on something quite similar. Your solution seems like a good path to follow. I was wondering though, did you try it in a packaged build? Or was it just in the editor?

And also, did you have to set up something specific in Unreal to have it use 8 channels? Or was it sending to those 8 buses by default?

Many thanks

Sorry for the delay, I tried it only in the editor. Since the MOTU is not seen by the OS as a possible standard (quad, 5.1 or 7.1) output device, in the case mentioned above I used a 7.1 output from some MetaSounds and built a self-made spatialization system based on the spatialization interface. Sounds complicated, but I had only 5 MetaSounds, so it was quite simple (just some maths). If your OS sees your external interface as a possible device for multichannel audio (check the OS audio device properties), then the OS will decode the audio stream from UE based on the number of output channels.