FAQ: Audio

Dec 2, 2020

What is the Audio Mixer, and how do I enable it?

  • The Audio Mixer is Unreal Engine’s native audio renderer; its full documentation is available on Epic’s official documentation site.
  • The Audio Mixer is enabled by default in UE4.24 and later versions.
  • For earlier versions, it can be enabled by adding either UseAudioMixer=true (for 4.22 and later) or AudioMixerModuleName=<PlatformModuleName> (for versions before 4.22) under the [Audio] section of the Engine.ini file for each of your project’s supported platforms.
  • The full list of platform-specific Audio Mixer module names is as follows:
    • Windows: AudioMixerXAudio2
    • Mac: AudioMixerAudioUnit
    • Linux: AudioMixerSDL
    • Android: AudioMixerAndroid
    • iOS: AudioMixerAudioUnit
    • HTML5: AudioMixerSDL
    • Switch: SwitchAudioMixer
    • Xbox: AudioMixerXAudio2
    • PS4: AudioMixerAudioOut
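Putting the above together, a minimal Engine.ini sketch for a Windows project might look like the following (4.22+ projects only need the UseAudioMixer flag; older versions name the platform module from the list above instead):

```ini
; WindowsEngine.ini (substitute the module name for your target platform)
[Audio]
; UE 4.22 and later (but before 4.24, where the Audio Mixer is on by default):
UseAudioMixer=true
; Versions before 4.22 name the platform's Audio Mixer module instead:
AudioMixerModuleName=AudioMixerXAudio2
```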

What are some useful audio features I might not know about?

My audio sounds distorted, buzzy, or has unexpected pops and clicks. What’s going on?

  • This is most commonly caused by hitches, e.g., something is slowing down your project’s performance to the extent that the audio can’t render fast enough to avoid audible glitches.
  • If this is the case, it is often visible in the debug log as repeated warnings of “Waited [x] ms for audio thread.”
  • Some other possible causes include:
    • Heavy clipping or unreasonable gain staging.
      • This can be mitigated by adjusting the headroom gain reduction defined in each platform’s Engine.ini file: set PlatformHeadroomDB under [Audio] to the desired value.
    • Inaccurate float to integer conversions, often due to integer wrap-around on extremely loud sound sources.
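For reference, the headroom setting mentioned above is a plain ini entry; the value below is illustrative, not a recommendation, and should be tuned to your content:

```ini
; Engine.ini for the platform that is clipping
[Audio]
; Headroom in dB applied to the output mix; raise it if loud content clips
PlatformHeadroomDB=3
```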

A set of related sounds in my project suddenly sound muffled, or are unexpectedly muted. What’s going on?

  • This is most commonly caused by sounds being processed with an unexpected or unwanted Low Pass Filter, such as those involved in Sound Class Mixes, Ambient Zones, or Occlusion and Attenuation.
  • You can debug this by running console commands that selectively disable potential culprits, or display additional information about them. Good examples include “au.DisableDistanceAttenuation 1” and “au.Debug.SoundMixes 1”.

I’m calling some audio code in C++, but now my project crashes shortly after a sound ends. What’s going on?

  • This is most commonly caused by an audio UObject being Garbage Collected while the Audio Render Thread still holds a reference to it. Custom C++ code can introduce additional ways for UObjects to be Garbage Collected at thread-unsafe times.
  • The main ways to prevent this are to avoid using UObjects directly on the Audio Render Thread, and to keep a reference to any audio UObjects alive, for example by storing them in a UPROPERTY or a TStrongObjectPtr.
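The lifetime rule above can be illustrated with a small standard-C++ analogy, where std::shared_ptr/std::weak_ptr stand in for Unreal’s strong-reference mechanisms (SoundAsset and CanPin are hypothetical names for illustration, not Unreal API):

```cpp
#include <cassert>
#include <memory>

// Hypothetical stand-in for an audio UObject.
struct SoundAsset {
    int SampleRate = 48000;
};

// The render side holds only a weak handle; it can be safely "pinned"
// (locked) only while some owner still holds a strong reference.
bool CanPin(const std::weak_ptr<SoundAsset>& Handle) {
    return !Handle.expired();
}
```

While the owner’s strong reference exists, locking the weak handle yields a valid pointer; once the owner releases it, the handle expires instead of dangling. A UPROPERTY or TStrongObjectPtr plays the same role for UObjects: it keeps the garbage collector from destroying the object while the Audio Render Thread may still touch it.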

Is Sound Field Rendering supported?

  • As of 4.25, Unreal offers support for first-order ambisonic assets. These can be used similarly to a remote-rendering spatialization technique: you send source data through a submix, which encodes and renders it in a given ambisonics format.
  • We can also decode assets that were recorded in an ambisonics format and use them as we would a regular Sound Wave, with the added ability to rotate the sound field. Because ambisonics-based spatialization works independently of speaker setup, this system helps with multi-platform support.

Is there a way to duck based on signal?

  • Side chaining is coming in 4.26. In the reverb demo specifically, we used side-chaining at control rate, as opposed to audio rate. We’re implementing this through Audio Buses (when the source is not sonified) and Source Buses (when the source is audible to the user).
  • Think of these buses like patch cables.
  • You can duck by sending the desired signal to an Audio Bus and setting that bus as the External Audio Bus in a Dynamics Processor Submix Effect, which lets an arbitrary signal drive the compressor. You can accomplish the same form of sidechaining through a Submix: set the Dynamics Processor’s Key Source to Submix and assign the desired Submix to the External Submix parameter.

What about codec problems?

  • Audio codec settings are currently hidden from sound designers.
  • We are planning to have a tool which allows designers to choose their sound quality, and to define what compression and decompression algorithms they want to use and in what circumstances.

What are methods of modulation for UE4?

  • UE4 contains an Audio Modulation plugin that replaces the functionality of the Sound Class / Sound Mix graph. It was released in Beta in 4.25 and is being refactored for 4.26. Its current documentation is available on Epic’s official documentation site.
  • You can also use Unreal’s OSC implementation to communicate with a fader port. You can mix the game live on the hardware; workflow iteration was very fast when we did this for FN.