Best Practices for Avoiding Underruns in Custom Audio Code

Article written by Anna L.

When adding custom DSP code or otherwise altering audio functionality in Unreal Engine, users sometimes hear unexpectedly “buzzy” or “choppy” sound, or “clicks” and “pops” in their audio. This is often a performance issue, and almost certainly one if the logs also report audio buffer underruns. Unexpected blocks of silence in the waveform can be another sign of buffer underruns, although silence can also stem from other issues, such as an incorrect channel map or an inaccurate frame count in signal processing code.

Audio generally needs to render very quickly, at roughly 40,000 to 48,000 samples per second. This speed is necessary to capture all frequencies within the audible range: sampled audio can only represent frequencies below half the sample rate (the Nyquist frequency), so a sample rate under 40 kHz cannot reproduce the full range of human hearing, which extends to roughly 20 kHz.

In the context of Unreal, insufficiently fast audio rendering leads to an issue called buffer underruns. Like most audio engines, Unreal renders audio in buffers of the size specified in the system settings. If the audio engine is running slowly, the audio API may request a new buffer before the engine has calculated enough data to fill it. In practice, this means that while individual chunks of audio render at the correct sample rate, gaps of zero voltage can appear between them, causing audible glitches. Because of this, some coding techniques that are reasonable, even encouraged, in other areas can cause problems when introduced into audio rendering code if they interfere with the speed of rendering.

Depending on how severely audio rendering, or the program as a whole, is underperforming, the glitches can express themselves in different ways. Infrequent, short buffer underruns are often described as “pops,” and can be seen in the waveform as a sharp, unexpected jump to zero voltage. More constant underruns often produce a sound described as “distorted” or “buzzy”; in more severe cases only short bursts of sound are audible, or the result is nearly unrecognizable as the original asset. What you aren’t likely to hear, however, is the decrease in pitch and speed that people sometimes associate with audio playing too slowly: Unreal renders each individual buffer at the correct rate, but gaps appear between the buffers.

There are quite a few things that can lead to underruns in audio code, but the following are general guidelines:

  • An underrun anywhere in the engine often impacts the audio engine as well, so it’s worth keeping an eye on the general performance of the project through tools like Insights.
  • Use SIMD optimizations when possible, instead of performing operations sample by sample.
  • Pay attention to what thread you are on. The Audio Render Thread runs much faster than the Audio Thread or the Game Thread, and so signal processing and rendering code is often best performed there.
  • Avoid putting blocking calls on the Audio Render Thread. This includes I/O, memory allocations, mutexes that rely on code with unpredictable timing, or third-party calls that might be blocking internally.
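To illustrate the SIMD point, the sketch below contrasts a per-sample gain loop with a manually vectorized version processing four samples per iteration. It uses raw SSE intrinsics for portability of the example; in Unreal code you would more likely reach for the engine's own vector abstractions (such as the VectorRegister family) or its buffer-operation helpers. The function names here are illustrative, not engine API.

```cpp
#include <immintrin.h> // SSE intrinsics (x86); Unreal wraps these in its own vector types
#include <cstddef>

// Scalar version: one multiply per loop iteration.
void ApplyGainScalar(float* Buffer, std::size_t NumSamples, float Gain)
{
    for (std::size_t i = 0; i < NumSamples; ++i)
    {
        Buffer[i] *= Gain;
    }
}

// SIMD version: four multiplies per iteration using 128-bit registers.
void ApplyGainSimd(float* Buffer, std::size_t NumSamples, float Gain)
{
    const __m128 GainVec = _mm_set1_ps(Gain);
    std::size_t i = 0;
    for (; i + 4 <= NumSamples; i += 4)
    {
        const __m128 Samples = _mm_loadu_ps(Buffer + i);
        _mm_storeu_ps(Buffer + i, _mm_mul_ps(Samples, GainVec));
    }
    // Handle the tail samples that don't fill a full register.
    for (; i < NumSamples; ++i)
    {
        Buffer[i] *= Gain;
    }
}
```

Beyond the raw instruction count, operating on whole buffers at once also tends to be friendlier to the cache and the branch predictor than sample-by-sample logic.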
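As one way to avoid blocking on the render thread, a parameter written by the game thread and read during rendering can be shared through an atomic rather than a mutex: the render callback then never waits on a lock the game thread might be holding. This is a minimal sketch of that pattern with a hypothetical gain parameter, not an engine class.

```cpp
#include <atomic>

// Hypothetical parameter shared between the game thread and the audio
// render thread. A lock-free atomic load never blocks, whereas a mutex
// held by the game thread could stall rendering past its buffer deadline.
class FGainParameter
{
public:
    // Called from the game thread whenever the value changes.
    void Set(float NewGain) { Gain.store(NewGain, std::memory_order_relaxed); }

    // Called from the audio render thread once per buffer; never blocks.
    float Get() const { return Gain.load(std::memory_order_relaxed); }

private:
    std::atomic<float> Gain{1.0f};
};
```

For larger payloads that don't fit in a single atomic (curves, impulse responses, buffers), the same goal is usually met with a lock-free queue or a pointer swap to immutable data, again keeping the render thread free of waits.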