I have a few questions regarding the maximum number of concurrent audio channels you can set in the audio section of the project settings.
What is a healthy maximum to allow these days for a game developed for PC? Right now we have it at 32 (the default), but since the game is a huge crowded battle game I’d like to increase that number to have more room to play around.
The second question is: which of these is true?
a) A sound cue counts as one audio channel towards the maximum, regardless of how many wave files play inside the cue at the same time.
b) A sound cue counts as X audio channels towards the maximum, where X is the number of wave files playing at the same time inside the cue.
If b) is true: does a stereo wave count as two audio channels, while mono counts as one?
Thank you in advance for your answer
You can use “stat soundwaves” to look at the number of sounds which are taking up the voice count. It’s not the greatest debugging tool (it’s in our backlog to create more robust audio profiling tools), but it gives you a ballpark idea of what’s going on.
Unfortunately, when you play a sound cue with a mixer node, for example one with 2 wave players, it’s not actually “mixing” the sounds together: it plays both wave players at the same time and takes up 2 voices.
Stereo sounds count as one voice.
Think of a “voice” as a discrete channel of audio that has various parameters that are applied to it: volume, pitch, position, etc.
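The mental model above can be sketched in a few lines. This is purely illustrative pseudocode of the voice-pool idea, not Unreal's actual API; `Voice`, `VoicePool`, and `play` are made-up names:

```python
# Illustrative sketch of the "voice" mental model: each actively playing
# wave occupies one voice slot that carries its own volume/pitch/position.
# These class and method names are hypothetical, not Unreal's API.
from dataclasses import dataclass, field

@dataclass
class Voice:
    wave: str                       # which wave file this voice is rendering
    volume: float = 1.0
    pitch: float = 1.0
    position: tuple = (0.0, 0.0, 0.0)

@dataclass
class VoicePool:
    max_voices: int = 32            # the cap from the project settings
    active: list = field(default_factory=list)

    def play(self, voice: Voice) -> bool:
        """Start a voice if a slot is free; a cue mixing two waves calls this twice."""
        if len(self.active) >= self.max_voices:
            return False            # pool is full: the sound is dropped
        self.active.append(voice)
        return True

pool = VoicePool(max_voices=2)
# A cue with a mixer node and two wave players consumes two voices, not one:
print(pool.play(Voice("explosion.wav")))  # True (stereo still counts as one voice)
print(pool.play(Voice("debris.wav")))     # True
print(pool.play(Voice("shout.wav")))      # False: pool full
```

The key point the sketch makes concrete: the voice count is tracked per playing wave, not per cue, and a stereo asset still occupies a single voice slot.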
Thank you very much. That answers my second question, at least.
Oh sorry, as for the first question: this is a debated topic in game audio. Personally, I think 32 is sufficient if you use concurrency, mixing, and sound source priority properly. I can see an argument for VR games using more voices for more articulated sound design (smaller sounds, more sound sources) given the extra presence you have in VR.

The problem in game audio engines with using a lot of voices is that the process of mixing sources together technically introduces noise. The noise floor of each source accumulates. As the number of (decorrelated) sources approaches infinity, the output mix buffer approaches white noise. So, somewhere along the line, having more sources isn’t just a CPU issue, but a noise-level, mixing, and dynamic-range issue. In other words, it’s an aesthetic issue.
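You can see the noise-floor accumulation numerically. The sketch below (my own illustration, not engine code) mixes N independent noise sources and measures the combined RMS level: decorrelated sources grow the mix by roughly sqrt(N), so 4 sources are about twice as loud as 1, not four times:

```python
# Sketch: summing N decorrelated noise sources raises the combined
# noise floor by roughly sqrt(N), eating into your dynamic range.
import math
import random

def mixed_rms(num_sources: int, samples: int = 20000, seed: int = 1) -> float:
    """RMS level of `samples` frames mixed from independent uniform-noise sources."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        frame = sum(rng.uniform(-1.0, 1.0) for _ in range(num_sources))
        total += frame * frame
    return math.sqrt(total / samples)

# One uniform source has RMS ~ 1/sqrt(3) ~ 0.577; four decorrelated
# sources mix to roughly twice that (sqrt(4) = 2), not four times.
print(mixed_rms(1))
print(mixed_rms(4))
```

The same sqrt(N) growth applies to each source's noise floor, which is why a very high voice count is a mixing and dynamic-range problem before it is a CPU problem.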
“Decorrelated” just means the sources aren’t the same, so you can ignore the issue of clipping as you add them. If you play, for example, the exact same sample at the exact same time (i.e. their correlation is technically 1.0), their samples add together in pure constructive interference and will likely cause digital clipping.
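A tiny numeric sketch of that clipping point (again my own illustration, not engine code): an identical copy sums in pure constructive interference and blows past full scale, while a decorrelated partner, approximated here by a 90-degree phase shift, only grows by about sqrt(2):

```python
# Sketch: adding a signal to an exact copy of itself (correlation 1.0)
# doubles every sample, while a decorrelated partner sums more gently.
import math

def rms(signal):
    """Root-mean-square level of a list of samples."""
    return math.sqrt(sum(s * s for s in signal) / len(signal))

n = 1000  # ~21 ms of a 440 Hz tone at 48 kHz, peak amplitude 0.8
sine = [0.8 * math.sin(2 * math.pi * 440 * i / 48000) for i in range(n)]
shifted = [0.8 * math.cos(2 * math.pi * 440 * i / 48000) for i in range(n)]

correlated = [a + b for a, b in zip(sine, sine)]      # peaks near 1.6
decorrelated = [a + b for a, b in zip(sine, shifted)]

print(max(abs(s) for s in correlated) > 1.0)  # True: past full scale, digital clipping
print(rms(correlated) > rms(decorrelated))    # True: the correlated sum is louder
```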
TL;DR – there is no answer to your first question. It’s up to you.
Thanks a lot for the explanation and the answer.
The noise issue is one I am totally aware of. Typical audio thing. I did a couple of experiments regarding this to see how it adds up and to get a good feel for where to set the concurrency for each sound.
After some test runs today, I decided to go with 64 for the moment, which I think will do until the optimization process is done.
Again, thank you very much!