I now have a music manager, a weather manager, and a player 2D sound manager, all as separate MetaSoundSources. If these are merged into one MetaSoundSource and run through a single Stereo Mixer (3), will this count as 2 channels against the 32-channel limit, or will it count as the number of Wave Player node outputs (or something different from those two)?
Background of question:
What I’m trying to determine is how that channel maximum is counted. If it were counted by the number of channels across all the waves being mixed down, then I’d obviously have a large number of channels occupied, whereas if the count is based on the number of audio outputs (from the mixer), I’d only have 2. I have a guess, but I really need an evidence-based answer so I don’t consolidate functionally dissimilar MetaSoundSources for no benefit with respect to the engine’s concurrency limits.
(Text deleted here by the OP in light of the further information below.)
This isn’t supposed to be how it works; a MetaSound output should take a single voice only, so it looks like a different issue, or a very problematic bug!
By that, do you mean “voice” as “channel”, as “sound”, or as the musical term for multiple notes played simultaneously (e.g., C, E, G producing a C major chord)? The terminology is the entire problem I have in understanding how my work affects concurrency, so I’m quite curious about your answer.
My assumption thus far:
If there are 32 channels available, and a stereo sound occupies 2 of them, then, for example, a Stereo Mixer (3) in a MetaSoundSource should take two of those “channels” as a single “sound”. If one had more than 16 such MetaSoundSources playing simultaneously, one would begin to get dropouts of some sounds. A MetaSoundSource seems to be treated synonymously with a single SoundWave (when viewed with au.Debug.SoundWaves 1). Sound waves are shown as using either 1 channel (mono) or 2 channels (stereo), according to the NumChannels setting on the SoundWave.
If the above is inaccurate, that is exactly what I am trying to pin down from the varying descriptions I find of how concurrency works in the engine. If your comment means that 32 stereo MetaSoundSources can play simultaneously without dropouts, that would be a significant factor in one’s design of sound management, so please let me know. It’s the crux of what I’m trying to find out.
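For what it’s worth, one way to settle this empirically rather than by terminology is to spawn a pile of instances of the same MetaSoundSource and watch the debug output. Below is a minimal sketch under my own assumptions: the test actor class, the TestMetaSound property, and the instance count are all mine, not anything from the engine. Run it with `au.Debug.SoundWaves 1` on screen and note the point at which new instances stop being heard or listed.

```cpp
// Hypothetical stress-test actor (not engine code): drop it in a level,
// assign any MetaSound Source to TestMetaSound, and push NumInstances past 32.
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Kismet/GameplayStatics.h"
#include "Sound/SoundBase.h"
#include "VoiceStressTestActor.generated.h"

UCLASS()
class AVoiceStressTestActor : public AActor
{
    GENERATED_BODY()

public:
    // The MetaSound Source (any USoundBase works) to play many times at once.
    UPROPERTY(EditAnywhere, Category = "Test")
    USoundBase* TestMetaSound = nullptr;

    // Number of simultaneous instances to request.
    UPROPERTY(EditAnywhere, Category = "Test")
    int32 NumInstances = 40;

protected:
    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        if (!TestMetaSound)
        {
            return;
        }

        // Spawn N overlapping 2D instances of the same source. With
        // au.Debug.SoundWaves 1 (or stat sounds) enabled, the count at which
        // instances stop appearing is the effective voice ceiling.
        for (int32 i = 0; i < NumInstances; ++i)
        {
            UGameplayStatics::SpawnSound2D(this, TestMetaSound);
        }
    }
};
```

If each MetaSoundSource really does occupy a single voice regardless of its stereo output, dropouts should begin past 32 instances rather than past 16.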
“Voice” and “channel” are used interchangeably, albeit confusingly.
If you have 32 channels/voices, you can concurrently play 32 instances of audio that each require an individual voice.
A MetaSound Source should take a single voice. As far as I’m aware, the entire MetaSound is rendered into a single voice, no matter how many Wave Players or generators are inside the MSS. I’m sure there is a cost to a complex, many-layered MSS, but that’s a different point!
That sounds fairly definitive then. One can have 32 stereo MetaSoundSources playing simultaneously without loss. At 33, something will drop out (depending on factors beyond the scope of this thread that may change the 32 ceiling).
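Related to that “depending on factors” caveat: when the pool is exhausted the engine decides which voices get dropped, but you can take explicit control per sound with a Sound Concurrency asset (normally authored in the editor and assigned on the sound itself). Here is a minimal C++ sketch just to show the knobs involved; the helper name and the cap value are my own, not engine API.

```cpp
#include "Kismet/GameplayStatics.h"
#include "Sound/SoundBase.h"
#include "Sound/SoundConcurrency.h"

// Hypothetical helper: never let more than MaxVoices instances of this sound
// play at once; when the cap is hit, the oldest instance is stopped first
// instead of letting the engine pick arbitrarily at the 32-voice ceiling.
static void PlayWithConcurrencyCap(UObject* WorldContext, USoundBase* Sound, int32 MaxVoices)
{
    if (!Sound)
    {
        return;
    }

    USoundConcurrency* ConcurrencySettings = NewObject<USoundConcurrency>(WorldContext);
    ConcurrencySettings->Concurrency.MaxCount = MaxVoices;
    ConcurrencySettings->Concurrency.ResolutionRule = EMaxConcurrentResolutionRule::StopOldest;

    UGameplayStatics::SpawnSound2D(WorldContext, Sound, /*VolumeMultiplier*/ 1.f,
                                   /*PitchMultiplier*/ 1.f, /*StartTime*/ 0.f,
                                   ConcurrencySettings);
}
```

In practice you would usually assign the concurrency asset on the MetaSoundSource or a shared Sound Class rather than build it at runtime; the sketch only makes the settings visible in one place.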
I think it’s an important point, and the docs get lost in differing terminology (to me, anyway). As an example, “channels” in Project Settings refers to the maximum number of simultaneous voices, while on a SoundWave, “channels” indicates stereo vs. mono.
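To make that terminology split concrete, here is a tiny logging sketch (the helper is mine; as far as I can tell, the UAudioSettings quality-level accessor and USoundWave::NumChannels it relies on both exist in current engine versions) that prints the two meanings of “channels” side by side:

```cpp
#include "CoreMinimal.h"
#include "Sound/AudioSettings.h"
#include "Sound/SoundWave.h"

// Hypothetical helper contrasting the two uses of "channels":
// - Project Settings -> Audio -> Quality Levels -> Max Channels: the size of
//   the global voice pool (how many sounds can render at once).
// - USoundWave::NumChannels: how many audio channels one asset contains
//   (1 = mono, 2 = stereo).
static void LogChannelTerminology(const USoundWave* Wave)
{
    // Quality level 0 is just the first entry in the project's quality list.
    const UAudioSettings* AudioSettings = GetDefault<UAudioSettings>();
    const int32 VoicePool = AudioSettings->GetQualityLevelSettings(0).MaxChannels;
    UE_LOG(LogTemp, Log, TEXT("Voice pool at quality level 0: %d channels"), VoicePool);

    if (Wave)
    {
        UE_LOG(LogTemp, Log, TEXT("%s: NumChannels = %d (1 = mono, 2 = stereo)"),
               *Wave->GetName(), Wave->NumChannels);
    }
}
```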