Maximum Number of Sounds?

Is there a place to look up the maximum number of sounds that can play simultaneously, and whether hitting it means exceeding a hard engine parameter or something situational to the device? People (myself included) hit this limit at times, but I can't find a specific number anywhere, or what exactly one does to reduce the count (mixing outputs, or something else?). I'm especially interested in how this relates to multiple MetaSounds, though it usually seems to concern conventional sound classes.

I was just starting on a MetaSound Source to handle numerous one-shots in a single instance and realized this might not solve the problem until I fully define what the problem is and whether it's consistent.

Project Settings → Engine → Audio → Quality → Max Channels?
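
If you want to sanity-check the configured value from code, something like this rough sketch should work (the QualityLevels / MaxChannels / DisplayName property names are assumed from the engine's AudioSettings.h, so verify against your engine version):

```cpp
#include "CoreMinimal.h"
#include "Sound/AudioSettings.h"

// Log each audio quality level's channel cap. Property names
// (QualityLevels, MaxChannels, DisplayName) are assumed from the
// engine's AudioSettings.h; check them in your engine version.
void LogMaxChannelsPerQualityLevel()
{
    const UAudioSettings* Settings = GetDefault<UAudioSettings>();
    for (int32 Index = 0; Index < Settings->QualityLevels.Num(); ++Index)
    {
        const FAudioQualitySettings& Quality = Settings->QualityLevels[Index];
        UE_LOG(LogTemp, Log, TEXT("Quality %d (%s): MaxChannels = %d"),
            Index, *Quality.DisplayName.ToString(), Quality.MaxChannels);
    }
}
```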

That is the solution, so I will mark it as such.

But another important part would be knowing whether mixing down prior to output (e.g., within a MetaSound) reduces the number of channels you actually use.

The thought I had is this: if a common instance of a MetaSound handled a large number of sounds for a specific application (e.g., gunshot sounds), would the fact that they are all mixed into one two-channel output reduce the channel footprint, or would using numerous WAVs to produce that output defeat the approach?

I would love to know this too…

I did confirm the answer to the mixing question: a single MetaSound mixes down to where it ties up only a single channel. For example, I have all of the player's sounds (guns, footsteps, weather, music, etc., the things normally thought of as 2D sounds) in a single MetaSound, and it takes up only a single slot when examined with the console command au.debug.soundwaves 1. In a similar manner, each AI is set up with its own single MetaSound for gunshots, footsteps, dialog, and any sound that emanates from it in an attenuated/spatialized manner. This again means a single AI will tie up only one channel.

Bottom line: if you have an application where several, even unrelated, sounds are played by an actor, they need only take up a single “channel” as defined in the solution post above.
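
For illustration, here is a minimal C++ sketch of that pattern (the asset and parameter names are hypothetical; the equivalent Blueprint nodes work the same way):

```cpp
#include "CoreMinimal.h"
#include "Components/AudioComponent.h"
#include "Kismet/GameplayStatics.h"
#include "Sound/SoundBase.h"

// Spawn the player's 2D mix once (e.g., on BeginPlay). This single
// component is the only "channel" the whole player mix occupies.
UAudioComponent* SpawnPlayerMix(UObject* WorldContext, USoundBase* PlayerMixMetaSound)
{
    return UGameplayStatics::SpawnSound2D(WorldContext, PlayerMixMetaSound);
}

// Fire a one-shot by pulsing a trigger input on the running graph
// instead of spawning a new sound. "Gunshot" is a hypothetical
// trigger input exposed by the MetaSound.
void PlayGunshot(UAudioComponent* PlayerMix)
{
    if (PlayerMix && PlayerMix->IsPlaying())
    {
        PlayerMix->SetTriggerParameter(FName(TEXT("Gunshot")));
    }
}
```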

There are additional things to think about when doing this.

  1. If a sound needs panning but gets grouped with several sounds that aren't panned, it must be manually panned within the MetaSound. An example is the player's footsteps: even though they are lumped in with 2D sounds, the left and right footsteps must be manually biased toward the appropriate ear (a code sketch covering these points follows the list).

  2. If sounds are grouped under an AI using the same method to save channels, it may be necessary to spawn/destroy its Audio Component and MetaSound dynamically. If this is not done, every AI will take up a slot/channel permanently, even when silent. With many AIs, this will still tie up channels and force the engine to cull some sounds.

  3. The last one is similar to number 1. When you merge several sounds into a single MetaSound, remember that the MetaSound must be fed gain values so it can implement your gain controls internally rather than depending on a Submix. An example is allowing the player granular control over Music, Effects, and Dialog gain settings. If a footstep sound (an Effect) and a music track (Music category) reside in the same MetaSound, that MetaSound must be told these gain values for its internal Stereo Mixer, because only a single Submix can be specified for the MetaSound, and automatic gain controls won't affect the individual sounds of different gain categories.
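
A rough C++ sketch of points 1–3 (all parameter names here are hypothetical inputs you would expose on the MetaSound graph):

```cpp
#include "CoreMinimal.h"
#include "Components/AudioComponent.h"
#include "Kismet/GameplayStatics.h"
#include "Sound/SoundBase.h"

// Point 2: spawn the AI's MetaSound component only when it has
// something to play; with bAutoDestroy it frees its slot as soon as
// the graph's On Finished fires, instead of holding a channel forever.
UAudioComponent* SpawnAIVoice(USceneComponent* AttachTo, USoundBase* AIMetaSound)
{
    UAudioComponent* Voice = UGameplayStatics::SpawnSoundAttached(AIMetaSound, AttachTo);
    if (Voice)
    {
        Voice->bAutoDestroy = true; // the default, but made explicit here
    }
    return Voice;
}

// Points 1 and 3: since everything shares one component, panning and
// per-category gain must be pushed into the graph as input parameters.
// All parameter names here are hypothetical.
void PlayFootstep(UAudioComponent* Voice, bool bLeftFoot,
                  float MusicGain, float EffectsGain)
{
    if (!Voice) return;
    Voice->SetFloatParameter(FName(TEXT("FootstepPan")), bLeftFoot ? -1.0f : 1.0f);
    Voice->SetFloatParameter(FName(TEXT("MusicGain")), MusicGain);
    Voice->SetFloatParameter(FName(TEXT("EffectsGain")), EffectsGain);
    Voice->SetTriggerParameter(FName(TEXT("Footstep")));
}
```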

The thing that confused me about figuring out how many slots/channels a sound was using was the terminology. From the perspective of the default 32-channel limit, the term “channel” has no bearing on the concept of stereo. Even though you produce a panned stereo output within a MetaSound, only a single channel is occupied. In other words, it's not about left and right channels; a “channel” really corresponds to an active Audio Component, not a stereo channel. Heavy use of au.debug.soundwaves 1 shows where you stand very clearly in this respect.

Thank you so much eagletree! This helped me a lot! I used the debug command you suggested and now understand that the sounds linger even when silent and still take up slots/channels. Now I'm just figuring out how best to destroy the sounds when they finish playing. Thank you for these detailed answers; you help me and many others! :smiley:

Hello again :smiley: I now understand that the “On Finished” node should always be connected to “On Finished” inside the MetaSound if it is supposed to stop and go away after it has finished playing. I still have some problems with this. When I spawn the sound into the world I check “Auto Destroy”, and I read that this should make the sound go away and stop taking up extra slots when it is finished playing. For some reason this is not working for me. I always need to add a delay as long as the sound, then use “Is Valid” and “Stop” to make the MetaSound go away from the world… I must be missing some basic understanding of this. Does anyone know what that might be?
Here are pictures:
I use “On Finished” inside the MetaSound:
[screenshot]

I have Auto Destroy:
[screenshot]

this is how I now stop the sound:

Sorry, I don't have an answer for your exact use case. We use Spawn Sound Attached for all our MetaSounds and Spawn Sound at Location for our legacy Sound Cues. You might test whether the sound finishes normally with Spawn Sound Attached, since that does seem to work for us.
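
If it helps to narrow things down, here is a minimal C++ sketch of that test (illustrative only; the equivalent Blueprint nodes behave the same):

```cpp
#include "CoreMinimal.h"
#include "Components/AudioComponent.h"
#include "Kismet/GameplayStatics.h"
#include "Sound/SoundBase.h"

// Spawn a one-shot attached and log when the engine actually sees it
// finish. If the log never appears, the MetaSound's On Finished output
// is probably not firing, so Auto Destroy has nothing to react to.
void PlayOneShotAttached(USceneComponent* AttachTo, USoundBase* OneShotMetaSound)
{
    // bAutoDestroy defaults to true for spawned sounds, so the
    // component should release its slot as soon as playback finishes.
    UAudioComponent* OneShot =
        UGameplayStatics::SpawnSoundAttached(OneShotMetaSound, AttachTo);
    if (OneShot)
    {
        OneShot->OnAudioFinishedNative.AddLambda([](UAudioComponent*)
        {
            UE_LOG(LogTemp, Log, TEXT("One-shot finished, slot released."));
        });
    }
}
```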