Soundclass vs submix

Hi. Recently the team switched from an earlier version of Unreal to 4.27.

I’m a bit confused about soundclasses vs submixes.

I can see many cool aspects to the new submix system (the DSP stuff), but then again, there is no way to apply passive mix modifiers to submixes (although that could be built in Blueprints fairly easily).

What is the role of submixes, now and in the future?
Is the submix system meant to make the soundclass system redundant? It seems to me like you could make an entire project without ever using soundclasses.

I would like to know a little more about this, so I know how to approach the whole situation (if there is a situation). Should I slowly start to migrate my system based on sound classes to a system using submixes instead (because the soundclass system will be unsupported in the future)?

How do submixes differ from sound classes? Why was the whole DSP thing not just added to soundclasses instead of creating new submixes?
It seems we have two semi-parallel systems (soundclasses and submixes) and they kind of step on each other’s toes. When should I use one and when should I use the other?

Thanks:)


Anyone? :)

This thread here might cast more light on the future of soundclasses, which to me don’t seem to be disappearing.

I can only speak from my limited experience working in Unreal, but a fairly robust career in audio. It seems to me that the new audio approach is also trying to cater more to procedural approaches, and that control buses can be used as “faders/tracks” for individual “instruments” that can then be grouped and mixed into control bus mixes. The sound classes and sound mixes can then be used to mix and control a number of different CB mixes. The analogy in mixing would be tracks grouped into aux returns (CB -> CB mix), with those track and return levels controlled by VCA faders (sound class -> sound class mix). Both the CB mixes and SC mixes can then be used as “snapshots” to quickly jump between mix states. Snapshot mixing is very common in live sound (FOH/monitor mixing) and post. Just my 2 cents.

Thanks. I would really like to know more about this. I am poking at UE5 and using MetaSounds, and it seems that you have the exact same thing: soundclasses and submixes.

Would love to get a real-world scenario that explains where and why I should use each.

A Soundclass is a way to categorise and organise groups of sounds, to enable you to carry out actions on every sound in that class; for example, all sounds in this class are affected by audio volumes.

A submix is more analogous to an aux send, and compiles multiple audio sources into a single output buffer. This means you can add DSP to multiple sounds at the same time.
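Something like this, as a rough C++ sketch of that idea (hedged: AMyAudioManager, WeaponsSubmix, and WeaponsLowPassPreset are hypothetical names, and the exact Blueprint-library signatures can differ between engine versions):

	// Sketch: process a whole group of sounds once, at the submix level,
	// instead of once per source. Asset and class names are hypothetical.
	#include "Sound/SoundSubmix.h"
	#include "AudioMixerBlueprintLibrary.h"

	void AMyAudioManager::MuffleWeapons()
	{
		// One DSP instance runs on the already-mixed weapon audio...
		UAudioMixerBlueprintLibrary::AddSubmixEffect(this, WeaponsSubmix, WeaponsLowPassPreset);

		// ...and one volume change affects every source sent to this submix.
		WeaponsSubmix->SetSubmixOutputVolume(this, 0.5f);
	}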

I don’t think there has been a major difference between 4.27 and 5 in the way these work.

I know in 5 I will use sound classes a lot less; I previously used them as part of the mix process, but now I will be using audio modulation.

That being said, they are still useful; I just keep them to higher-level categories.

Might be worth noting that audio modulation can be accessed in the MetaSounds graph. This opens up some pretty wild ways of working with the mix, e.g., a weapon manager where all gunshot and weapon sound levels are controlled from one control bus mixer via one MetaSound. You can also mix control buses, and they can be much more than just audio levels.
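For a taste of what driving a control bus from code can look like, here’s a minimal sketch (WeaponsBus is a hypothetical USoundControlBus asset, and the AudioModulationStatics calls may differ slightly between engine versions):

	// Sketch: every MetaSound/source whose volume modulator references
	// WeaponsBus follows this one value; no per-sound bookkeeping needed.
	#include "AudioModulationStatics.h"
	#include "SoundControlBus.h"

	void AMyAudioManager::DuckWeapons()
	{
		// Fade all weapon levels to 25% over half a second.
		UAudioModulationStatics::SetGlobalBusMixValue(this, WeaponsBus, 0.25f, 0.5f);
	}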


Cool. Will have to look into audio modulation more; have never looked at it before. Any good YT videos? (Will of course search myself as well.)

And yes, having a weapons manager (or whatever) within one MetaSound seems amazing! All sorts of cool weirdness could come from that :)

Had a look at this video: Audio Modulation Mixing in UE4.26 Preview 1 - YouTube
It didn’t really offer any new stuff that you can’t do in sound classes already (but I’m sure you can, it just wasn’t demonstrated).

Thanks :)

Just a scratch on the surface but might be useful as a start.
UR5 Metasound weapons manager with control bus modulation

Thanks. Will have a look.
I can’t help but feel that, while MetaSounds is very cool, it is also overly complex. Unless you are making a “generate music inside the game” game, I can’t see why things need to be so complex.

just my 2 cents…
But please keep the videos coming :)

Hi again. How do you practically use the weapons manager? I plan on making several MetaSound Sources (all weapons), routed into a MetaSound (not a Source, I guess), and in that way create submixes (in this case a weapons submix: not the normal submixes, but using a MetaSound as a submix).
From here I could manipulate volume/filter etc. on a whole group of sounds at the same time, via control bus modulation. But how do I practically do that? Do I need the MetaSound submix running in the level along with the actual weapon sounds? I have tried different things, but nothing seems to work.

I could do the control bus modulation (volume/filter etc.) on the weapons themselves, but then I end up having a filter on each weapon, which seems like overkill.

Thanks


I guess it depends on the purpose of the filter. If it’s for attenuation, I don’t think it’d work, as the MetaSound would need to be attached to something with a distance relation to the player, so using a manager would be a bit redundant. Although if you had a turret (or gun system of some sort) which used different types of ammo, i.e., different sounds, then you could use a “manager” to control this. For attenuation, though, you probably want to use the interface instead (there’s a Dan Reynolds vid on that).

I think MetaSounds is similar to Max or modular synthesis in that it offers you a variety of possibilities to solve a design, but there’s no one specific way to do it. Whatever works for your game is the best solution.

Depends on what you mean by “using a MetaSound for a Submix”! To be clear: you can already control the volume/filter/etc. on a normal Submix via its effect chain.

One of the major differences between Submixes and Sound Classes is that Submixes act on already-mixed audio. I.e., when you add or alter a filter on a Submix, or change its volume, you are applying that processing exactly once, to the already-mixed sum of all audio assets sent to that Submix. When you use Sound Classes, every single source assigned to that Sound Class has its filter/volume/etc. changed individually. I.e., applying changes to lots of different sound sources via a Submix is often far less CPU-intensive than via Sound Classes. Sound Classes do have some value in terms of grouping like sounds together, but in most cases, if you’re doing processing that can be done after the sources are spatialized and mixed together, I’d recommend Submixes.

In terms of controlling a whole group of sounds via MetaSounds: honestly, I usually recommend either giving each relevant Sound Source access to the same modulation control buses (so changing the source modulation will change the parameters on those Sound Sources simultaneously), or sending the MetaSounds to a standard Submix and altering the DSP parameters there. I’d double-check whether either of those works first. It depends a bit on where you need that processing to happen in the DSP chain. For stuff like pitch, usually you’re going to want to do that per source. For stuff like reverb, usually you’re going to want to do that on the Submix.

So, Submix processing happens after MetaSounds, meaning you can’t alter an Unreal-style Submix in a MetaSound. (That said, while I admit it’s been a while since I used Modulation, I can’t think of a reason you wouldn’t be able to alter Submix parameters in Blueprints by querying a control bus’s parameters.) So if you really, really want to control a bunch of sounds at once in a MetaSound, what you’re going to need to do is have a MetaSound Source that mixes all the relevant sounds and then does its processing. Which does mean, yes, it would need to be actively playing all the time. It’ll probably also need to own each of the sounds you want it to control and handle spatialization of each of them itself, because to my knowledge we don’t currently have a pipeline for routing audio from an Audio Component into the audio input of a MetaSound (but now that I say that, we should make that, that’d be sweet).

Tl;dr, your main options are:

  • Use the same control bus for all MetaSounds you want to alter. Good for stuff that’s best done per source, like pitch.
  • Have all relevant MetaSounds send to the same Submix, and alter the parameters on the Submix effect chain via Blueprints (see the sketch after this list). Good for stuff that can easily be done on an already-mixed stream, like volume, or things that are very expensive to do per source, like reverb.
  • If you’re doing something really fancy, you could have a single MetaSound mix the outputs of the relevant MetaSounds for you. Because of where MetaSound rendering happens in the pipeline, this means you would have to find a way to handle stuff like the location of individual sound sources yourself. It’s not impossible, but it’s probably overkill if you’re mostly looking to do stuff like alter filter params.
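To make the second option concrete, here’s a hedged sketch of altering a Submix effect chain from code (MusicReverbPreset is a hypothetical USubmixEffectReverbPreset asset sitting in the Submix’s effect chain; double-check field names against your engine version):

	// Sketch: all relevant MetaSounds send to one Submix, and we change the
	// reverb settings once there instead of per source. Names hypothetical.
	#include "SubmixEffects/AudioMixerSubmixEffectReverb.h"

	void AMyAudioManager::SetBigRoomReverb()
	{
		FSubmixEffectReverbSettings Settings;
		Settings.DecayTime = 8.0f; // long tail
		Settings.WetLevel = 0.8f;  // mostly wet

		// Pushes the new settings to the running DSP instance on the Submix.
		MusicReverbPreset->SetSettings(Settings);
	}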

In terms of controlling a whole group of sounds via MetaSounds

Not as a counter, but just a small addition to this: for sounds that are attached to the player (footsteps, etc.) I find it useful to have the MetaSound Sources in a separate MetaSound “manager” and control the levels with control buses. It’d be the same as having the modulators in the individual MetaSound graphs, but this way they’re in one place. Maybe more cosmetic, but it helps me keep things tidy and it’s quicker to move things around.


Thanks:

Use the same control bus for all MetaSounds you want to alter. Good for stuff that’s best done per source, like pitch.

Guess this is the route I will take: the more expensive route (with filters etc. on every source).

Have all relevant MetaSounds send to the same Submix, and alter the parameters on the Submix effect chain via Blueprints. Good for stuff that can easily be done on an already-mixed stream, like volume, or things that are very expensive to do per source, like reverb.

The problem here is that I can’t seem to audition filters etc. inside the editor. I have to press Play before I can hear them. I would like to play a sound, apply filtering, and hear it at the same time, without hitting Play every time.

  • If you’re doing something really fancy, you could have a single MetaSound mix the outputs of the relevant MetaSounds for you. Because of where MetaSound rendering happens in the pipeline, this means you would have to find a way to handle stuff like the location of individual sound sources yourself. It’s not impossible, but it’s probably overkill if you’re mostly looking to do stuff like alter filter params.

Yeah, that seems too big of a task.

All in all, thanks for the input :) I will probably go with the first method, since I can audition it in real time in the editor without hitting Play, even though it is more expensive CPU-wise. But since I have always been using sound classes, and they use pretty much the same approach, I guess CPU power will never be an issue.

Thanks :)

Oh, you can preview filters that way, though! I assume you’re talking about the fact that when previewing a sound outside of PIE, filters from Submixes etc. aren’t applied? That’s the functionality a lot of people prefer, so they can hear the raw wave, but you can alter that on your end pretty quickly. There should be a section in your SoundWave.cpp file that looks kind of like this:

	// Copy over the source bus send and data
	if (!WaveInstance->ActiveSound->bIsPreviewSound)
	{
		//Parse the parameters of the wave instance
		WaveInstance->bEnableBusSends = ParseParams.bEnableBusSends;
		
		// HRTF rendering doesn't render their output on the base submix
		if (!((WaveInstance->SpatializationMethod == SPATIALIZATION_HRTF) && (WaveInstance->bSpatializationIsExternalSend)))
		{
			if (ActiveSound.bHasActiveMainSubmixOutputOverride)
			{
				WaveInstance->bEnableBaseSubmix = ActiveSound.bEnableMainSubmixOutputOverride;
			}
			else
			{
				WaveInstance->bEnableBaseSubmix = ParseParams.bEnableBaseSubmix;
			}
		}
		else
		{
			WaveInstance->bEnableBaseSubmix = false;
		}
		WaveInstance->bEnableSubmixSends = ParseParams.bEnableSubmixSends;

		// Active sounds can override their enablement behavior via audio components
		if (ActiveSound.bHasActiveBusSendRoutingOverride)
		{
			WaveInstance->bEnableBusSends = ActiveSound.bEnableBusSendRoutingOverride;
		}

		if (ActiveSound.bHasActiveSubmixSendRoutingOverride)
		{
			WaveInstance->bEnableSubmixSends = ActiveSound.bEnableSubmixSendRoutingOverride;
		}
	}
	else //if this is a preview sound, ignore sends and only play the base submix
	{
		WaveInstance->bEnableBaseSubmix = true;
	}

You should be able to just make that treat sounds labeled bIsPreviewSound the same way they’re treated in PIE. (If you mean you’re having trouble with RTA, that’s a little more elaborate; I’m guessing something’s not getting reflected somewhere. But you can change the Submix effect parameters during play and hear the immediate effect, or at least you can on my end. A lot of people I know make a little audio sublevel specifically for auditioning stuff like that; it’s not as if you need to stop the PIE session for every Submix change you make.)
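For illustration, a minimal version of that local tweak might look like this (just a sketch against your own engine source, not an official setting; bTreatPreviewAsNormal is a hypothetical toggle):

	// Sketch: let preview sounds take the same routing path as PIE sounds,
	// so Submix sends and effects are audible when auditioning in-editor.
	const bool bTreatPreviewAsNormal = true; // hypothetical local toggle

	if (bTreatPreviewAsNormal || !WaveInstance->ActiveSound->bIsPreviewSound)
	{
		// ... the full routing block quoted above now runs for preview sounds too ...
	}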


Hi. Thanks :) I probably didn’t explain myself correctly (or maybe I misunderstood your answer) :)
I would like to be able to audition filters (distance-based), volume, and submix effects (including non-distance-based filters) without having to hit Play in the Unreal editor (PIE).

What I am trying to achieve:
I have an audio programmer and I would love for him to build this functionality:
Trigger mix settings (submix levels, effect settings, etc.) from within the editor, NOT in PIE mode.

Those mix settings I would have in an array, and I would name them.
Then have one mix for “Player under water”, another mix for “Player low health”, etc.
And then be able to activate the “under water” mix setting, play any sound in the project, and hear how a specific sound would sound with the “under water” mix applied.
(And then of course apply them at certain places/under certain conditions in game).

Can that be done with submixes? I would love to go the submix route, since that seems to be the “correct” way to do things (applying effects to groups instead of to every source).

But if it can’t be auditioned in the editor without going into PIE, then I will have to go with the modulation bus route instead, because there I do have the option to audition stuff without PIE, by using the control bus mix (here you can activate/deactivate several mix snapshots).
That again seems OK, since apparently this is the way sound classes have worked all along, but I would still prefer the submix route.
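For reference, activating/deactivating those snapshots from code is pretty compact. A hedged sketch (UnderwaterBusMix is a hypothetical USoundControlBusMix asset, and these AudioModulationStatics calls may vary between versions):

	// Sketch: toggle a control bus mix "snapshot" on and off.
	#include "AudioModulationStatics.h"
	#include "SoundControlBusMix.h"

	void UMyMixManager::EnterUnderwater(const UObject* WorldContext)
	{
		UAudioModulationStatics::ActivateBusMix(WorldContext, UnderwaterBusMix);
	}

	void UMyMixManager::ExitUnderwater(const UObject* WorldContext)
	{
		UAudioModulationStatics::DeactivateBusMix(WorldContext, UnderwaterBusMix);
	}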

Thanks a lot for replying :)

@lantz.anna_1
Continued:
I tested, and I can audition filtering/levels on submixes etc. in real time without pressing Play (PIE). Cool!
But the problem is that I would like to have several “snapshots” (or arrays) of settings for different scenarios; for instance, an “under water” submix filter setting, and be able to audition that as well.

I would like to have a whole array of different settings for volume/filters etc. and then test them without going into PIE.

Example:

I have many different scenarios in the game.
- One could be “down state” (when the player is dead): muffling several sounds by applying filters to several specific submixes. I also turn down the volume of some submixes and turn up others.

- Another could be “under water”: muffling other submixes by applying a different type of filtering to them. I also turn down the volume of some submixes and turn up others.

All the filtering (or whatever) will be done via submix presets on specific submix groups.

So I want to be able to audition the “under water” scenario and the “down state” scenario without going into PIE.

How would I achieve that? I could build a Blueprint and have all the functionality in it (the different submix states: under water and down state), but I don’t know if I can run that Blueprint without actually running the game first. And that prohibits me from auditioning submix effect presets without going into PIE.

A completely different way of doing this would be via the control bus mix, which I can activate etc. without going into PIE (it works fine now), but then I have to have connections to it on each MetaSound Source.

Thanks!

Well, I honestly can’t think of any way to do that outside of PIE; you do need PIE to run Blueprints. But I think the more pertinent question might be why you want to avoid going into PIE in the first place. That’s a really big workflow and performance change just to avoid a button click, and I want to double-check there aren’t some preconceptions about PIE driving that decision, or something there might be another workaround for.

I do want to emphasize that you do not have to start and restart PIE for each change to existing Submixes: you can alter the Submix settings and effect chains while still in PIE, hear the change immediately, and those changes will persist after stopping PIE. Is this about the sheer number of sounds you want to be able to test with these different settings? What I sometimes see people do is change the Sound on the running Audio Component; you can do this during PIE, via the World Outliner. That lets you switch to any sound in the project, without stopping the PIE session, while these Submix presets are still active.
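One hedged idea for speeding that up inside a PIE session (every name below is hypothetical): keep each mix state in a small struct and apply it with one Blueprint-callable function, so switching between “under water” and “down state” is a single call:

	// Sketch: a named "mix state" applied to several submixes in one call.
	// All type/function names here are made up; SetSubmixOutputVolume is the
	// BlueprintCallable function on USoundSubmix (availability may vary).
	#include "Sound/SoundSubmix.h"

	USTRUCT(BlueprintType)
	struct FSubmixMixState
	{
		GENERATED_BODY()

		// Each submix this state touches, mapped to its target volume.
		UPROPERTY(EditAnywhere)
		TMap<USoundSubmix*, float> SubmixVolumes;
	};

	void UMyMixStateLibrary::ApplyMixState(const UObject* WorldContext, const FSubmixMixState& State)
	{
		for (const TPair<USoundSubmix*, float>& Pair : State.SubmixVolumes)
		{
			Pair.Key->SetSubmixOutputVolume(WorldContext, Pair.Value);
		}
	}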


@lantz.anna_1
True, I can place a sound in the world and swap it out while in PIE. Seems like an OK workaround.

It would be a lot simpler, though, to just be able to press play on a sound, change the submix settings via a preset handler, and hear it straight away, instead of having to place it in the world first.
Hope that functionality comes in the future.

But thanks for helping me out :) We will see whether I go the control bus route or the submix route.

Another question about submixes/sound classes:

In the old cue system (the legacy, non-MetaSound system), I had access to a Sound Class node. There I could send to different sound classes from within the cue, depending on different logic input parameters (a Branch node, for instance, within the cue). I used that to send a firing sound to one sound class if it was played by the actual player, or to another class if it was played by anyone other than the player (a multiplayer scenario).

Then by using that system and passive sound mixes, I could duck the firing sound of all other players to make sure that I could always hear my own weapon the most.

How would I go about doing that with submixes?

From within a MetaSound, I don’t have access to different submixes or submix sends, so I can’t select, based on a variable, which submix the MetaSound should be sent to. It seems to be static?

Thanks


Wanna clarify something real fast: you can still use Sound Cues! You will continue being able to use Sound Cues! There’s a decent chance we’ll make them optional at some point by putting them in a separate plugin, and we’re probably not going to spend many dev resources on improving Sound Cues or integrating them with newer functionality that comes along. But you will still be able to use them; so many projects already rely on Sound Cues that we can’t just ask developers to manually remake all of them. The only case where we’d fully deprecate Sound Cues is if we had a tool that could convert all existing Sound Cues into MetaSounds with all functionality maintained.

(For the record: if you’re curious why it sometimes seems like the Unreal Audio Engine has multiple semi-parallel systems, this is a large part of why! Because Unreal has a lot of users starting projects in many different branches, with the expectation that moving to a later version is reasonable, it’s extremely difficult to fully remove functionality without wrecking users’ existing work. And especially because we use Unreal’s Audio Mixer for the games we make in house, we are very aware of how much of a pain having to reimplement lots of assets would be. So we make decisions that don’t force people to rebuild things! Which means we sometimes end up with a fair amount of tech debt.

This is also part of what’s going on with Sound Classes vs. Submixes. Sound Classes have a couple of deep architectural limitations that fundamentally prevent them from being expanded into the type of work Submixes can do. Most notably, Sound Classes do not do any mixing themselves, nor do they actually receive audio; they just send messages to the sounds connected to them. But there are also some things Sound Classes do that we just don’t have fully implemented replacements for yet, such as setting loading behavior and channel routing for large groups of sounds at once. And again, projects were already using them, and we didn’t want to break that.)

Now, there are a lot of things Sound Cues cannot do, because they’re limited by the game thread. So you cannot, for example, get sample-accurate timing within a Sound Cue, and stuff like Delays and Loops will always have a bit of a time gap. But yeah, Sound Class Nodes (and Attenuation Nodes and especially Quality Nodes) don’t have a direct equivalent in MetaSounds yet; those are pretty decent reasons to use Sound Cues in cases where you’re really reliant on that functionality. And you can set the Default Submix on your Sound Classes, too! I.e., you could have a Player Sound Class with a Default Submix of Player Submix, and an Opponent Sound Class with a Default Submix of Opponent Submix. Then you could use the Sound Class Node to get your sounds into the right category, but still rely on their assigned Submixes to do the actual mixing and effects.

In terms of doing this with MetaSounds: you can edit the default Submix and the Submix sends of a MetaSound in the Source settings. But you’re right that we don’t have anything like a Sound Class Node in MetaSounds that would allow switching Submix sends based on information in the MetaSound graph. You could still alter the Submix sends through Blueprints, however: you can override Submix sends on the playing Audio Component. Something like “if X is true, then override the Audio Component’s Submix Sends to Blah” could work.
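As a rough sketch of that in C++ (PlayerSubmix, OpponentSubmix, and FireAudioComponent are hypothetical; SetSubmixSend is the Audio Component override mentioned above):

	// Sketch: route a firing sound to a different submix depending on who
	// fired it, mirroring the old Sound Class Node + passive mix setup.
	#include "Components/AudioComponent.h"
	#include "Sound/SoundSubmix.h"

	void AMyWeapon::PlayFireSound(bool bIsLocalPlayer)
	{
		// FireAudioComponent: a UAudioComponent already on this weapon.
		FireAudioComponent->SetSubmixSend(bIsLocalPlayer ? PlayerSubmix : OpponentSubmix, 1.0f);
		FireAudioComponent->Play();
	}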
