Depends on what you mean by “using a MetaSound for a Submix”! To be clear: you can already control the volume, filter, etc. on a normal Submix via its effect chain.
One of the major differences between Submixes and Sound Classes is that Submixes act on already-mixed audio. I.e., when you add a filter to a Submix (or alter one, or change its volume), you are applying that processing exactly once, to the already-mixed sum of all audio sent to that Submix. A Sound Class, by contrast, changes the filter/volume/etc. of every single source assigned to it, individually. That means applying changes to lots of different sound sources via a Submix is often far less CPU intensive than via Sound Classes. Sound Classes do have some value in terms of grouping like sounds together, but in most cases, if you’re doing processing that can be done after the sources are spatialized and mixed together, I’d recommend Submixes.
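For linear effects (filters, volume) you can see why the two routes cost so differently: filtering each source and then summing gives the same result as summing first and filtering once, but the per-source route runs the effect N times. A minimal Python sketch of the math (plain DSP, not Unreal code):

```python
# Conceptual sketch: why Submix processing is cheaper than Sound Class
# processing for linear effects. Not Unreal API -- just plain math.

def one_pole_lowpass(signal, a=0.2):
    """Simple linear low-pass: y[n] = a*x[n] + (1-a)*y[n-1]."""
    y, prev = [], 0.0
    for x in signal:
        prev = a * x + (1 - a) * prev
        y.append(prev)
    return y

def mix(signals):
    """Sum all sources sample-by-sample, like a Submix does."""
    return [sum(samples) for samples in zip(*signals)]

sources = [
    [0.5, 0.2, -0.1, 0.4],   # source A
    [0.1, -0.3, 0.2, 0.0],   # source B
    [-0.2, 0.1, 0.3, -0.1],  # source C
]

# Sound Class style: filter every source individually (3 filter passes).
per_source = mix([one_pole_lowpass(s) for s in sources])

# Submix style: mix first, then filter the sum once (1 filter pass).
post_mix = one_pole_lowpass(mix(sources))

# For a linear effect the outputs match; the Submix route just does
# the work once instead of once per source.
assert all(abs(a - b) < 1e-9 for a, b in zip(per_source, post_mix))
```

This equivalence is exactly what breaks down for per-source stuff like pitch shifting, which is why that has to stay on the source side.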
In terms of controlling a whole group of sounds via MetaSounds: honestly, I usually recommend either giving each relevant Sound Source access to the same modulation control buses (so changing the source modulation will change the parameters on all of those Sound Sources simultaneously), or sending the MetaSounds to a standard Submix and altering the DSP parameters there. I’d double-check whether either of those works first. It depends a bit on where you need that processing to happen in the DSP chain: for stuff like pitch, you usually want to do it per source; for stuff like reverb, you usually want to do it on the Submix.
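The control-bus option boils down to a fan-out: many sources read one shared parameter, so a single write retargets all of them at once. A sketch of that pattern in plain Python (class and method names here are hypothetical, not Unreal’s Audio Modulation API):

```python
# Conceptual sketch of the modulation-bus pattern: one shared control
# value that many sound sources read. Names are hypothetical, not
# Unreal's Audio Modulation API.

class ControlBus:
    """A single shared parameter, e.g. a pitch multiplier."""
    def __init__(self, value=1.0):
        self.value = value

class SoundSource:
    """A source whose effective pitch follows the bus it subscribes to."""
    def __init__(self, base_pitch, bus):
        self.base_pitch = base_pitch
        self.bus = bus

    def effective_pitch(self):
        return self.base_pitch * self.bus.value

pitch_bus = ControlBus(1.0)
sources = [SoundSource(220.0, pitch_bus), SoundSource(440.0, pitch_bus)]

# One write to the bus changes the parameter on every subscribed source;
# you never touch the sources individually.
pitch_bus.value = 0.5
assert [s.effective_pitch() for s in sources] == [110.0, 220.0]
```

Note this is still per-source processing under the hood (each source applies the value itself), which is why it fits per-source parameters like pitch rather than heavy effects like reverb.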
Submix processing happens after MetaSounds, so you can’t alter an Unreal-style Submix from inside a MetaSound. (That said, while I admit it’s been a while since I used Modulation, I can’t think of a reason you wouldn’t be able to alter Submix parameters in Blueprints by querying a control bus’s parameters.) So if you really, really want to control a bunch of sounds at once in a MetaSound, what you’re going to need is a single MetaSound Source that mixes all the relevant sounds and then does its processing. Which does mean, yes, it would need to be actively playing all the time. It’ll probably also need to own each of the sounds you want it to control and handle spatialization of each of them itself, because to my knowledge we don’t currently have a pipeline for routing audio from an Audio Component into the audio input of a MetaSound (but now that I say that, we should make that, that’d be sweet).
Tl;dr, your main options are:
- Use the same control bus for all MetaSounds you want to alter. Good for stuff that’s best done per source, like pitch.
- Have all relevant MetaSounds send to the same Submix, and alter the parameters on the Submix effect chain via Blueprints. Good for stuff that can easily be done on an already-mixed stream, like volume, or things that are very expensive to do per source, like reverb.
- If you’re doing something really fancy, you could have a single MetaSound mix the outputs of the relevant MetaSounds for you. Because of where MetaSound rendering happens in the pipeline, this means you’d have to handle stuff like the location of individual sound sources yourself. It’s not impossible, but it’s probably overkill if you’re mostly looking to alter filter params.
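To make the last option concrete: the single mixer MetaSound ends up owning per-source spatialization (sketched here as a simple stereo pan) before summing and running the shared processing once. A rough Python sketch of that shape, not actual MetaSound nodes:

```python
# Conceptual sketch of option 3: one "mixer" that owns every source,
# applies per-source spatialization (here just a stereo pan), sums,
# then runs shared processing once on the mix. Not MetaSound nodes.

def pan(signal, position):
    """position in [-1, 1]: -1 = hard left, +1 = hard right."""
    left_gain = (1.0 - position) / 2.0
    right_gain = (1.0 + position) / 2.0
    return ([x * left_gain for x in signal],
            [x * right_gain for x in signal])

def mix(channels):
    return [sum(samples) for samples in zip(*channels)]

def lowpass(signal, a=0.2):
    y, prev = [], 0.0
    for x in signal:
        prev = a * x + (1 - a) * prev
        y.append(prev)
    return y

# Each source carries its own position, because the mixer -- not the
# engine -- has to handle spatialization in this setup.
sources = [
    ([0.5, 0.2, -0.1], -1.0),  # hard left
    ([0.1, -0.3, 0.2], 1.0),   # hard right
]

lefts, rights = zip(*(pan(sig, pos) for sig, pos in sources))

# Shared processing runs once per output channel on the mixed stream.
out_left = lowpass(mix(lefts))
out_right = lowpass(mix(rights))
```

Tracking and updating those positions every frame is the part you get for free from Audio Components in the normal pipeline, which is why this route is usually overkill.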