MetaSounds: Controlling length of ducking and distance

Hi.
Let's say I have a weapon that is firing, and someone speaking at the same time.
The weapon has a long tail and fires every 4 seconds.
I want to duck the speech each time the weapon fires, but only for the first second or so. After that, the ducking effect should fade out over 0.5 seconds and return to normal (to then be triggered again after 4 seconds, or whatever the interval is).
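For what it's worth, the timing behaviour described here is just a hold-then-release envelope applied to the duck gain, independent of the weapon sound's tail. A minimal Python sketch of that logic (this is only pseudocode for the shape of the envelope, not MetaSounds graph code; the hold, release, and depth values are illustrative assumptions):

```python
def duck_gain(t, hold=1.0, release=0.5, depth=0.4):
    """Gain applied to the speech at time t (seconds after a shot).

    depth: fully ducked gain (0.4 = speech reduced to 40% level).
    Holds the duck for `hold` seconds, then fades linearly back
    to unity gain over `release` seconds.
    """
    if t < hold:
        # still inside the "first second or so": fully ducked
        return depth
    if t < hold + release:
        # linear fade from depth back up to 1.0
        return depth + (1.0 - depth) * (t - hold) / release
    # envelope finished: speech back to normal until the next shot
    return 1.0
```

In MetaSounds terms this is roughly what an AD Envelope (or ADSR) node retriggered by the firing trigger would produce, mapped onto the gain of the speech path.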

With the normal Sound Cues/Sound Classes/mix modifiers, I know how to do this. But how would I go about doing that in MetaSounds?

I could use a Control Bus Mix, but then I would need to activate a specific Control Bus Mix every time I fire a sound, keep it active for a second, then stop it, and restart it each time firing starts again. That would require setting up a Blueprint to do it, and I want to stay entirely inside MetaSounds.
How would I do that?

And what about distance? I would love to scale the ducking over distance, so that, for example, explosions that happen far away don't duck other things as much as close explosions do.
I could do that with sidechain compression on submixes, but then I can't time the ducking or control how long it actually lasts; it is always tied to the duration of the sound itself (because of the nature of sidechaining).
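The distance part can be expressed as scaling the duck depth by a simple attenuation curve before it feeds the envelope. A sketch of that mapping in Python (again just the logic, not MetaSounds code; the distance bounds and maximum depth are made-up example values, with distances in UE-style centimeters):

```python
def duck_depth_for_distance(distance, max_depth=0.6,
                            min_dist=500.0, max_dist=5000.0):
    """How much to duck (0.0 = no ducking, max_depth = full ducking)
    as a function of distance to the explosion, interpolated
    linearly between min_dist and max_dist.
    """
    if distance <= min_dist:
        # close explosion: full duck depth
        return max_depth
    if distance >= max_dist:
        # far away: no ducking at all
        return 0.0
    frac = (distance - min_dist) / (max_dist - min_dist)
    return max_depth * (1.0 - frac)
```

The result would then multiply the envelope amount, so a distant explosion triggers the same timed duck shape but with a shallower depth.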

Is there a way to do this with MetaSounds?

I would love it if we could have assignable outputs in MetaSounds, along with access to attenuation as a node. That way I could have two outputs: the first, with a given attenuation, could go to one submix carrying the actual audio, and the second, with another attenuation, could go to a different submix that handles the sidechain signals. I could then easily solve this by outputting the actual firing sound to the first submix and, in the same MetaSound, sending a "beep" (or whatever), processed via an ADSR, to the second output.
Thanks

@ebuch Maybe you have some suggestions with that clever brain of yours? :slight_smile: :grinning:

I think in order for this to work without resorting to Blueprints, you would have to have both your weapon sound and your dialogue sounds contained in the same MetaSoundSource asset.

It could look something like this:

That would make perfect sense, except with that solution both the weapon and the speech share the same output, and thereby the same world location.

I really hope they split the outputs up in the future, so you can send audio from one MetaSound to another.

But thanks @ebuch !
Let me know if you have any other thoughts :)