Metasounds suggestion thread

Hi.
I thought it would be nice to have a suggestion thread for metasounds.

I have been looking into it, and so far this is what I’m missing to be able to work in metasounds (as far as I know):

An important one for me:
-ability to send values to and from other MetaSounds. Not audio necessarily, but values (ADSR values, etc.).

-Attenuation node: to branch audio out and have part of the sound go to one specific attenuation and another part go to a different attenuation.
-Submix node: to branch audio out and have part of the sound go to one specific submix and another part go to a different submix. This could also just be the ability to have as many audio outputs as you want in a metasound, and define which submix each output should go to.
-Ability to manipulate incoming control bus values within the metasound. Right now, you can only get, not set.
-Ability to send variables out from metasound to BP.
-Distance parameter built into metasound, so you can get distance to the metasound location and do audio modulation and parameter modulation etc according to distance.

Feel free to add your own things :slight_smile:
And it would be lovely if some dev could chime in, just so we know that you have read this :slight_smile:

Addition:
-Ability to activate/deactivate control bus mixes from within metasounds (activate/deactivate control bus node), not just from BP.

-Mimic distance behavior and account for attenuation settings: Let’s say I have an explosion sound in metasounds. I want to hear what it sounds like 1000 meters away and at 1 meter away (to hear how the attenuation/stereo spread feels). - IS ALREADY IN THERE, PLEASE IGNORE THIS, DEV :slight_smile:

I should have a slider in the metasound that could mimic/trigger real, in-game distance behaviour, without me having to press play. Walk around the sound emitter. Stop. Tweak attenuation. Repeat forever until satisfied. Would love to do all of this from within metasounds.

-Spatialization/attenuation:
Ability to set custom curves for the “stereo spread” parameter. It should be modulated by distance, so you could set something like this: at 1 meter, stereo spread = 1000; at 2 meters, stereo spread = 500, etc. Not just in a linear fashion as it is now.
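For what it’s worth, the example above is just an inverse-proportional curve. A quick sketch of the mapping (plain Python, numbers taken from the example above — not engine values or a MetaSounds API):

```python
def stereo_spread(distance_m, k=1000.0):
    """Inverse-proportional spread: k/distance gives 1000 at 1 m,
    500 at 2 m, 250 at 4 m, and so on. Illustrative only."""
    return k / max(distance_m, 1e-6)  # clamp to avoid division by zero
```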

-A curve node: a module that has a float input and a curve input. The idea behind the node is that it takes a float input, for instance a linear value from 0 to 100.
You can then attach a curve to the second input, which converts the normal linear input to the custom curve that you have drawn instead. So basically a curve converter. Great for setting custom crossfade curves etc. Right now, everything is linear (or whatever the incoming float value is). There is no way to change it in metasounds.
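A minimal sketch of what such a curve converter could compute, assuming the drawn curve is stored as sorted (input, output) points with linear interpolation between them (plain Python, not engine code):

```python
import bisect

def remap_through_curve(x, curve):
    """Evaluate a piecewise-linear curve (sorted list of (x, y) points)
    at x, clamping outside the defined range. Stand-in for the proposed
    'curve node': linear float in, custom-shaped float out."""
    xs = [p[0] for p in curve]
    if x <= xs[0]:
        return curve[0][1]
    if x >= xs[-1]:
        return curve[-1][1]
    i = bisect.bisect_right(xs, x)
    (x0, y0), (x1, y1) = curve[i - 1], curve[i]
    t = (x - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)

# e.g. a fast-rising crossfade shape instead of a straight line
fade = [(0.0, 0.0), (50.0, 0.85), (100.0, 1.0)]
```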

-Submix send level node: Modulate/control submix send level from within a metasound
-Audio bus input node: send a portion of your sound to a specific audio bus. Combine this with an attenuation node and you can hook it up to a compressor sidechain on a submix effect, and you have distance-controlled ducking, along with time control (how long it should stay ducked for), because you could send only a portion of your metasound (maybe just the initial “smack” of it) to do ducking on another submix. Would be amazing.

Hey @TRJ_Audio ,

Thank you so much for taking the time to make suggestions, and for coming back a few more times to add more :). I have read through it all and some of these things are already on our radar. It’s so helpful to get feedback to help steer what we work on first, so please keep it coming.

Cheers,
Grace ( UE Product Manager for Audio )

Awesome to know that a dev has seen this. I’m sure I will write more in the future :slight_smile:

Been thinking a bit more about using float sources (within a metasound) for pushing/popping mixes.

Scenario:
I’m making a 1st person shooter.
Every time a big explosion happens close to me, I want my weapon fire sound to be ducked, either in volume or in some eq setting. (submix level and/or submix effect preset setting).
If the explosion happens 20 meters away, I want it to only duck my weapon a tiny bit.

Currently, with sound classes, pushing a mix will only happen above a certain threshold (passive sound mix modifiers) and there is no way to scale the ducking dynamically over distances (for instance having an explosion ducking a lot when you are close and ducking a lot less when it is 20 meters away).
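As a sketch of the kind of distance scaling I mean (plain Python; the -12 dB depth and 20 m range are made-up illustration numbers, not engine values):

```python
def duck_amount(distance_m, max_duck_db=-12.0, max_distance_m=20.0):
    """Duck depth in dB: full duck at 0 m, fading linearly to no duck
    at max_distance_m and beyond. Hypothetical numbers."""
    t = min(max(distance_m / max_distance_m, 0.0), 1.0)
    return max_duck_db * (1.0 - t)
```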

What I am really asking for is a better and refined/controlled way to dynamically push and pop mixes over distance. Something like the existing sound classes, but much better.

Something like this:

You have all your submixes with their volume values and submix effect presets. Let’s call that a Global Submix Effect Preset.
You make a new submix preset (a new type of submix preset that can store ALL submix settings, including submix effect presets) and you set all the new values (weapon firing submix to 0, explosion submix to 1, set the weapon firing submix effect preset EQ to dip slightly at 2 kHz). You then save that global submix preset (this saves volume and submix effect preset states) and name it “OnExplosionGlobalSubmixPreset”.

You then drag that “OnExplosionGlobalSubmixPreset” into your explosion metasound.
On play, you call it (along with your explosion sound) and connect an ADSR to control the “gain” of the “OnExplosionGlobalSubmixPreset”. The reason for this is that you only want the submix preset to be activated for the first 0.5 seconds of the actual explosion sound. You then connect this, still within the explosion metasound, to a new node class: the Attenuation node. This is a node where you can select existing attenuations.

Then I could control the amount of ducking over distance, with the attenuation. I could even have occlusion and listener focus to further sweeten/tweak how the ducking works.
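The chain described above boils down to multiplying two factors. A hypothetical sketch (plain Python; the 0.5 s window and 20 m range come from the scenario above, everything else is illustrative):

```python
def explosion_duck_amount(t_s, distance_m, duck_window_s=0.5, max_distance_m=20.0):
    """Combine an ADSR-style envelope (full effect at t=0, released
    after duck_window_s) with a distance falloff (the attenuation-node
    part). 1.0 = preset fully applied, 0.0 = no ducking."""
    env = max(0.0, 1.0 - t_s / duck_window_s)            # envelope stand-in
    falloff = max(0.0, 1.0 - distance_m / max_distance_m)  # attenuation stand-in
    return env * falloff
```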

Make sense?

Just my 2 cents:)
@GraceYen

Another one:
Have the same flow as sound cues, in the sense that if you double click a node in the sound cue, you audition the sound at that point in the graph, not only at the outputs.
Would be super useful in metasounds as well. For instance, double clicking a wave player should allow me to quickly audition the sound that is connected to it.

Another one:
Submixes.
Currently, when adjusting submix levels, it goes from -(whatever) to 0.

That means that you cannot go higher than unity gain, which means that you can’t make submixes louder. Only quieter.

So let’s say I have a number of submixes and want to control them as you would a normal mixer in a DAW. I can’t do it, because I can only attenuate, not add gain.

A partial workaround is to set your submixes to -10 (to be able to turn stuff up now and then, from -10 to 0).

But then, if you have child submixes under that submix, they all get attenuated by minus 10, and if this happens across multiple levels (children of children), everything will be lowered again for each child submix (-10 on each parent submix). There is no unity gain, so things will be lowered for each child.
Not at all ideal, and you end up with a very low signal at the master output that you have no way of turning up.
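The compounding is easy to show numerically: per-level dB offsets add, so three nested submixes at -10 each leave the leaf signal at -30 dB, about 3% of full scale. A quick sketch (plain Python, illustrative):

```python
def db_to_linear(db):
    """Convert a dB gain to a linear amplitude factor."""
    return 10.0 ** (db / 20.0)

def nested_submix_gain_db(depth, per_level_db=-10.0):
    """Total attenuation after `depth` nested submixes, each set to
    per_level_db (the -10 workaround described above)."""
    return depth * per_level_db
```

For example, `nested_submix_gain_db(3)` is -30 dB, and `db_to_linear(-30.0)` is roughly 0.032 — which is why the master output ends up so quiet.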

A pretty big problem actually.
I wanted to use this approach, but now I have to revert back to the sound class system (which will use more resources, since it is per-cue control, and not per-group control).

Also, a simple “gain trim” function on each submix is needed. Normally, when mixing, you want your faders set to unity and then use a gain trim, which is pre-fader, to get a rough mix. You can then use the faders to do fine adjustments.

If you don’t use this approach, you end up painting yourself into a corner fader-wise, and can end up in situations where you can’t turn up your faders anymore.
In a situation like this, you would use the “gain trim” to turn the overall signal down (or up).
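The gain-staging point can be stated very simply: linear gains multiply, so in dB the pre-fader trim and the fader just add, and keeping faders at unity (0 dB) preserves their whole range for fine moves. A tiny sketch (plain Python, illustrative numbers):

```python
def channel_gain_db(trim_db, fader_db=0.0):
    """Total channel gain in dB: pre-fader trim plus fader position
    (linear gains multiply, so their dB values add)."""
    return trim_db + fader_db
```

With the trim carrying the rough mix (e.g. -6 dB) and the fader at unity, a later +3 dB fader move still lands comfortably at -3 dB overall.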

Hope it makes sense.

I really like what you are doing on the audio engine. If you need any more input from me (I hold a degree in audio engineering and have been working with audio the past 12 years), I would love to give you more.
This engine is very powerful, but many things are lacking before it is fully usable without middleware (FMOD etc.) or without having a programmer make custom stuff (and I’m sure there are many things about the engine and your plans that I don’t fully understand yet).

@dan.reynolds
@GraceYen

Thanks :slight_smile:

-A way to audition distance within the metasound. Not only the logic that you hook up to a distance parameter (which you can do now), but things like attenuation and submix sends (over distance).

Currently, to audition these things, you have to test it in game.
Would be lots faster if you just had a slider that you could tweak and that would emulate attenuation, submix sends, etc., along with whatever logic you have set up inside the metasound to be controlled by distance.

-Naming modules (weapon punch, weapon tail etc).
-Naming module inputs (for instance the stereo mixer inputs to input1: weapon punch, input2: weapon tail etc)

-Have the ability to have a send on submixes.
Let’s say I send all my weapon sounds to one submix called “WeaponsFire”.
I then want to apply reverb to all of them.
I would then have a send knob on the submix “WeaponsFire” and send an amount of that into another submix called “Reverb”, which has a convolution reverb on it.

I am not simply talking about routing the output of one submix to the input of another, but sending the audio from one submix to another, like you would in a DAW (reverb sends). This of course should be able to be controlled in BP.
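The difference between rerouting a submix’s output and a DAW-style aux send can be sketched per sample (plain Python, illustrative, not engine code):

```python
def submix_with_send(dry_sample, send_level):
    """DAW-style aux send: the dry signal still goes to its own output,
    while a scaled copy is forked off to the reverb submix. Rerouting
    the output would instead move the whole signal to the reverb bus."""
    to_main = dry_sample                 # dry path is unchanged
    to_reverb = dry_sample * send_level  # scaled copy to the send target
    return to_main, to_reverb
```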

A way to have the reverb tail positioned correctly in game.

A sound plays for 0.5 seconds.
The sound’s attenuation has a “submix send over distance” to a submix which has a convolution reverb inserted on it.
The convolution reverb has a tail of 1.5 seconds.

I play in the editor and have the 0.5-second sound play on my right side. As soon as the sound has stopped playing, I can hear the reverb tail. I then move my head and the reverb tail follows my head, instead of staying at its original location.
This makes reverbs with tails longer than 0.5 seconds or so unusable.

Currently, if you have a short sound playing in your left ear and its attenuation has a submix send on it, and the submix destination has an impulse or algorithmic reverb on it, the tail follows the listener instead of staying at the sound’s original location.

A way to handle large sets of samples. Currently you need a Wave Player for every individual sample, which makes it hard to build e.g. sample-based musical instruments, or even just layered SFX.