In UE 5.3, volume changes have no effect when I play a sound from a MetaSound that uses a stereo Wave Player. I do hear the change when I press play on the sound itself in the Content Browser.
Steps to replicate:
Have a sound, and make a MetaSound Source with a Wave Player that plays that sound. Change the sound's volume in the asset's Details panel. Play the MetaSound. The volume change has no effect.
Same question here. Anyone?
Hello there.
Yes, this is as designed. However, the design isn't super clear, so I appreciate the confusion. The issue stems from historical baggage in how audio assets work in UE4 and current backward-compatibility constraints. USoundWave (the product of importing a binary .wav file) conflates runtime properties with binary asset data: on a USoundWave you can set volume, pitch scale/playback speed, spatialization settings, concurrency, and so on. Basically, you can 'play' a USoundWave in the game as if it were a single Sound Cue containing a single Wave Player node. In fact, the C++ code treats USoundWave as exactly that!
The problem with this design is that if you want to use the same underlying binary asset with different runtime settings, you have to duplicate the asset. Obviously, duplicating a large binary asset just to change a couple of simple settings (e.g. different volumes) is a horrible thing. Thus, in UE4, the established pattern was that sound designers working on projects of scale would almost never play a USoundWave directly and would only play them from within Sound Cues. Sound Cues provided a level of indirection where you could trivially change runtime settings without modifying the underlying binary asset.
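To make the indirection concrete, here is a small self-contained C++ sketch. This is an illustrative model only, not actual engine code, and all type and member names here are invented for the example. It shows why playing a wave directly forces asset duplication, while a cue-style player node can share one binary asset across different runtime settings:

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <vector>

// Illustrative model only -- not actual UE types.
// A "SoundWave" that conflates binary sample data with runtime
// settings, the way USoundWave does.
struct SoundWave {
    std::vector<float> Samples; // the large binary payload
    float Volume = 1.0f;        // runtime property baked onto the asset
};

// Playing the wave directly uses the settings stored on the asset
// itself, so two different volumes would require two copies of the
// sample data.
float RenderWaveDirectly(const SoundWave& Wave, std::size_t Index) {
    return Wave.Samples[Index] * Wave.Volume;
}

// A cue-style wave player adds a level of indirection: it references
// the shared binary asset but carries its own runtime settings.
struct WavePlayerNode {
    std::shared_ptr<const SoundWave> Wave; // shared, never duplicated
    float Volume = 1.0f;                   // per-player setting
    float Render(std::size_t Index) const {
        return Wave->Samples[Index] * Volume;
    }
};
```

Two `WavePlayerNode` instances can point at the same `SoundWave` with different `Volume` values, which is exactly the indirection Sound Cues gave you over the binary asset.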
So, with MetaSounds, just like with Sound Cues, we use the underlying SoundWave asset purely as a delivery mechanism that gives binary data to nodes in a MetaSound (i.e. the Wave Player node). We ignore the runtime properties, analogous to the way they were usually ignored in Sound Cues.
Right now, many of the runtime settings (e.g. volume/pitch, concurrency, attenuation) don't really make sense in the context of MetaSound graph rendering. They apply at a different level than the MetaSound source renderer: the UMetaSoundSource itself has concurrency/attenuation settings, but the internal MetaSound rendering happens at too low a level to meaningfully apply most of them.
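As a rough self-contained sketch of the practical consequence (illustrative names only, not the actual MetaSounds API): the wave player pulls raw samples and never consults the asset's Volume property, so a volume control has to live inside the graph itself, driven by a graph input parameter:

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Illustrative model only -- not the actual MetaSounds API.
struct SoundWave {
    std::vector<float> Samples;
    float Volume = 1.0f; // ignored by the graph's wave player below
};

// Named float inputs on the graph, analogous to MetaSound input
// parameters that gameplay code sets at runtime.
using GraphInputs = std::map<std::string, float>;

// The wave player node delivers raw samples from the binary asset;
// the asset's own Volume property is deliberately not consulted.
float WavePlayerOutput(const SoundWave& Wave, std::size_t Index) {
    return Wave.Samples[Index];
}

// Volume is applied explicitly inside the graph, driven by an input
// parameter (here a hypothetical "Gain" input, defaulting to unity).
float GraphRender(const SoundWave& Wave, const GraphInputs& Inputs,
                  std::size_t Index) {
    const float Gain = Inputs.count("Gain") ? Inputs.at("Gain") : 1.0f;
    return WavePlayerOutput(Wave, Index) * Gain;
}
```

The analogous pattern in the editor is to add a float input to the MetaSound, multiply the Wave Player output by it before the graph output, and set that input from gameplay code (e.g. via the audio component's float-parameter setter), rather than editing the Volume on the USoundWave asset.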
However, this is confusing, and everyone agrees it's bad UX. The plan/hope is to introduce a new binary asset type that makes it very clear it only deals with aspects of the underlying sound asset: metadata about the asset, compression and cook settings, etc. Basically, bring it in line with how Textures are handled in UE (and how SoundWaves should have been handled from day one). However, this change would be significantly invasive and tricky, especially since we have a mountain of features built on USoundWave.
That said, we do have a plan to do it.
Apologies for the bad UX in the meantime.
Hey Aaron! Thanks for the detailed answer, I totally get it now. Looking forward to seeing the new things you guys are planning for sound in UE!