New Audio Features and Game Jam Kickoff - Live from Epic HQ

Ah, right. Yes, UE4 audio needs some serious love with debugging tools. The audio team here and I agree! I’m thinking it’d be part of the developer tools: a separate window that displays information about the playing audio (specific sounds, like the current “stat soundwaves” or “stat sounds” overlays, but with more options to control what you see), plus some analysis: FFTs showing the frequency distribution, envelope followers, level meters, etc. And a real-time debug mixer so you can mute/solo sound classes, and so on.

That would all be really really cool.

UE4 doesn’t really support spatializing VOIP out of the box. I worked with a licensee this year who wanted that for their game and made some suggestions on how they might approach it. It’s essentially a problem of getting a procedural voice to spatialize like other sounds. VOIP spatialization wasn’t a priority for us to support and I probably won’t be able to get to dealing with VOIP-related features until after we get our multi-platform mixer (and other related things) off the ground.

Will we finally be able to pause/continue SoundCues?

Yeah, I don’t think we have much documentation yet specifically dealing with audio implementation for VR. I’ll see if I can ping our documentation team and see if we can get something put out there (maybe some basic tutorials).

So, real briefly:

First, “regular” 3D audio spatialization will work without any extra configuration. HMD devices for VR automatically set the listener position to match what you see through the headset. We have some code which tunes the listener position precisely, but that’s probably not something you need to worry about.

If you want to use the Oculus Audio Plugin I wrote for UE4, you just need to enable the plugin (in the plugins dropdown). Your project will then restart and you’ll see a new option in the sound attenuation settings class that lets you toggle HRTF spatialization for sounds playing with those attenuation settings. It doesn’t automatically turn HRTF spatialization on for everything, since HRTF filters can have a performance impact and we’ve found that mixing the two spatialization techniques (i.e. old-fashioned panning vs. HRTF) is actually pretty effective. Certain sound-design tasks in VR are fine with a less-localizable (and cheaper) spatialization algorithm. The stereo spatialization feature I mentioned in particular plays nicely with the HRTF processing.
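If you ever need to flip that option from code instead of the editor UI, here’s a minimal sketch. I’m assuming the member and enum names (Attenuation, SpatializationAlgorithm, SPATIALIZATION_HRTF) as they appear in the engine headers, and those can shift a bit between versions, so double-check against whatever you’re on:

```cpp
// Rough sketch: switching an attenuation asset over to HRTF spatialization from C++.
// Member/enum names here can differ slightly between engine versions.
#include "Sound/SoundAttenuation.h"

void EnableHRTFSpatialization(USoundAttenuation* AttenuationAsset)
{
	if (AttenuationAsset)
	{
		// Use the binaural (HRTF) path from the spatialization plugin instead of
		// the default panning spatialization for sounds that use this asset.
		AttenuationAsset->Attenuation.SpatializationAlgorithm = ESoundSpatializationAlgorithm::SPATIALIZATION_HRTF;
	}
}
```

The per-attenuation-asset toggle is also what lets you mix HRTF and regular panning in the same project, as described above.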

As for setting things up with Wwise: Wwise handles their own audio integration with UE4, so I’m not sure if there’s anything special you might need to do with Wwise and UE4.

Awesome stuff, I’m loving it :) Definitely a great implementation - super useful, especially for the project I’m working on! Great to hear about the other LPF improvements, thanks for all the hard work!

This is the first I’ve heard of such a feature request, and I think it’s a pretty good one. The good news is that it’s probably easy to support, since we already support pausing/unpausing of audio in general (when you pause the game, non-UI sounds pause).

If you are a programmer, or have access to a programmer, and you don’t want to wait for us to add the feature, I’d go about it this way (there’s a rough sketch after the list):

  1. Add a new Blueprint-callable function to the audio component class (“Pause”).
    Note: this would really only be supportable using audio components, since you’ll need a handle to the sound in order to pause it while it’s playing.
  2. Create a new bool variable on the audio component (bPaused) so that child sounds can query their parent audio component’s paused state.
  3. In the active sound (FActiveSound) update loop, query its parent audio component (if it exists; not all playing sounds have audio components) about whether it is paused.
  4. If it is paused, call “Pause” on the active sound’s FSoundSource object(s).
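To make the shape of that concrete, here’s a very rough sketch. The accessor and lookup names I’m using (GetAudioComponent, FindSourceForWaveInstance, WaveInstances) are placeholders for whatever the plumbing looks like in your engine version:

```cpp
// 1) + 2) AudioComponent.h: a Blueprint-callable pause toggle plus a flag
// that the playing sound can query.
UFUNCTION(BlueprintCallable, Category = "Audio")
void SetPaused(bool bInPaused);

UPROPERTY()
uint32 bPaused : 1;

// 3) + 4) ActiveSound.cpp: during the active sound's update, ask the owning
// audio component (if there is one) whether it is paused, and if so pause
// the underlying FSoundSource objects.
void FActiveSound::UpdatePauseState(FAudioDevice* AudioDevice)
{
	UAudioComponent* OwningComponent = GetAudioComponent(); // placeholder accessor
	if (OwningComponent == nullptr || !OwningComponent->bPaused)
	{
		return; // not all active sounds are driven by an audio component
	}

	for (auto& WaveInstancePair : WaveInstances) // placeholder container of this sound's wave instances
	{
		// placeholder lookup from a wave instance to the source that is playing it
		if (FSoundSource* Source = AudioDevice->FindSourceForWaveInstance(WaveInstancePair.Value))
		{
			Source->Pause();
		}
	}
}
```

Unpausing, and how this interacts with the game-wide pause, would hook into the same spot, which leads into the edge cases below.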

There are a couple of edge cases or questions you might have to work through:

How does pausing an individual sound interact with the general game-pausing feature?
Do you support pausing any type of sound (including UI-sounds)?
What happens if the sound doesn’t succeed in playing in the first place (due to voice-prioritization/concurrency resolution)?

I’ll make a Jira task for this so that we don’t forget about it.

Got any tips for implementing it myself?

I find that most of the time spent working on stuff like that goes into figuring out where it lives in the engine, rather than the actual implementation itself. A good example of this for me was my research into getting access to the raw sound samples from a sound file loaded from disk at runtime: the implementation (which is in ’s Sound Vis plugin) took a day, but finding the stuff I needed for it took a couple of months of on-and-off research.

I’ve got a question about the occlusion feature. I’m developing something for the marketplace that would really benefit from adjusting the occlusion volume in real-time. I am actually able to decrease the occlusion volume just fine, but increasing it doesn’t seem to work. Any idea?

So this is one of the threads that were started a while ago:

Hopefully this is enough to get you started.

Not exactly sure what you’re talking about: Do you want to boost audio volumes if they’re occluded? That should theoretically work since there isn’t a clamp on the volume scale for occluded sounds in the sound attenuation settings object. However, I’d caution against setting any volume value greater than 1.0 if you can avoid it. The issue is that it gets very difficult to reason about gain stages when some are above 1.0. There is a final clamp on volume values as well to avoid clipping, which may be what you’re running into.

As for real-time adjustments to the value, you can “make” and “break” an attenuation settings struct in BP so you can use that to dynamically adjust occlusion gain values “on-the-fly”.
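If you’d rather do it from C++, here’s a minimal sketch of the same idea, assuming the AdjustAttenuation entry point and the OcclusionVolumeAttenuation field (the struct and field names shift a bit between engine versions):

```cpp
// Sketch: copy the component's current attenuation settings, tweak the
// occlusion gain, and push the modified struct back onto that component only.
// Struct/field names (FSoundAttenuationSettings, OcclusionVolumeAttenuation)
// may differ slightly depending on your engine version.
#include "Components/AudioComponent.h"

void SetOcclusionGain(UAudioComponent* AudioComp, float NewOcclusionVolume)
{
	if (AudioComp && AudioComp->AttenuationSettings)
	{
		// Start from the asset's current settings so nothing else changes.
		FSoundAttenuationSettings Settings = AudioComp->AttenuationSettings->Attenuation;
		Settings.OcclusionVolumeAttenuation = NewOcclusionVolume;

		// Apply the modified settings to this component.
		AudioComp->AdjustAttenuation(Settings);
	}
}
```

If you’re pushing the value above 1.0, keep the final clamp I mentioned above in mind.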

Looks like it should do the trick, thanks!

Looking at that AnswerHub post and the current code, it almost seems as if the implementation has changed since that answer was posted, because internally the voice system now seems to be using an AudioComponent with an internal SoundWaveProcedural.

It appears that the thing that needs to be done is to set AllowSpatialization = true (instead of false) in CreateVoiceAudioComponent() in OnlineSubsystemUtils.cpp, as well as making sure that the transform is correctly updating.

I mention the above for others who are investigating this and might be a bit confused by the AnswerHub link, because SoundWaveStreaming is now SoundWaveProcedural, and the internal VOIP implementation already seems to be pretty close to what they were talking about in that AnswerHub thread.
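In case it helps, here is roughly where that change lands. I’m paraphrasing from reading the code rather than from a tested patch, so treat everything except the bAllowSpatialization line as abbreviated context, and TalkerLocation is just a stand-in for wherever you track the speaking player’s position:

```cpp
// Inside CreateVoiceAudioComponent() in OnlineSubsystemUtils.cpp, once the
// component has been created from the USoundWaveProcedural:
if (AudioComponent)
{
	// was false by default; enables 3D positioning for the VOIP source
	AudioComponent->bAllowSpatialization = true;
}

// The component's transform then has to follow the talker each frame so the
// spatialization has a meaningful position (TalkerLocation is hypothetical):
AudioComponent->SetWorldLocation(TalkerLocation);
```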

Thanks for the reply. This is what I’m attempting to do. I can adjust the occlusion gain “on-the-fly,” but only decreasing it seems to work. It’s odd.

Yeah, I was quite surprised that such a basic task wasn’t available in BP. Good to hear it’s easy to implement myself, thanks <3

Thanks for the reply, Aaron :)

Did you manage to get this working? I have been looking into the same thing and noticed that the answer that was given is outdated and the class names have since changed.

Didn’t really get a chance to look into it other than just searching through the GitHub source.

It should be possible to find what you need via the CreateVoiceAudioComponent() function, though.