Enhanced Audio Occlusion for VR

Hey, an audio forum! Awesome!

I wanted to share what I’m working on: Enhanced Audio Occlusion.

The audio occlusion in UE4 is great, and mixed with Steam Audio it’s even better. Unfortunately, there are still limitations for now. The biggest two I see are:

  1. Audio occlusion is limited to one trace channel. This means occlusion is sound-dependent rather than surface-dependent. Each sound has one trace and occludes to one volume and one LPF, no matter the type of object occluding it.

  2. Steam Audio adds partial occlusion, but you can’t visualize it and you can’t tweak it yourself.

So I created a blueprint that lets you occlude sounds based on physical materials assigned to materials in the level. A wall can have a “Wall” physical material applied to it and affect all occluding sounds in its specific way. Then you could assign a “Metal” physical material to other objects and have those objects affect occluding sounds in a different way. I also added functionality to simulate partial occlusion with customizable trace points that surround the sound source. Each point stores a share of the audio’s volume and Low Pass Filter frequency; for every point blocked, the blueprint reduces the volume and applies some LPF depending on the center audio’s weight and the materials between you and the trace location.
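To make the weighting idea concrete, here’s a minimal self-contained C++ sketch of how per-point contributions might combine. All names, and the exact weighting scheme, are my illustrative assumptions here, not the blueprint’s actual logic:

```cpp
#include <vector>

// Hypothetical sketch: a center trace carries a configurable share of the
// sound's volume, and N surrounding trace points split the remainder evenly.
struct TracePoint {
    bool  Blocked;           // did the line trace hit an occluder?
    float MaterialOcclusion; // 0..1 from the hit physical material (1 = fully occluding)
};

// Returns a volume multiplier in 0..1. CenterWeight is the fraction (0..1)
// of the volume carried by the center trace.
float ComputeOcclusionVolume(const TracePoint& Center,
                             const std::vector<TracePoint>& Ring,
                             float CenterWeight)
{
    float Volume = 1.0f;
    if (Center.Blocked)
        Volume -= CenterWeight * Center.MaterialOcclusion;

    if (!Ring.empty()) {
        // each ring point owns an equal slice of the remaining weight
        const float PerPoint = (1.0f - CenterWeight) / Ring.size();
        for (const TracePoint& P : Ring)
            if (P.Blocked)
                Volume -= PerPoint * P.MaterialOcclusion;
    }
    return Volume < 0.0f ? 0.0f : Volume;
}
```

With a center weight of 1.0 this degenerates to all-or-nothing occlusion on the center trace, which matches the “bypass extra traces” option mentioned later in the thread.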

This blueprint takes every occluding material between you and the sound and combines all of them to offer even more realism.

Here’s a video demo. Would love to hear what you guys think!

Hello again. I’ve been working on this some more and managed to greatly reduce the complexity of the blueprint. Some other cool things:

• The blueprint looks for a custom physical material that includes an “occlusion percentage” and applies that to the sound. Basically regular physical materials with a new float that you can apply to any materials you want to occlude sounds with.
• The blueprint combines occluding surfaces and applies all of the values to the sound. Volume and LPF values are taken care of using custom curve assets.
• Designed to work alongside middleware like Wwise and FMOD. I can actually have this blueprint working through Wwise by feeding an occlusion value as an RTPC to drive LPF and Volume.
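One plausible way to combine per-surface “occlusion percentage” values is multiplicatively on the transmitted energy, so two half-occluding walls occlude more than one but never exceed 100%. This is an illustrative sketch, not necessarily the blueprint’s actual formula:

```cpp
#include <vector>

// Each hit surface contributes an "occlusion percentage" (0..100) read from
// its custom physical material. Combining transmissions multiplicatively
// means stacked surfaces compound without ever exceeding full occlusion.
float CombineOcclusion(const std::vector<float>& OcclusionPercents) {
    float Transmission = 1.0f; // fraction of sound that gets through
    for (float Pct : OcclusionPercents)
        Transmission *= (1.0f - Pct / 100.0f);
    return (1.0f - Transmission) * 100.0f; // combined occlusion, 0..100
}
```

For example, two surfaces at 50% combine to 75%, and any surface at 100% fully occludes regardless of the rest.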

Here’s a simpler demo video of the blueprint combining surfaces to compound occlusion:

Cool man! Occlusion is an interesting challenge!

Are you assigning occlusion properties to the Meshes as Actors?

Thanks! It really is interesting. I’ve been messing with occlusion using blueprints for a couple years now. Having the LPF option in blueprints saved me a bunch of headache.

I use a couple of custom curve assets, one for volume reduction and one for LPF. The blueprint traces for a specific channel, then looks for a physical material on each overlap. On the mesh’s material I apply a custom physical material that has an Occlusion Amount float. That value is used with the curves and LPF/Volume is applied accordingly. So I have a global curve for volume and LPF based on a value between 0 - 100, and the blueprint is looking for custom physical materials that have that occlusion float value.

Back with another change. This one was out of necessity, but I prefer it now.

Unfortunately, adjusting a sound’s volume and LPF will affect spatial audio features used by Steam Audio. I needed a way to adjust a sound’s volume after it was processed for attenuation and spatialization. Luckily, the new audio engine has submixes! I was able to cobble something together and use a 3-band EQ to adjust physical materials’ frequency absorption, much like using a Phonon material in Steam Audio, but less fancy.
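The per-band math is simple in principle: each occluding material absorbs some fraction of each band, and stacked surfaces attenuate what’s left. A rough C++ sketch (band split, struct names, and values are all my assumptions, not the actual submix effect):

```cpp
#include <vector>

// Per-material absorption fractions (0..1) for three frequency bands,
// loosely analogous to a Phonon material's low/mid/high absorption.
struct BandAbsorption { float Low, Mid, High; };
struct BandGains      { float Low, Mid, High; };

// Each occluding surface attenuates whatever energy the previous
// surfaces let through, per band.
BandGains AbsorptionToGains(const std::vector<BandAbsorption>& Hits) {
    BandGains G{1.f, 1.f, 1.f};
    for (const BandAbsorption& H : Hits) {
        G.Low  *= (1.f - H.Low);
        G.Mid  *= (1.f - H.Mid);
        G.High *= (1.f - H.High);
    }
    return G;
}
```

The resulting three gains would then drive the 3-band EQ on the submix, so a “metal” material can eat highs harder than lows, for instance.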

Here’s a video of it in action:

There are going to be pros and cons to using this blueprint in place of UE4 and Steam Audio’s sound occlusion. So far (unless things change before the official release of the new audio engine) my blueprint offers a more customizable partial occlusion solution (more or less points, weighted center), and accumulating occlusion properties as shown in the video. Steam Audio, however, has more options for occlusion material settings that I just can’t replicate.

Cool! I love the approach.

With some tuning, you could definitely come up with some very convincing occlusion filtering values!

It might be easier as a Source Effect instead of a Submix Effect. I don’t think you’ll want a Submix for every source you wish to occlude.

Thanks Dan. This is part of the issue I’m having with tweaking sends in real time in the other thread. Do source effects affect a sound before it’s calculated for Steam Audio’s propagation? I might be missing something, but I’m not able to even create a source effect. Here’s a screenshot of where it stops me:

sourceeffectsmissing.PNG

Here was my plan with submixes for occlusion: I’d have a submix on the soundcue with three EQ bands at around -90 dB— effectively muted. Then in the occlusion bp I’d send to three other submixes that raised each individual band to get the sound at a normal volume (low, mid, high gain submixes). Then the occlusion bp would lower the sends to each EQ band submix accordingly. This way I’d only need to assign each sound to that base submix and the occlusion bp would take care of the sound’s sends to each band-pass filter.
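As a rough sketch of the send math in that plan (assuming linear send levels over the -90 dB base, purely illustrative):

```cpp
#include <cmath>

// The base submix pins each EQ band at roughly -90 dB (effectively muted);
// a per-band send raises that band back toward 0 dB. Under occlusion, the
// send level for a band is reduced by that band's absorption (0..1).
float SendLevelForBand(float Absorption) { // 0 = open, 1 = fully absorbed
    float L = 1.0f - Absorption;
    return L < 0.f ? 0.f : L;
}

// Report the resulting band gain in dB, floored at the muted base level.
float LinearToDb(float Linear) {
    return Linear <= 0.f ? -90.0f : 20.0f * std::log10(Linear);
}
```

So an unoccluded band sends at full level (0 dB), a half-absorbed band lands around -6 dB, and a fully absorbed band falls back to the muted base submix.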

If I can do this with a source effect and still have it do real-time reflections in steam audio as if the sound wasn’t occluded, I’d be very happy. I just don’t know how to get a source effect to try.

You need to activate the Synthesis plugin, that’s where all of our DSP effects are.

Then for Steam Audio, don’t do Occlusion since you’re basically creating your own.

Ohhh okay. Thanks Dan.

Also, source effects are computed before the occlusion/reverb/HRTF plugins (and spatialization speaker mapping).

Pre-4.18, source effects are done post-distance-attenuation, but in 4.18 I separated volume attenuation due to everything else from volume attenuation due to distance, and now run the audio through its source effects pre-distance-attenuation. This is way more intuitive and should help you with your occlusion work.

These are all great videos and features man and I’m super impressed with the work. Making me proud. :')

As for the practical aspect of this, in my experience, in real games, ray traces can end up being very expensive CPU-wise. If you want to ship this in a game, you’ll want to think of ways of pruning out traces and doing simple versions for some audio, more-complex versions for others, etc. Like a priority/LOD system based on distance or sound priority. I kept my UE4 native occlusion solution to be as bare-bones as possible on purpose to keep the expense low. I also did it on an “Epic Friday” off-schedule and didn’t have a lot of time to put in a ton of work on features and bells n whistles.

Thanks for the reply, Aaron!

Great news to hear about 4.18. I can’t wait to get this blueprint working with source effects.

As for performance, I’ve tried to approach that problem as well. These blueprints are scalable, with as few as 4 points being traced (top, bottom, left, and right) along with the center. You could also set the center’s weight to 100 and bypass the extra traces altogether, making it act more like the current occlusion.

I’ve done a bunch of profiling to try to get it as performance-friendly as I can. The problem I had for a while was that each node reading every hit and calculating the occlusion cost a significant amount of resources. Each node is its own blueprint, which gets all overlapping traces with a certain trace response and physical material and then sends its contribution to the main blueprint. Along those lines I’ve done things like:

  1. if the array of hit objects is the same length as last check (every .05 seconds), don’t bother calculating a new occlusion amount
  2. if the node cycles through enough materials to reach full occlusion, stop the loop
  3. once out of range of the sound’s attenuation, go dormant

I may do something else like an option to limit the amount of objects you want to trace through.
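The three early-outs above might look something like this in code (a hypothetical sketch; all state and names are illustrative, and the real blueprint differs in detail):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical per-node state for the early-out checks.
struct NodeState {
    size_t LastHitCount = SIZE_MAX;
    bool   Dormant      = false;
};

// Returns true only when a full recalculation is needed this tick
// (ticks run every 0.05 s in the description above).
bool NeedsRecalc(NodeState& S, size_t HitCount,
                 float DistToListener, float AttenuationRange)
{
    if (DistToListener > AttenuationRange) { // (3) out of range: go dormant
        S.Dormant = true;
        return false;
    }
    S.Dormant = false;
    if (HitCount == S.LastHitCount)          // (1) same hit count: reuse last result
        return false;
    S.LastHitCount = HitCount;
    return true;
}

// (2) while accumulating material contributions, stop once fully occluded.
float AccumulateOcclusion(const std::vector<float>& MaterialAmounts) {
    float Total = 0.f;
    for (float A : MaterialAmounts) {
        Total += A;
        if (Total >= 100.f) return 100.f;    // early exit at full occlusion
    }
    return Total;
}
```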

This now only really has the potential to impact performance if you are getting a lot of blocking hits changing all around you and the nodes have to work overtime. It’s a fun project for sure, and I really think I’m close to something that can be implemented in projects in a smart way with limited impact on performance.

When can I download this from the marketplace?

Well, I approached this again. I was never happy with the performance and implementation with blueprints, so I figured out how to do it in C++!

It’s just one class, but in two versions: an actor class and an actor component class. The actor version is meant to be used with methods like adding an audio component, attaching an ambient sound from the level, or accessing a third-party audio source actor to control parameters. The actor component version can be used in blueprints that have audio components. The Enhanced Audio Occlusion Actor traces for multiple occluding surfaces using a custom physical material, just like the previous version. You can have anywhere from 1 - 12 traces going per source to help tune performance. That said, I’ve had 16 of these around me while waving a couple of blocking cubes all over the place and didn’t notice any change in performance.

Here’s a video:

The EAO Actor exposes three variables to the blueprint based on the physical materials it traces against. After calculating all contributions for all hit materials, it returns high and low frequency absorption floats, as well as an overall occlusion volume float. All three are from 0 - 1 and you can use those values to drive parameters from blueprint in any way you want. I’m really happy with this now. Other than using it for the ol’ portfolio, I’m not sure what I might do with it. The store is such a pain that I may just give it out.
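To illustrate one way those three 0 - 1 outputs could drive parameters from blueprint (the mappings below are my own guesses, not the plugin’s actual formulas):

```cpp
// The three floats the EAO Actor exposes after accumulating all hit materials.
struct OcclusionResult {
    float HighFreqAbsorption; // 0..1
    float LowFreqAbsorption;  // 0..1
    float VolumeMul;          // 0..1 overall occlusion volume
};

// Example mapping: more high-frequency absorption -> lower LPF cutoff.
// The 600 Hz / 20 kHz endpoints are arbitrary illustrative choices.
float CutoffFromAbsorption(const OcclusionResult& R,
                           float MinHz = 600.f, float MaxHz = 20000.f)
{
    return MaxHz - R.HighFreqAbsorption * (MaxHz - MinHz);
}

// Example mapping: scale the source's gain by the occlusion volume.
float FinalGain(const OcclusionResult& R, float SourceGain) {
    return SourceGain * R.VolumeMul;
}
```

The same three floats could just as easily be forwarded as RTPCs or parameters to middleware instead of driving UE4’s own filter and volume.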

NOTE: This project is simply meant to replace and expand upon UE4’s occlusion options. I am not interested in (nor do I have the competency for) solving propagation/diffraction/reflection behavior; there are far better options for that out there. I see the biggest current weakness as the lack of multi-trace occlusion.

Thanks for checking this out!

Latest version information.

It’s now just a custom audio component. If you assign a sound to the audio component, it will automatically fade the audio per its settings. You can also leave the sound variable empty and instead use three exposed events (each providing a float from 0.0 - 1.0): one each for High Frequency Absorption amount, Low Frequency Absorption amount, and Volume Multiplier. You can use those events to attenuate third-party solutions like Wwise or FMOD.

Speaking of that, through C++ I also found a way to hijack Wwise’s built-in occlusion/obstruction. Using my Enhanced Audio Occlusion, I perform the tracing and calculate the occlusion values, then feed those into a (slightly) modified AkComponent and use Wwise’s occlusion on a sliding scale, as opposed to the all-or-nothing occlusion it currently has.

Okay, I think I’m finally done with this thing. Here is a features video. Would love to hear any feedback! First half of the video is the features, and the second half shows the features that can help with performance.

Thanks!

Great video and narration - also great job. I’d def use this, if I wasn’t hellbent on cooking as much sound stuff as possible myself. I still might use this when I’m ready to do a real high-end sound-experience - it seems nice to use. Really cool.
What are you doing next?

Thanks for the kind words, ArthurBarthur.

I’m just continuing on, most likely not REALLY finished with this plugin. Mostly fun to just learn techniques and new skills while I make stuff to help me with sound design.

Hey, I’d be very interested to know how this compares to Nvidia VRWorks from your point of view. I can’t say I’m experienced enough to form a proper comparison myself.

Hey MoreMusic,

To be honest, I can’t find a lot of information on VRWorks as it pertains to occlusion. My plugin is centered on occlusion of a sound. I’m not interested in 3d audio, reverberation, spatialization, or modelling environments for this project.

I’m only focused on “how big is the sound source, how many things are blocking the sound, and what are all of those things?”

I’m sure there will be commercial solutions to these questions soon, if there isn’t one out there right now. This next phase in immersive audio technology is still very new. Overall I personally am seeing less of a concentration on direct sound occlusion simulation and more of a focus on natural audio physical behavior given a room shape and the objects contained within.

On another note, I have a question for anyone that might know better than me: @dan.reynolds @Minus_Kelvin

I’ve recently utilized Async traces for my plugin, using Async Line Trace by Channel:

https://docs.unrealengine.com/latest…l/2/index.html

I’ve seen a huge improvement in profiling, since none of the traces now happen in the game thread. The delegate function called only when the traces have information is invaluable for staying thread-safe and not having to waste resources checking validity. The traces may be a little less predictable, but throwing 30 to 40 sounds in the level with a total of around 500 async traces has worked beautifully so far. Is this a valid approach? What are the drawbacks for something like a lot of async traces running in other threads?

This is cool but it’d be interesting to see the CPU impact of doing so many traces per sound. Are these traces being done on the game thread or are they using async traces? Is this a BP system? If it’s BP, then these traces are probably on the game thread which is going to likely have an issue with a shipping game.

Just some food for thought :)