Real-time Audio Occlusion Simulation Using Blueprints

UPDATE: 5/18/2015

Hey Everyone,

I’m close to finished with this blueprint, tentatively called Adaptive Audio Occlusion. Here’s what it does:

FUNCTIONALITY

  • Tracks sounds in your level in real time and detects when you, the player, are standing behind an obstacle relative to each sound, changing that sound’s properties to simulate it getting quieter and more muffled, as in the real world.

TRACKING SOUNDS

  • Automatically tracks all ambient sounds in the level, activating traces when the player is in range of each sound actor
  • Tracks any blueprints with audio components in the level that you specify via a dropdown menu in the details panel
  • Handles spawning and destroying ambient sound actors and blueprints with audio components in real time

WORKS ALONGSIDE AUDIO VOLUMES

  • Comes with a Sound Class to assign to sounds you want affected by this blueprint; all other sounds (player sounds, one-shot sounds) can be affected by Audio Volumes normally

READS SURFACE PROPERTIES

  • Add your surface types to arrays in details panel (light, medium, heavy)
  • Depending on the surface type the trace hits, audio dampening and muffling occur at different intensities (sounds can dampen heavily behind a thick steel door and only lightly behind a thin wall)
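As a rough illustration of the surface-driven dampening described above, here’s a minimal C++ sketch. The surface categories mirror the light/medium/heavy arrays, but the specific volume multipliers and filter cutoffs are my own assumptions, not values from the blueprint:

```cpp
// Hypothetical surface categories matching the light/medium/heavy arrays.
enum class SurfaceWeight { None, Light, Medium, Heavy };

struct OcclusionResult {
    float volumeMultiplier;  // scales the sound's original volume
    float lpfCutoffHz;       // low-pass filter cutoff applied to the sound
};

// Map the surface type the trace hit to a dampening intensity.
// The exact numbers are illustrative, not taken from the blueprint.
OcclusionResult occlusionFor(SurfaceWeight surface) {
    switch (surface) {
        case SurfaceWeight::Light:  return {0.7f,  8000.0f};  // thin wall
        case SurfaceWeight::Medium: return {0.45f, 3000.0f};
        case SurfaceWeight::Heavy:  return {0.2f,  1000.0f};  // steel door
        default:                    return {1.0f, 20000.0f};  // no obstruction
    }
}
```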

SAVES SOUNDS’ ATTENUATION PROPERTIES

  • Saves the original sound attenuation properties for each sound source and reverts to those properties as you emerge from cover

COMPLETELY IN REAL TIME! AND COMPLETELY WITH BLUEPRINTS!

Here’s a video showing the basics. The White orb is the sound source.

Nice to see audio get some love. This looks pretty nifty to me but looks like it’s just a raycast determining the volume? Please correct me if I’m wrong.

I would have thought an audio occlusion system would account for the gap between the boxes?

Please keep working on this. I’d definitely consider purchasing it and the wind over ear BP I just read from your signature.

Offtopic:
The wind-over-ear BP sounds like it works better than the spatialized audio I’ve been playing with for my VRJam entry (Telegear | Devpost)

Subscribing to this thread :slight_smile:

Hey Bino, thanks for the kind words!

It is indeed a raycast, but it doesn’t just lower the volume based on the surface type: it also applies a low-pass filter (LPF) whose intensity depends on the surface type as well.

I’m not sure what you mean by the gap between the boxes. The audio returns to normal when the raycast isn’t hitting an object, so it returns to normal volume/LPF between the boxes. (I’m still figuring out a way to save the attenuation variables so I can restore each sound actor to its original values.)

One thing I’m proud of is that the raycasts from the sounds to the player are only fired when the player is within each sound’s attenuation range. That should be easier on performance.
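That range-gating can be sketched as a simple distance check before any trace is fired. A minimal sketch, assuming a bare-bones vector type; the names here are illustrative, not the blueprint’s actual nodes:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

float distance(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Only fire the occlusion trace when the player is inside the sound's
// attenuation radius; sounds too far away to hear skip the trace entirely.
bool shouldTrace(const Vec3& soundPos, const Vec3& playerPos,
                 float attenuationRadius) {
    return distance(soundPos, playerPos) <= attenuationRadius;
}
```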

BTW, your project looks/sounds very interesting!

Hey Obsidiaguy,

I was referring to these gaps circled in red. Probably not worth the effort.

I quite like the raycast method. I always pictured an audio occlusion solution would look at the geometry.

I see what you mean. I see that as an audio propagation characteristic. I’m focusing (at least with this blueprint) on recreating just occlusion from different types of surfaces directly. What you pointed out is definitely something I plan to look into in the future, though.

Nice, hope to see this released soon!

Thanks Bino!

Updated the original post with latest information!

Okay, after receiving a couple questions about the areas around obstacles and the odd way it sounds when a sound in the level is being occluded, I’ve looked into Sound Diffraction, which is what I believe you were asking about earlier, Bino.

I’ve been pounding my head on all kinds of brick walls to figure out how I can recreate that effect to make my blueprint really complete, and I think I’m onto something. Check out this video:

And here’s an image to help explain diffraction around an obstacle:
waves_obstacles_diffraction.gif

The white spheres that appear are where the sound diffraction sounds will originate. So far, it’s a work in progress. I do have the left and right spheres playing the sound, to give an example of what it would sound like. I still need to do some work, but I think this blueprint will definitely have a good amount of customization and a great sound diffraction recreation feature!
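One way to sketch placing those virtual diffraction sources: put them at the obstacle’s edges, here simplified to an axis-aligned box. This is my own minimal approximation of the idea, not the blueprint’s actual logic; a fuller solution would also attenuate each virtual source by its detour length:

```cpp
#include <utility>

struct Vec3 { float x, y, z; };

// Place virtual "diffraction" sources at the left and right edges of an
// axis-aligned obstacle, approximating sound bending around its sides.
std::pair<Vec3, Vec3> diffractionSources(const Vec3& obstacleCenter,
                                         float obstacleHalfWidth) {
    Vec3 left  {obstacleCenter.x - obstacleHalfWidth,
                obstacleCenter.y, obstacleCenter.z};
    Vec3 right {obstacleCenter.x + obstacleHalfWidth,
                obstacleCenter.y, obstacleCenter.z};
    return {left, right};
}
```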

Yes, a thousand times.

My personal opinion, without regard to realism, is that the diffraction should have a smoother curve. I understand this is a WIP - great work regardless, hope to see/hear more soon.

I will take a look in this nice thingy @home :3

Impressive!
Is it almost like binaural audio? Because at one time in the video I could clearly tell the position of the sound “outside” of my head.
Then again, I’m a noob when it comes to sound :slight_smile:

It’s not binaural, but I think binaural (HRTF) options are coming soon from Epic. What you’re hearing is sounds playing a little later than others, which isn’t a good thing. I’m trying to get that fixed as well.

Wow coming along very nicely.

Edit: Good to know what it’s called - Sound Diffraction. :slight_smile: thanks.

Another question :slight_smile:
Could I attach this system to a custom collision?

Say, a wall with a slit cut out in the middle that I could listen through?
I could make the collision very basic but make the slit look splintered and bespoke.

Cheers

Way cool!! Great work.

How hard would it be to adapt your system to FMOD events and instances? I’m working on something similar and am about to start on the actual occlusion part for a project we’re working on.

My “system” so far adjusts several FMOD parameters based on distance from the source, continuously raycasts from the source to the player, and stops the audio when the player leaves the outer zone (i.e. when the sound is completely inaudible). It also does things such as randomizing certain parameters each time the player enters the outer zone. I’ve got a stream that uses several sources, and each stream source has its intensity randomized between a min/max value, for example.
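A minimal sketch of that distance-to-parameter mapping with outer-zone gating. The linear falloff and all names are my assumptions; no actual FMOD API calls are shown:

```cpp
#include <algorithm>

// Map source-to-player distance onto a 0..1 "intensity" parameter, and
// gate the sound off once the player leaves the outer (inaudible) zone.
struct ZoneState {
    bool  playing;
    float intensity;  // 1 at the source, 0 at the outer-zone edge
};

ZoneState updateZone(float distance, float outerRadius) {
    if (distance >= outerRadius)
        return {false, 0.0f};  // completely inaudible: stop the audio
    float t = 1.0f - distance / outerRadius;
    return {true, std::clamp(t, 0.0f, 1.0f)};
}
```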

We figured we should use raycasts for occlusion as well. I’m particularly interested in the diffraction part of your system, though. From what I gather, you shift the location of the source to simulate bending. Is this just a simulation of how it will sound? If so, how will you deal with propagation and things such as reflections?

When will it be on the Marketplace, and how tied is it to UE’s own audio system?
Very interesting indeed. Would love to dig through it for some inspiration and tips :smiley:

I’m curious about the CPU usage, particularly on console systems. Any way to measure that? If not, how does it perform on a PC?

Also curious about that.

OP: Updates?

bump for overhead details!

When is this coming out??