I had a blueprint in the marketplace a while ago called Adaptive Audio Occlusion. It was started before audio occlusion was implemented in the engine, and released just after. It was rough, and I wasn’t happy with its released form. I’ve recently begun working on a newer audio occlusion blueprint; instead of focusing on diffraction and automation, I’m focusing on providing a realistic, practical product for projects. Here are the two things I focused on for this blueprint:
• “expanding” audio sources as they relate to audio occlusion
• occluding through several user-specified trace channels with hierarchy
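To illustrate the second point, here’s a minimal sketch of what a channel hierarchy could look like. The channel names, occlusion strengths, and the first-blocking-hit rule are my own illustrative assumptions, not the blueprint’s actual internals:

```cpp
#include <cassert>
#include <string>
#include <vector>

// A user-specified trace channel and how strongly it occludes sound.
struct OcclusionChannel {
    std::string name;       // e.g. a custom trace channel in project settings
    float occlusionFactor;  // 0 = fully open, 1 = fully occluding
};

// Channels are checked in hierarchy order; the first channel whose trace
// reports a blocking hit decides how much the source is occluded.
float EvaluateOcclusion(const std::vector<OcclusionChannel>& hierarchy,
                        const std::vector<bool>& blockingHits) {
    for (size_t i = 0; i < hierarchy.size(); ++i) {
        if (blockingHits[i]) {
            return hierarchy[i].occlusionFactor;
        }
    }
    return 0.0f;  // no channel blocked the trace: unoccluded
}
```

With a hierarchy like thick walls before glass, a source heard through glass would end up less muffled than one behind a wall.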
What does this mean? Here’s a video (sorry for the aspect ratio):
The trace points themselves can be edited to suit your needs. Here are some images of different scenarios for sound traces:
This is all meant to work with the audio already in your game. The sphere size is computed automatically from the minimum attenuation extent, and the blueprint works alongside attenuation settings such as distance-based “attenuate with LPF.” This blueprint would take the place of in-game ambient sounds: you drag it in and assign it a SoundCue. Pretty painless.
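As a rough sketch of the two ideas in that paragraph, here is how the expansion sphere radius and an occlusion-driven low-pass cutoff could be derived. The struct fields and the linear blend are illustrative assumptions, not the blueprint’s actual formula:

```cpp
#include <algorithm>
#include <cassert>

// Illustrative stand-in for a sound's attenuation settings.
struct AttenuationSettings {
    float extentX, extentY, extentZ;  // attenuation shape extents
    float maxLPFCutoffHz;             // cutoff when fully unoccluded
    float minLPFCutoffHz;             // cutoff when fully occluded
};

// The expansion sphere uses the smallest attenuation extent, so trace
// points stay inside the audible region of the source.
float SphereRadius(const AttenuationSettings& s) {
    return std::min({s.extentX, s.extentY, s.extentZ});
}

// Blend the low-pass cutoff between its open and occluded values.
float OccludedCutoffHz(const AttenuationSettings& s, float occlusion) {
    occlusion = std::clamp(occlusion, 0.0f, 1.0f);
    return s.maxLPFCutoffHz + (s.minLPFCutoffHz - s.maxLPFCutoffHz) * occlusion;
}
```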
I’ve also been tweaking the blueprint to keep its performance cost minimal, and I think it’s going pretty well. Here’s an image of several blueprints running dozens of traces with no noticeable effect on performance:
If anyone has any questions or comments, I’d love to hear them!