Thoughts about having more advanced sound features in Unreal Engine?

It was suggested that I post this in this sub-forum, since it is kind of a suggestion. It was originally posted in the general forum.

Binaural audio - I’m sure everyone has heard the barbershop demo on YouTube where it actually sounds like someone is cutting your hair. Some games have used this tech, but I don’t often see it in new games. One common misconception is that binaural audio only works if the listener is using headphones; if the game knows the player’s audio setup, you should be able to deliver a great binaural experience with as few as two speakers. Binaural is going to become even more important for horror games, or for VR experiences like the Oculus Rift, where more immersion makes for a much better experience.
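
For anyone curious what the core of the effect actually is, here’s a rough sketch of the two dominant binaural cues: interaural time and level differences (ITD/ILD). This is a deliberate simplification, real binaural rendering convolves each ear with a measured HRTF, and none of the names below are actual UE4 API; they’re made up for illustration:

```cpp
// Crude binaural panning via interaural time and level differences.
// A real HRTF implementation convolves each ear with a measured
// head-related impulse response; this only approximates the cues.
#include <cmath>
#include <cstddef>
#include <vector>

struct BinauralParams {
    float leftGain;
    float rightGain;
    std::size_t leftDelaySamples;
    std::size_t rightDelaySamples;
};

// azimuthRad: source angle, 0 = straight ahead, +pi/2 = hard right.
BinauralParams ComputeItdIld(float azimuthRad, float sampleRate) {
    const float headRadiusM  = 0.0875f;  // average human head radius (m)
    const float speedOfSound = 343.0f;   // m/s
    // Woodworth's ITD approximation: t = (r / c) * (theta + sin(theta)).
    const float absAz = std::fabs(azimuthRad);
    const float itd = (headRadiusM / speedOfSound) * (absAz + std::sin(absAz));
    const std::size_t delay = static_cast<std::size_t>(itd * sampleRate + 0.5f);

    // Equal-power level difference: attenuate the far ear as the
    // source moves to the side.
    const float pan = 0.5f * (1.0f + std::sin(azimuthRad)); // 0 = left, 1 = right
    BinauralParams p;
    p.leftGain          = std::sqrt(1.0f - pan);
    p.rightGain         = std::sqrt(pan);
    p.leftDelaySamples  = (azimuthRad > 0.0f) ? delay : 0; // far ear lags
    p.rightDelaySamples = (azimuthRad < 0.0f) ? delay : 0;
    return p;
}

// Apply the cues to a mono buffer, producing interleaved stereo.
std::vector<float> RenderBinaural(const std::vector<float>& mono,
                                  const BinauralParams& p) {
    std::vector<float> stereo(2 * mono.size(), 0.0f);
    for (std::size_t i = 0; i < mono.size(); ++i) {
        if (i >= p.leftDelaySamples)
            stereo[2 * i]     = p.leftGain  * mono[i - p.leftDelaySamples];
        if (i >= p.rightDelaySamples)
            stereo[2 * i + 1] = p.rightGain * mono[i - p.rightDelaySamples];
    }
    return stereo;
}
```

The real magic of the barbershop demo comes from the spectral filtering a full HRTF adds on top of these cues, which is what convolution gives you.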

Precomputed Wave Simulation for Real-time Sound Propagation - in other words, baking a sound simulation for a game. Just watch the Half-Life 2 clips at the end of the video below; the difference is drastic. Basically, this tech simulates how sound bounces and reflects through an environment. I’ve played a few games that change sounds with distance to make them behave a bit more accurately (the sniper in Halo 3 is the only real example I can think of), but the sound doesn’t change drastically when you enter a large room or step outside. Most modern games use music to set these moods, but a more immersive experience should rely more on the actual sounds of the environment, giving players more cues about the area they are in.
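
To make the “baking” idea concrete, here’s a minimal sketch of what the runtime side could look like: the expensive wave simulation runs offline and fills a grid with a few perceptual parameters per listener cell, and the game just samples the grid. This is purely illustrative and far simpler than what the actual research encodes:

```cpp
// Sketch of "bake offline, look up at runtime" sound propagation.
// The offline wave simulation (not shown) fills the grid; at runtime
// the cost of propagation is a lookup, not a simulation.
#include <cstddef>
#include <vector>

struct PropagationParams {
    float directGainDb;  // distance/obstruction loss on the direct path
    float reverbGainDb;  // how much late energy the space contributes
    float decayTimeSec;  // RT60-style decay: long in halls, short outdoors
};

class PropagationGrid {
public:
    PropagationGrid(std::size_t nx, std::size_t ny, std::size_t nz)
        : nx_(nx), ny_(ny), cells_(nx * ny * nz) {}

    // Called by the offline bake for every listener cell.
    void SetCell(std::size_t x, std::size_t y, std::size_t z,
                 const PropagationParams& p) {
        cells_[Index(x, y, z)] = p;
    }

    // Called by the game each frame for the listener's current cell.
    const PropagationParams& Sample(std::size_t x, std::size_t y,
                                    std::size_t z) const {
        return cells_[Index(x, y, z)];
    }

private:
    std::size_t Index(std::size_t x, std::size_t y, std::size_t z) const {
        return (z * ny_ + y) * nx_ + x;
    }
    std::size_t nx_, ny_;
    std::vector<PropagationParams> cells_;
};
```

Stepping from a corridor cell into a large-hall cell would smoothly swap a short decay for a long one, which is exactly the “entering a big room” cue that most games currently fake with music.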

Would you like to see these or similar features in Unreal Engine 4? Why aren’t more games using this type of tech? Do you know any games that do?

I’d like to see audio become more of a first-class feature of UE4 rather than being left to middleware like FMOD and Wwise for anything beyond the basics, but I think the things you’re talking about are luxuries when, from what I can tell browsing the docs, the audio engine lacks more fundamental DSP (e.g. convolution reverb, or a compressor/limiter). The amount of processing power you can budget for DSP is typically extremely limited, so you’ve got to be super smart about where you spend it.
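
To be concrete about the kind of fundamental DSP I mean, here’s a rough sketch of a feed-forward peak limiter (instant attack, exponential release). It costs only a handful of operations per sample; nothing below is any engine’s actual code, just an illustration:

```cpp
// Feed-forward peak limiter: an envelope follower with instant attack
// and exponential release, driving a gain that keeps the output from
// exceeding the threshold.
#include <algorithm>
#include <cmath>
#include <cstddef>

class SimpleLimiter {
public:
    SimpleLimiter(float thresholdLinear, float releaseSec, float sampleRate)
        : threshold_(thresholdLinear),
          releaseCoeff_(std::exp(-1.0f / (releaseSec * sampleRate))),
          envelope_(0.0f) {}

    void Process(float* buffer, std::size_t numSamples) {
        for (std::size_t i = 0; i < numSamples; ++i) {
            const float peak = std::fabs(buffer[i]);
            // Instant attack, smooth release on the envelope follower.
            envelope_ = std::max(peak, envelope_ * releaseCoeff_);
            // Reduce gain only while the envelope exceeds the threshold.
            const float gain =
                (envelope_ > threshold_) ? threshold_ / envelope_ : 1.0f;
            buffer[i] *= gain;
        }
    }

private:
    float threshold_;
    float releaseCoeff_;
    float envelope_;
};
```

Convolution reverb is the expensive one; a limiter like this is nearly free, which is why it feels like a gap.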

Particularly on the binaural front, doing real-time, low-latency HRTF convolution at anything close to the accuracy of your real-world auditory spatial acuity is far from a solved problem. It wouldn’t make much sense to invest a lot of time in it right now when we know Oculus are actively working on moving that forward, particularly because customising the HRIR for each user might well require a hardware solution (the University of Surrey recently had some success using a 3D scanner to create custom HRIRs that were higher fidelity than those from a typical dummy head and torso).
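
To put a rough number on why that’s hard in real time: naive HRTF convolution is just a direct-form FIR filter per ear, sketched below. At 48 kHz with a 512-tap HRIR that’s roughly 25 million multiply-adds per second, per ear, per source, which is exactly the DSP budget problem above and why serious implementations use partitioned FFT convolution instead (the numbers here are illustrative):

```cpp
// Direct-form convolution of a mono signal with one ear's head-related
// impulse response (HRIR). Cost: hrir.size() multiply-adds per sample,
// per ear, per sound source -- too much to scale naively.
#include <cstddef>
#include <vector>

std::vector<float> ConvolveHrir(const std::vector<float>& input,
                                const std::vector<float>& hrir) {
    if (input.empty() || hrir.empty()) return {};
    std::vector<float> out(input.size() + hrir.size() - 1, 0.0f);
    for (std::size_t n = 0; n < input.size(); ++n)
        for (std::size_t k = 0; k < hrir.size(); ++k)
            out[n + k] += input[n] * hrir[k];  // one MAC per tap
    return out;
}
```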

THIS!! We seriously need evolved audio