UE4 audio system rewrite status?

Just wondering if UE 4.14 will be getting sound spatialization through Oculus Audio SDK for Android / Gear VR.

Thanks

Nope, not yet. That work depends on the new audio engine coming online for Android, and that hasn’t happened yet.

What exactly does this audio system give us? Support for Opus files and 24-bit WAV files? Or something else?

The Oculus Audio SDK will work with the new audio mixer (it’s already implemented for PC). We need to get the Android backend implemented with it and link to an Oculus Audio SDK DLL for Android.

The multi-platform audio mixer is being actively worked on and will hopefully be in a state for preview testing by 4.15.

There are a number of threads on this topic in the forums.

The TL;DR is that it’s a multi-platform mixing layer that gives us better platform parity for our features (e.g. 3D audio is currently not supported on Android). It’ll also enable us to do more exciting things like proper DSP-graph support, source effects, etc.

Issues like file formats aren’t being addressed yet, but the new architecture will put us in a better position to support more file formats with greater flexibility.

Is it up to Google?

Shouldn’t Android 7 have that since they need it for Daydream?

Thanks

P.S. Never mind, somehow the 2 replies above mine didn’t show when I was typing in my questions :o

Hi Minus_Kelvin,

I noticed this comment was made a little while ago and am hoping it is still valid :slight_smile: Do you think spatialized audio for mobile VR will be available in 4.15 still?

I keep my fingers crossed too as it’s just unbelievable :frowning:

Apologies for not being clear. The multi-platform audio mixer will be ready for preview on 4.15! HOWEVER, that doesn’t mean it will have all the platforms implemented.

To be ultra-clear and possibly over-didactic…

UE4 audio engine is hierarchically structured as follows:

[UE4 Game Code Layer]: BP, Animation, gameplay, etc.
[UE4 Audio Objects]: Audio Components, USoundBase, USoundConcurrency, etc.
<<<audio thread/game thread boundary>>> UE4 data is copied to audio thread data, events are messaged, etc.
[Multi-platform layer] Active sounds, SoundCue evaluation, Wave instances
[Platform Layer] Sound Sources API/Mixing, Sample Rate Conversion, Device Interaction, Buffer Decoding
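To make the layering concrete, here’s a minimal C++ sketch of the idea: a shared multi-platform layer driving a small per-platform backend interface. All the class names here (`IPlatformAudioBackend`, `FAudioDevice`, etc.) are hypothetical and chosen for illustration; they are not the actual UE4 classes.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Per-platform backend interface: one concrete implementation per
// platform API (XAudio2, OpenSL ES, NGS2, CoreAudio, ...).
struct IPlatformAudioBackend {
    virtual ~IPlatformAudioBackend() = default;
    virtual std::string Name() const = 0;
    // Hand a block of mixed, interleaved samples to the OS audio API.
    virtual void SubmitBuffer(const std::vector<float>& Interleaved) = 0;
};

// Example concrete backend (stubbed out).
struct FXAudio2Backend final : IPlatformAudioBackend {
    std::string Name() const override { return "XAudio2"; }
    void SubmitBuffer(const std::vector<float>&) override { /* OS hand-off */ }
};

// Multi-platform layer: evaluates active sounds / wave instances, then
// pushes the result down through whichever backend it was given.
class FAudioDevice {
public:
    explicit FAudioDevice(std::unique_ptr<IPlatformAudioBackend> InBackend)
        : Backend(std::move(InBackend)) {}

    void RenderBlock() {
        std::vector<float> Mixed(256 * 2, 0.0f);  // one stereo block (silence here)
        Backend->SubmitBuffer(Mixed);             // platform-specific delivery
    }

    std::string BackendName() const { return Backend->Name(); }

private:
    std::unique_ptr<IPlatformAudioBackend> Backend;
};
```

The point of the sketch is just the shape: everything above `SubmitBuffer` can be shared code, and only the bottom edge touches a platform API.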

Below this point, the engine splits into a deep implementation for each platform, using the following APIs:

[XAudio2 - PC/XboxOne]
[Ngs2 - PS4]
[CoreAudio1 - Mac]
[CoreAudio2 - iOS]
[OpenSLES - Android]
[OpenAL - Linux/HTML5]

CoreAudio is listed twice because our implementations for iOS and Mac are actually different!

Also note that the “audio thread” exists to decouple somewhat heavy operations (sound cue evaluation, wave instance priority sorting, etc.) from the game thread. There’s also another audio thread inside the actual platform audio device that does the DSP mixing, which I call the “audio render thread”. This dual-thread design is analogous to the RHI (rendering hardware interface) thread and the actual rendering thread. The audio render thread is hidden inside the platform’s audio API on most platforms, but it will be explicit and handled by us in the audio mixer.

For total clarity on the threads used in audio: we also use the UE4 task manager for real-time decoding (which can scale its thread usage dynamically).
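The game-thread/audio-thread decoupling can be sketched as a simple thread-safe command queue: the game thread enqueues commands, and the audio thread drains them at the start of each update. This is a generic illustration of the pattern, not the engine’s actual implementation.

```cpp
#include <cassert>
#include <functional>
#include <mutex>
#include <queue>

// Minimal sketch of a command queue between the game thread and the
// audio thread. Names are illustrative.
class FAudioCommandQueue {
public:
    // Called from the game thread.
    void Enqueue(std::function<void()> Command) {
        std::lock_guard<std::mutex> Lock(Mutex);
        Commands.push(std::move(Command));
    }

    // Called from the audio thread once per update; returns the number
    // of commands executed.
    int Pump() {
        std::queue<std::function<void()>> Local;
        {
            std::lock_guard<std::mutex> Lock(Mutex);
            std::swap(Local, Commands);  // minimize time spent under the lock
        }
        int Count = 0;
        while (!Local.empty()) {
            Local.front()();  // e.g. "start sound", "set volume", ...
            Local.pop();
            ++Count;
        }
        return Count;
    }

private:
    std::mutex Mutex;
    std::queue<std::function<void()>> Commands;
};
```

In a real engine the payloads would be typed messages rather than `std::function`, but the principle is the same: the game thread never blocks on audio work.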

So, the problem with the above architecture is that many higher-level features heavily depend on the platform APIs supporting them. This includes anything involving actual DSP operations (mixing, DSP effects like reverb/EQ, pitch shifting, spatialization, etc.). In some cases, a platform simply doesn’t support features that other platforms do. Furthermore, any new platform that comes online requires a significant amount of audio work to reach feature parity, and each new platform makes the tech-debt problem worse. Writing new and exciting features requires re-implementation on every separate platform. This can be a non-trivial amount of work: a 3-day feature on PC might take 2 weeks or longer to re-implement on all the other platforms (if it’s even possible).

So the solution to this tech debt is that we need to remove as much dependency on platforms as possible. The “audio mixer” replaces 95% of the platform code by performing the majority of the functions that these APIs are dealing with. The only thing that the platform APIs have left to do in the audio mixer is query audio device capabilities, deal with audio device change notifications (for device hot swapping), and deal with submitting mixed audio buffers to the audio device. This is a significantly reduced amount of platform dependent work. Obviously new platforms will be significantly easier to implement. We’ll also have aesthetic parity.
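The residual per-platform surface described above is small enough to sketch as a single interface with three responsibilities. Again, these names (`IAudioMixerPlatform`, `FAudioDeviceCaps`, etc.) are hypothetical, for illustration only.

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Device capabilities reported by the platform backend.
struct FAudioDeviceCaps {
    int32_t SampleRate = 0;
    int32_t NumChannels = 0;
    std::string DeviceName;
};

// All that's left for a platform to implement once the mixer does the
// DSP in shared code.
struct IAudioMixerPlatform {
    virtual ~IAudioMixerPlatform() = default;
    // 1) Query audio device capabilities.
    virtual FAudioDeviceCaps GetCaps() const = 0;
    // 2) React to device change notifications (hot-swapping).
    virtual void OnDeviceChanged() = 0;
    // 3) Submit a fully mixed buffer to the audio device.
    virtual void SubmitMixedBuffer(const std::vector<float>& Buffer) = 0;
};

// A do-nothing backend, the kind of thing a new platform bring-up or an
// automated test might start from.
struct FNullMixerPlatform final : IAudioMixerPlatform {
    FAudioDeviceCaps GetCaps() const override {
        FAudioDeviceCaps Caps;
        Caps.SampleRate = 48000;
        Caps.NumChannels = 2;
        Caps.DeviceName = "Null Device";
        return Caps;
    }
    void OnDeviceChanged() override {}
    void SubmitMixedBuffer(const std::vector<float>&) override {}
};
```

Compare this to the six deep per-platform implementations listed earlier: the win is that everything that used to live behind those APIs moves up into shared code.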

So, now that you hopefully have a clearer idea of what it means, I can give you some clarity on the current status.

For 4.15, I’ve managed to get the audio mixer at full backwards compatibility (including a new hand-rolled multi-platform reverb and EQ written by myself).

We’re actively testing this code with our internal projects (hoping to maybe ship one of them with the audio mixer to fully dog-food it), and I’m working on preparing a GDC demonstration of some of the newer features this code will be able to support (e.g. 3rd-party effects, project-level effects, dynamic DSP graphs in a new submix graph editor, real-time synthesis, etc.).

It’s our hope that for 4.16, which is after GDC, we’ll have the major platforms implemented (if not all) and most of the issues worked out.

@Minus_Kelvin

Thanks for the detailed breakdown!

Could you please answer the particular question directly: should we expect Gear VR (Android) HRTF-spatialized audio with the Oculus Audio SDK (with all the functionality it provides) to be fully functional and production-ready by UE 4.16?

The Oculus Audio SDK is functional in the audio mixer right now (that’s what I mean by “full backwards compatibility”: it supports ALL features).

That means the Oculus Audio SDK is working in pure MULTIPLATFORM CODE. That also means that all we need to have Oculus work on X platform (when that platform’s back-end is written) is to simply load the Oculus Audio SDK DLL for that platform since the Oculus Audio SDK API is itself multi-platform.

So yes. If we implement Android by 4.16 (which is our goal), it will have Oculus Audio SDK support (assuming we can load the DLL).

EDIT: To be even more clear: our Android implementation does not currently support fundamental features like pitch shifting and normal (non-HRTF) 3D audio. Implementing the back end for Android (or any platform) will automatically bring all audio features to that platform.

Sounds like a plan. I am keeping my fingers crossed, as it is simply impossible to do any work for Gear VR without spatialized HRTF audio :frowning: Btw, that would be .so, not .dll, for Android :wink:
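As an aside, the .dll/.so distinction is the kind of detail a plugin loader has to resolve per platform (LoadLibrary on Windows, dlopen elsewhere). A hypothetical helper, with the library base name purely as an example:

```cpp
#include <cassert>
#include <string>

// Platform tag for illustration; a real engine would detect this at
// compile time rather than pass it in.
enum class EAudioPlatform { Windows, Mac, IOS, Android, Linux };

// Build the platform-specific binary name a dynamic library would be
// loaded under.
std::string GetPluginLibraryName(EAudioPlatform Platform, const std::string& Base) {
    switch (Platform) {
        case EAudioPlatform::Windows:
            return Base + ".dll";            // LoadLibrary()
        case EAudioPlatform::Mac:
        case EAudioPlatform::IOS:
            return "lib" + Base + ".dylib";  // dlopen() on Apple platforms
        default:
            return "lib" + Base + ".so";     // dlopen() on Android/Linux
    }
}
```

For example, `GetPluginLibraryName(EAudioPlatform::Android, "OculusSpatializer")` would yield `libOculusSpatializer.so` (the base name here is an assumption, not the SDK’s actual module name).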

Haha… can you tell I don’t develop on android much? Another major reason for the audio mixer is that I can do bad-*** features without having to be an expert on 9+ platforms.

I’m not sure how you can do anything on Android without any spatialization in general. It’s all 2D!

I don’t see any emojis, so I assume you aren’t kidding. While Android is naturally 2D and doesn’t need any HRTF audio, Gear VR is … 100% VR. So spatialized HRTF audio is a must-have thing there. So, assuming you aren’t being sarcastic, Gear VR is this: OK | Oculus

That was a statement of agreement. Non-sarcastically. And I was extending the sentiment further: our Android audio (with or without Gear VR) doesn’t even have basic 3D audio (i.e. normal 3D panning), let alone HRTF. It also doesn’t have pitch shifting! I should point out that this isn’t our implementation’s fault: the library we’re using, OpenSL ES, fundamentally doesn’t support it in the profile (Phone) we build for on Android.

From: https://www.khronos.org/registry/sles/specs/OpenSL_ES_Specification_1.0.1.pdf

The decision to build for the Phone profile is outside my domain, but I believe it’s to maximize UE4’s compatibility across Android devices. Most areas of UE4 on Android are handled without running into low-level Android API restrictions; audio is one of the exceptions (which the audio mixer will fix).
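This is also why the mixer sidesteps the OpenSL ES limitation entirely: once mixing happens in shared code, pitch shifting stops being a platform feature at all and becomes playback-rate resampling inside the mixer. A minimal linear-interpolation sketch (illustrative only; a real engine would use a higher-quality resampler):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Resample a mono buffer at a given pitch ratio. Pitch > 1 reads through
// the source faster (higher pitch, shorter output); Pitch < 1 does the
// opposite.
std::vector<float> ResampleForPitch(const std::vector<float>& Input, float Pitch) {
    std::vector<float> Output;
    if (Input.size() < 2 || Pitch <= 0.0f) {
        return Output;  // nothing sensible to do
    }
    float Pos = 0.0f;
    while (Pos < static_cast<float>(Input.size() - 1)) {
        const std::size_t i = static_cast<std::size_t>(Pos);
        const float Frac = Pos - static_cast<float>(i);
        // Linear interpolation between neighboring samples.
        Output.push_back(Input[i] * (1.0f - Frac) + Input[i + 1] * Frac);
        Pos += Pitch;
    }
    return Output;
}
```

With a ramp input of 8 samples and a pitch of 2.0, this reads every other sample and produces 4 output samples, which is exactly the “play faster, sound higher” behavior OpenSL ES’s Phone profile can’t provide.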

I appreciate the link to oculus’ gear vr API :stuck_out_tongue: Although I don’t develop on android personally much, I do know about it.

Ah, got it. Thanks for further explaining it. Well, I keep my fingers crossed for 4.16 :slight_smile: Meanwhile I’ll try rolling back to older FMOD and Audio SDK (the ones that worked with 4.12 ).

Just wanted to say: after seeing the GDC 2017 talk on the new audio engine, I’m super excited to play with it!! Thanks for the hard work here!