I deleted my previous post because it bumped this thread to the top, but to reiterate the main idea: while it's awesome to have built-in support for the Oculus Spatializer, UE4 still doesn't give audio its own thread, so there's no dedicated timing clock for tempo-based synchronization of multiple audio events or for triggering along a musical timeline. Even if you built a custom trigger system in Blueprints, there is no way to get precise musical timing in the engine independent of the tick/FPS clock. So even though audio events will be able to use the 3D positioning of the Oculus plugin, middleware will still be needed to synchronize tempo-based audio musically.
Well I’m attempting to use FMOD, as described in the other thread.
I don’t know if the master source branch includes tempo-based timing, since I’m not experienced enough yet to work in the master branch and haven’t heard any recent news about the upcoming audio redesign. I’d be surprised if it were included in 4.8, though delighted if it were.
I don’t recall hearing it mentioned. I’ll be compiling and running it shortly, I’ll let you know if I see it there. Hopefully Nick Whiting is following this thread and can respond in a more official capacity.
I’ve run the latest 4.8. The spatialized audio is a huge improvement, although I can’t always tell which direction the audio source is coming from.
Unfortunately it doesn’t seem to render in the GearVR properly. I get a strange rainbow effect.
It worked once before, and the performance improvements were really impressive. I’m going to try the promoted branch tonight. Hopefully someone from Epic sees this, but I don’t blame them for not wanting to support the master branch on the forums.
Really excited for 4.8 now; you guys are awesome! Also, my mind is blown by Vlacho’s talk. Does anyone know whether some of the features not covered by Nick W would be considered (e.g. normal-map mipping/geometry specular aliasing, anisotropy, stereo reflections, etc.)? I believe Nick said that UE4 has its own methods for shading, but I’d like to hear more on this. Is Epic looking at achieving the same features/level of quality using different techniques?
I would be interested to hear from a UE4 dev on Brian Hook’s comment regarding their integration with UE4 only working on Windows and XAudio2. Will the Oculus integration in 4.8 be compatible with Android, for deployment to GearVR?
Yeah, I didn’t hear back from them either. I had this exact same thing happen last weekend and got a response late Tuesday night here, which would be Wednesday there. They said that they were away for a few days around the weekend, so maybe we will hear something on Monday.
Have a good and safe journey, man. Will keep you posted if I hear anything.
I totally agree that we need to have audio running on a separate thread. Not just that, but we need to have sample-accurate timing. You can see my recent (informal!) post summarizing my current plan of attack with the new audio engine stuff. Unfortunately, getting something like this to work with the existing audio engine would be really tricky since the code was fundamentally written in a way that isn’t thread safe. Data and pointers are shared across many systems. It’s possible to do, but it would take quite a bit of time. Since we want to redesign UE4 audio and replace the existing audio tech for lots of reasons anyway (i.e. not just because of the threading problem), we decided it’s not really a good strategy to do all that work only to just rip it out shortly afterward.
But yeah, a primary goal of my new stuff is to support proper event timing – not just for music, but for pretty much all audio-related things. As I’m sure every sound designer would agree, timing is really important in audio.
My first game gig was writing procedural music (I was one of the composers on Spore) so I deeply appreciate this issue and want to make UE4 the best game engine for procedural and interactive music and sound design.
I implemented the Oculus SDK for our GDC VR demo this year – it was a trial-by-fire feature for me my first couple weeks of working at Epic.
“Oh hey… welcome aboard! So, we have this GDC demo coming up and uh, we would love to have Oculus’ spatialization plugin working for it… think you can do it by the end of the week? By the way, our programmers who’ve been doing audio support said they don’t think it’s possible but maybe you can figure something out?”
Since our GDC VR demo was for PC, I implemented the Oculus plugin as an XAPO (an XAudio2 audio processing object) in our XAudio2 code and made it an “optional” plugin. It was originally intended as an experimental GDC-only feature for our VR demos, but it worked so well that we decided to keep it and integrate it into main, even though it was only implemented on PC.
Music to my ears. I’d originally written “sample-accurate timing” in the post of mine that you quoted, then thought better of it and took it out, as if it were asking too much from a game engine. But hell yeah, man. That’s the stuff right there.
Between reading your comments here and watching Zak’s recent Twitch stream, I am super excited and inspired about what’s to come for audio in UE4. Done to the degree you guys are describing, this could be pretty revolutionary in the digital audio field, while opening Unreal up to becoming an advanced tool or instrument for artistic expression with sound in virtual spaces.
Nice! I understand; that all makes sense. Dude, you’re doing it, man. You’re out there on the frontier, forging the new path, building the future. I’m so behind you and support you in all of this. Thank you for your work and artistry, Aaron.