AudioMixerSource doesn't work with Media Framework

Sorry you’re having so much trouble with the Editor. We actually just started an initiative to get the macOS Editor stable and to feature parity with Windows. It will take a while, but we’re actively working on it.

Could you point me to the commits that make the required changes in AudioMixer?

Also, MediaSoundWave has been removed and replaced with a procedural sound component. I may not even need your changes, but I’d like to take a look anyway.

Here you go

Right, I’m aware - which is why I haven’t bothered to clean this up much, or at all. Also, unfortunately, some changes to AvfMedia were submitted to the 4.17 release branch which I never bothered to merge since they weren’t relevant to us.

Hey!

These changes to the AudioUnit backend look good, and we’re in the process of checking them out now. The multi-suspend and divide-by-zero case on init are particularly concerning. I’m surprised we missed these during testing; I’ll see if we can add more rigorous mobile testing for audio. Right now QA is having to do double-duty testing (old and new audio engine on 9 platforms), and I think this just slipped through the cracks.

I have a question on the change: I’m not an iOS expert, but did you actually run into the case where the number of bytes requested by the callback changes between callbacks? The docs imply it’s possible, but we weren’t able to find a case where it happened. It would be useful to know how you triggered that case so we can figure out how to deal with it. Also, your code to handle this doesn’t deal with buffer truncation. If we don’t fully consume a generated buffer in a given callback, I’d expect to need a loop in the next callback, and to call ReadNextBuffer() only once we’ve fully consumed the previously generated buffer (if we don’t already have a submitted buffer pointer); otherwise we’d get underruns/discontinuities.
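To be concrete about the loop I mean, here’s a rough self-contained sketch (the names like CallbackFeeder and NextBufferFn are placeholders, not the engine’s types): the callback fills exactly the bytes requested, carries any unconsumed remainder of the last generated buffer into the next callback, and only fetches a new buffer once the previous one is fully consumed.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <cstring>
#include <functional>
#include <vector>

// Hypothetical stand-in for the decode side: each call returns the next
// fully generated PCM buffer (analogous to ReadNextBuffer()).
using NextBufferFn = std::function<std::vector<uint8_t>()>;

class CallbackFeeder {
public:
    explicit CallbackFeeder(NextBufferFn InNext) : Next(std::move(InNext)) {}

    // Fill exactly BytesRequested bytes, however many generated buffers that
    // spans. The loop is what avoids discontinuities when the requested size
    // doesn't match (or changes relative to) the generated buffer size.
    void FillCallbackBuffer(uint8_t* Out, size_t BytesRequested) {
        size_t Written = 0;
        while (Written < BytesRequested) {
            if (Offset >= Current.size()) {
                // Only fetch once the previous buffer is fully consumed.
                Current = Next();
                Offset = 0;
                if (Current.empty()) {
                    // Underrun: pad the remainder with silence.
                    std::memset(Out + Written, 0, BytesRequested - Written);
                    return;
                }
            }
            const size_t ToCopy =
                std::min(Current.size() - Offset, BytesRequested - Written);
            std::memcpy(Out + Written, Current.data() + Offset, ToCopy);
            Offset += ToCopy;
            Written += ToCopy;
        }
    }

private:
    NextBufferFn Next;
    std::vector<uint8_t> Current; // last generated buffer
    size_t Offset = 0;            // bytes of Current already consumed
};
```

The carried-over Offset is the piece I think the current patch is missing.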

We appreciate the work and apologize for it not being perfect out of the box yet.

We opted to send it out in 4.17 under experimental vs waiting until 4.18. It’s not internally exercised (or fully tested, really) as we’re not using this brand new backend on any of our projects yet. Of course, this is why it’s released as “experimental”. UE4 releases are constant and juggernaut-like, and as you can imagine, it’s hard to roll out a huge system refactor (especially when it’s just one programmer working on it while also supporting 6 internal games and many licensees) because the work spans many releases. We also prefer getting works-in-progress out to devs and licensees ASAP so they can give feedback while it’s being worked on (and help us shake out any bugs). Hopefully it won’t be too many more releases before we take it out of experimental.

I’ll add you as the GitHub PR credit, so you should get a shout-out in our next release. I’m going to try to push to get these fixes into a 4.17.2 hotfix if it’s not too late.

Turns out AudioMixerAudioSource and AudioMixerSourceDecode are hardcoded to use USoundWaveProcedural. I was able to get the audio mixer to also support UMediaSoundWave by simply generalizing that to USoundWave, although I also had to set the flag bCanProcessAsync = true on UMediaSoundWave for some reason.
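The shape of the change, in a stripped-down sketch (these are illustrative stand-ins, not the actual engine classes): the mixer path now calls through the base-class interface instead of requiring the procedural subclass specifically, and the async decode path is gated on the bCanProcessAsync flag, which is why UMediaSoundWave needed it set.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative stand-ins for USoundWave / UMediaSoundWave, not engine code.
struct SoundWave {
    bool bCanProcessAsync = false; // must be true for the async decode path
    virtual ~SoundWave() = default;
    virtual int32_t GeneratePCMData(uint8_t* Out, int32_t MaxSamples) = 0;
};

struct MediaSoundWave : SoundWave {
    MediaSoundWave() { bCanProcessAsync = true; } // the extra flag the fix needed
    int32_t GeneratePCMData(uint8_t* Out, int32_t MaxSamples) override {
        for (int32_t i = 0; i < MaxSamples; ++i) Out[i] = 1; // dummy samples
        return MaxSamples;
    }
};

// Before the fix this effectively required the procedural subclass; taking
// the base type lets any wave that opts into async processing through.
int32_t MixerReadSource(SoundWave& Wave, uint8_t* Out, int32_t MaxSamples) {
    if (!Wave.bCanProcessAsync) return 0; // would fall back to the sync path
    return Wave.GeneratePCMData(Out, MaxSamples);
}
```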

Aha! Yeah, somebody else ran into this with the audio mixer too. I should have realized this was possibly the source of the issue. Thanks for getting back to us after you pinpointed it.

In general, it makes more sense not to have GeneratePCMData virtual on USoundWave, since the whole point of USoundWaveProcedural is to allow one to generate PCM data. In fact, it’s preferable for people not to override GeneratePCMData at all and to let the USoundWaveProcedural class do the work for you. There’s a new callback function that you can override which just feeds audio to an output buffer.
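The pattern, sketched with hypothetical names (this isn’t the engine API, just the shape of it): instead of overriding a virtual and managing buffers yourself, you hand the procedural class a fill callback, and it owns the output buffer and underflow handling.

```cpp
#include <cstdint>
#include <functional>
#include <vector>

// ProceduralSource is an illustrative stand-in, not an engine class.
class ProceduralSource {
public:
    // The callback fills NumSamples floats into Out and returns the count
    // it actually wrote; the class handles everything else.
    using FillFn = std::function<int32_t(float* Out, int32_t NumSamples)>;

    void SetFillCallback(FillFn In) { Fill = std::move(In); }

    // The class, not the user, owns the output buffer.
    const std::vector<float>& Render(int32_t NumSamples) {
        Scratch.assign(NumSamples, 0.0f); // pre-zeroed: a short fill leaves silence
        if (Fill) {
            Fill(Scratch.data(), NumSamples);
        }
        return Scratch;
    }

private:
    FillFn Fill;
    std::vector<float> Scratch;
};
```

The user code shrinks to just the callback body, and buffer lifetime and underflow behavior stop being the user’s problem.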

The point of the new synth component is to make it ultra easy to make new procedural audio without worrying about the details of buffer management or even thread-safety (I have a base-class util that handles enqueuing render thread commands).
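The thread-safety part of that base-class util boils down to a command queue; here’s a minimal sketch under assumed names (SynthCommandQueue is not an engine class): game-thread code enqueues closures, and the render thread executes them at a safe point, such as the start of each buffer callback, so synth state is only ever touched on the render thread.

```cpp
#include <functional>
#include <mutex>
#include <queue>
#include <utility>

// Illustrative command queue, not engine code.
class SynthCommandQueue {
public:
    // Callable from any thread (e.g. the game thread setting a parameter).
    void Enqueue(std::function<void()> Command) {
        std::lock_guard<std::mutex> Lock(Mutex);
        Commands.push(std::move(Command));
    }

    // Called only from the render thread, at a buffer boundary.
    void PumpCommands() {
        std::queue<std::function<void()>> Pending;
        {
            std::lock_guard<std::mutex> Lock(Mutex);
            std::swap(Pending, Commands); // minimize time under the lock
        }
        while (!Pending.empty()) {
            Pending.front()();
            Pending.pop();
        }
    }

private:
    std::mutex Mutex;
    std::queue<std::function<void()>> Commands;
};
```

Swapping the whole queue out under the lock keeps the render thread from ever blocking on a game thread that’s mid-enqueue.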

A major gotcha with media player in shipped builds is that it runs very differently on different platforms. It doesn’t use GeneratePCMData on every platform. For example, on PS4 it feeds audio directly to the audio device rather than through the audio engine, so it can’t be spatialized or get volume attenuation in the sound class graph, etc.

For the above reasons, as I said, MaxP is refactoring his media player to use the new synth component instead.

Minus_kelvin,
I agree USoundWave appears to be a big pile of hacks - an improved design for procedural sound generation is welcome. Thanks. And for now we’re happy that we’ve achieved our goal of having 3D audio with the Media Framework on iOS with 4.17 - thanks in part to the audio mixer.

Minus_kelvin,
Sorry, this was just a quick hack to get something running. IIRC I was still seeing errors in setting the size of the buffer (setPreferredIOBufferDuration) even after avoiding the divide by zero. That may be the root of the problem. I can’t say we’ve tested this much either at this point. I’ll let you know if we run into more problems. Obviously, if it wasn’t so onerous to do UE4 development on Mac/iOS, I could offer more help - and your QA would probably be a lot more productive too. So the Editor improvements mentioned by gmpreussner are welcome - please don’t forget to include iOS deployment as part of that! While I was at Google we worked with Jack P and B to get Android deployment sorted - iOS needs similar love.
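For reference, the divide-by-zero guard in my hack was roughly this shape (ComputePreferredBufferSeconds is a made-up helper, not the actual code): the duration handed to setPreferredIOBufferDuration is frames divided by sample rate, which blows up if the session’s sample rate is queried before it reports a valid value, so I fall back to an assumed default.

```cpp
#include <cassert>

// Hypothetical helper illustrating the guard; not engine or platform code.
double ComputePreferredBufferSeconds(int NumFrames, double DeviceSampleRate) {
    // Assumed fallback when the audio session hasn't reported a rate yet.
    const double kFallbackSampleRate = 48000.0;
    const double Rate =
        (DeviceSampleRate > 0.0) ? DeviceSampleRate : kFallbackSampleRate;
    return static_cast<double>(NumFrames) / Rate;
}
```

Even with that guard, the preferred duration seemed to be rejected sometimes, so the fallback only papers over the init-order issue rather than fixing it.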