When 4.16 preview was released, the notes on the forum said this about the new audio engine:
‘The new Unreal Audio Engine is available for testing on PC, Mac, iOS, and Android.’
However, in some other posts I think I saw that it has only really had limited testing on Windows so far, and my own attempts to activate it on my Mac via the ini file settings result in a crash when starting the editor.
So I was wondering whether it is supposed to work on Mac yet, and if not, whether that is planned for 4.16 or not until a later engine version.
Yes, the non-PC back-ends were not implemented by me and are definitely experimental/early-access, as they’ve not been tested except by the contractor who implemented them. They were not part of the normal QA testing.
They’re really “first passes” on the back-ends, and we didn’t have enough time before 4.16’s release to test them properly. We have one dedicated audio QA, and he was making sure the old audio engine wasn’t broken or regressing while also testing that at least the PC back-end of the new audio engine was working for 4.16.
However, I am currently going through the already-implemented back-ends and getting them working.
Apologies for the inconvenience! It’s tricky rolling out a new audio engine in a massive engine used by so many games/licensees across so many platforms.
Thanks for the info and no need to apologise at all, it was only the information available at the start of the 4.16 preview thread that confused me a little, and I do have a windows pc I can use in the meantime.
Now that 4.16 is out proper, I note that the final 4.16 release notes still make it sound like this stuff is supposed to work on macOS! Other platforms are more clearly identified in the release notes as not really being ready yet. Given that the errors I have on the Mac are about missing files, I assume it doesn’t actually work for anyone on the Mac at all, or have I made a bad assumption? If it doesn’t work for anyone on macOS, wouldn’t it make sense to change the release notes?
SEGV_MAPERR at 0x0
Audio::FMixerSubmix::DownmixBuffer(int, TArray<float, FDefaultAllocator> const&, int, TArray<float, FDefaultAllocator>&) Address = 0x124d47baa (filename not found) [in UE4Editor-AudioMixer.dylib]
Audio::FMixerSubmix::ProcessAudio(TArray<float, FDefaultAllocator>&) Address = 0x124d1aa47 (filename not found) [in UE4Editor-AudioMixer.dylib]
Audio::FMixerSubmix::ProcessAudio(TArray<float, FDefaultAllocator>&) Address = 0x124d198dd (filename not found) [in UE4Editor-AudioMixer.dylib]
Audio::FMixerDevice::OnProcessAudioStream(TArray<float, FDefaultAllocator>&) Address = 0x124d17bd2 (filename not found) [in UE4Editor-AudioMixer.dylib]
non-virtual thunk to Audio::FMixerDevice::OnProcessAudioStream(TArray<float, FDefaultAllocator>&) Address = 0x124d1bdb0 (filename not found) [in UE4Editor-AudioMixer.dylib]
Audio::IAudioMixerPlatformInterface::Run() Address = 0x124d0e6ce (filename not found) [in UE4Editor-AudioMixer.dylib]
FRunnableThreadPThread::Run() Address = 0x108740b90 (filename not found) [in UE4Editor-Core.dylib]
FRunnableThreadPThread::_ThreadProc(void*) Address = 0x1086fcd00 (filename not found) [in UE4Editor-Core.dylib]
_pthread_body Address = 0x7fffb87859af (filename not found) [in libsystem_pthread.dylib]
_pthread_body Address = 0x7fffb87858fb (filename not found) [in libsystem_pthread.dylib]
thread_start Address = 0x7fffb8785101 (filename not found) [in libsystem_pthread.dylib]
At this stage the new Unreal Audio Engine has not been tested on anything but PC, so the release notes are likely in error for now. But we’re working diligently on the back-ends, and we’re excited because all of these features are part of multiplatform code.
For what it’s worth, the modular and granular synth stuff is working fine for me on El Capitan 10.11.6 with the latest 4.16 release and either AudioMixerCoreAudio in the .ini or the -audiomixer switch on the command line.
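For anyone else who wants to try this, the ini route looks roughly like the snippet below. I’m quoting the section and key names from memory, so treat them as an assumption and check them against the config files shipped with your engine version:

```ini
; Config/Mac/MacEngine.ini (section/key names from memory; verify for your version)
[Audio]
AudioMixerModuleName=AudioMixerCoreAudio
```

Alternatively, launching the editor with the -audiomixer command-line switch enables the new mixer without any ini change.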
Sorry that isn’t much help, but at least it answers the question of whether it works for anyone on Mac at all! My biggest problem is that it doesn’t choose the system default output device, defaulting instead to the Mac’s built-in audio (which I’ve asked about in a different thread), but what’s been said in this thread makes me think that some Mac-specific options for that might arrive in the future.
Are you on Sierra, Steve? Might be worth trying on ElCap if you have an old bootable backup lying around. I’d be curious to know if upgrading to Sierra will scupper the 4.16 audio stuff…
Thanks, that is interesting to know. I am on Sierra. I don’t think I have the opportunity to try again with El Capitan at the moment; I did have it on a drive on this machine at one point, but I think I wiped it recently.
Just remembered that I have access to a Mac mini that might be on El Capitan. I will investigate this over the weekend, and might even take a look at the relevant source code to see if it’s a trivial problem.
I tend to be around a year late to the party with macOS upgrades - takes ages for drivers, DAWs and plugins to be updated/validated/proven, and I’ve become paranoid about it through bitter experience. If I get a chance to do a test upgrade on an external disk any time soon, I’ll report back on whether 4.16’s audio engine breaks!
Oops, went on a wild goose chase setting up the Mac mini with UE4, only to realise it doesn’t support Metal, so I can’t run recent versions of UE4 on it anyway! But I did find my hard drive with El Capitan still on it for my main machine, so I am presently going down that route to see if I can understand the problem better.
Good stuff! By the way, after every few hours of happily using the new audio mixer I invariably get a crash when shutting down the editor (save then Cmd+Q); crash report is similar, though not identical, to what you pasted above. So it seems like it’s working for me just by the skin of its teeth…
I think this is caused by my Focusrite 6i6 audio device. UE4 tries to use it because I completely butchered the driver/kext for the onboard sound, so apps that do not honour the default audio device setting use my USB Focusrite device instead of the onboard one. But it seems the new audio stuff in UE4 can’t handle the number of channels this device presents.
I will try to find a workaround for now but it feels like this area needs improvement on the UE4 side.
Thanks for doing the test! I wouldn’t have been able to test with Sierra for another week, so that’s handy to know. It’s probably time for me to upgrade anyway. So yeah, seems like audio device handling is very much a seat of the pants affair at the minute, and the ability to configure this (ideally in an ini, before launch) would solve both my aforementioned routing convenience problem and your crash-out problem.
I tried recreating my plists in /Library/Preferences/Audio, as they referenced about 15 different hardware interfaces that I’ve used at one point or another but no longer own, but it didn’t solve my problem; might solve yours, however? I wonder if you can lie in the Device Settings plist about the 6i6’s channel-count? A very inconvenient workaround (if it works) but it might get you up and running… Good luck!
I looked at some source code and my brain exploded! I will keep poking around though, and if I don’t figure out the issue with source channels then I will look for ways of changing which device it chooses to use, since hopefully that part of the code is a bit easier to wrap my head around.
I use the very same audio device with my Windows PC and UE4 with the new audio mixer without this issue, so I think that somewhere the Mac-specific code is providing incorrect information about this device’s capabilities, or the way the driver/OS reports this stuff is different and not being taken account of. I only use it for standard stereo output, and many of the output channels are just virtual ones for use with DAWs, S/PDIF that I don’t use, etc.

But since I don’t really understand what the wording ‘source channel’ actually means in UE4 at this stage of init, and the error says only 2 output channels, I’m still missing even some basic pieces of the jigsaw. Certainly as far as my macOS is concerned, despite the device showing 6 ins and 12 outs in the audio control panel, the speaker setup is configured as simple Stereo, with the left-front and right-front speakers set to go to monitor outputs 1 and 2 on the hardware. Perhaps UE4 is picking this bit up properly, which is why the error message says 2 outputs, but I don’t know where it is getting the 12 from; some other part of the code perhaps sees all 12 outputs instead of just using the stereo OS speaker setup.
I am slowly becoming more familiar with some of the source code. I have established that there is code which successfully determines which audio device is set as the default in macOS; I have not yet established why this information isn’t being used properly. I may as well spend a bit more time on this today, since if I can get that bit to work I can probably use the Soundflower 2-channel virtual device as the output and then use Soundflowerbed to pass the audio from that to my actual sound device, getting round the problem of the Audio Mixer not being able to downmix to a device that has too many streams for the current UE4 code to handle (>8, which is the cause of the crash I’ve experienced).
OK, the issue with the default macOS audio device not being used is caused by InitializeHardware in AudioMixerDevice.cpp. It explicitly sets the device array index to 0 (unless DefaultDeviceName is set or VR audio is being used). The default device does seem to be index 0 on Windows, but this is not the case on the Mac, and this function isn’t making use of the DefaultDeviceIndex that is set in the Core Audio-specific code.
I don’t have a proper fix for this at the moment, but I know the index of Soundflower on my Mac, so I set that in code instead of 0, and now I can listen to the sounds of the synth on my Mac. Wahey!
OK, I believe I have a proper fix. I think the method used is present in the other platform-specific mixer code already, so I suspect it won’t break anything for non-Mac users, but I haven’t actually checked.
AudioMixerDevice.cpp (line 115 on 4.16 branch, line 121 on current master branch):
Should be: AudioMixerPlatform->GetDefaultOutputDeviceIndex(OpenStreamParams.OutputDeviceIndex); // Default device
Like I said before, this problem didn’t show up on Windows because the default audio device always seems to be first in the array on that platform, so all GetDefaultOutputDeviceIndex does on Windows is return 0 anyway; but this is not so on every platform.
Please try to get this into 4.16.1. Sorry that I haven’t learnt how to do GitHub pull requests yet, so I am using the forum instead.
Hi Steve, sorry for the delay in responding - been away. Wow, great detective work! So have you been able to rebuild the engine with this change? Is that even possible?