I’ve been trying to test out the new AudioThread class in 4.13 so that I can incorporate it into my synthesis program.
Currently, whenever I generate a sound, it tanks the framerate. I have added an AudioThread to the class, but I’m still getting the same results.
Diving into AudioThread’s code, I came across this:
else if (GIsEditor)
    UE_LOG(LogAudio, Warning, TEXT("Audio threading is disabled in the editor."));
So I can’t tell whether my code is even working in the editor? Why is that?
Anyway, the tooltip for GIsEditor says that we can use GWorld->HasBegunPlay, but it doesn’t say how or where to use it.
How do I get the AudioThread running in Editor for testing purposes?
This version of the audio thread feature doesn’t run in threaded mode in the editor, since the editor supports dynamic modification of UObjects (i.e. both writes and reads). Audio thread mode will only be enabled in non-editor builds (e.g. running with -game or in a packaged game). It is currently checked in as disabled in the BaseEngine.ini file, since we ran into some last-minute bugs/issues with the feature before the 4.13 deadline. To allow the audio thread to run in non-editor builds, make sure UseAudioThread=True is set.
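For reference, the setting is a one-line config change. Note the [Audio] section name below is my assumption from memory; search your build’s BaseEngine.ini for where UseAudioThread is actually declared before editing:

```ini
; BaseEngine.ini, or an override in your project's DefaultEngine.ini
[Audio]
UseAudioThread=True
```

Remember this only has an effect in non-editor builds, per the above.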
Thanks for the quick response, and for the pointer to BaseEngine.ini and UseAudioThread=True. I would never have thought to look there.
Is there documentation on which functions need to run on the GameThread and which on the AudioThread? I’ve been bumping into check statements that seem to require one thread or the other. I’ve had to rewrite some of my method calls because a call would trace down to needing to be on both threads at some point, with no indication of where I needed to switch to the other thread.
Unfortunately, there isn’t any documentation. I’m not even sure what that documentation would look like.
Any function that accesses data needed on the audio thread has to use the new audio task pattern (which you should now see everywhere in the audio code) to copy that data over to the audio thread. Basically, anything that plays a sound, sets a sound parameter, or uses audio components will need to use tasks.
We tried to make things private that were formerly public to help avoid these sorts of problems, and if you’re not doing anything super custom, it should be fine. Unfortunately, this is a really difficult change, since there are about 20 years of code written with most audio state public.
So now that 4.13 is released, do we still have to cook a build every time we want to test something on the AudioThread, or can we test in the editor now?
Unfortunately no, you still can’t test in the editor: the audio thread code was intentionally chosen to run single-threaded while the editor is running, since the audio engine uses audio data that the editor can write to.
We’d like to refactor the audio engine so that all UObject/UAsset data lives outside the audio engine and the audio thread operates only on proxy objects, but that would have taken much longer (essentially a complete rewrite).
In case you’re interested in engineering resource priorities with respect to audio tech: I’m currently working on a complete rewrite of the audio engine backend to remove our current deep dependencies on platform DSP APIs. Once that’s done, I may return to the audio threading question and see if we can make everything async.