I am on the verge of completing a 3D audio engine implemented in C++ (for my master's thesis); however, I have no GUI nor an actual game to use it with.
As such, I was hoping to use it with the Unreal Tournament made with UE 4.3 (posted on GitHub).
So what I want to do is replace the current audio engine of UE with mine and benchmark it for different algorithms.
However, I don't have any experience with UE 4.3 (I have just gone through simple UE C++ tutorials).
So, where do I start? How can I replace the current audio engine?
When a player is playing on a map, all I need is the following:
Coordinates of the listener (player 0's coordinates),
Coordinates of the sound source,
Path of the sound file.
Thanks in advance,
PS: The current UT version on GitHub only has simple item pickups, and the bots are not even shooting back yet.
I’ve not looked at the audio subsystem at all yet, but I would say you’d probably want to start here: UnrealEngine/Engine/Source/Runtime/Engine/Public/AudioDevice.h, as well as UAudioComponent. Those should be a good jumping-off point to dig deeper into the audio abstraction and higher into the game layers. If you’re looking to use UT as your test case, you’ll probably replace the implementation of AudioDevice, and at minimum you’ll need to modify UAudioComponent.
If you were to use your own components and subsystem, then you could add your new audio system as a plugin. However, you’d obviously have trouble using the existing UT code/assets without some conversion step. It seems like hacking up the existing audio framework might be the way to go?
Again, I only dug around enough to point you at those files/classes. I don’t really know what I’m talking about and would need to dig deeper, but that is where I’d start myself.
Either way, keep posting your progress in the work-in-progress sub; I’d love to see how you make out.
As Kyle said, AudioDevice is the entry point to the audio system. Wholesale replacement of the audio system isn’t well suited to plugins at the moment, though that is something I’d like to get to eventually. That said, there are a few approaches you could take:
Replace the audio system end to end. This basically means modifying engine code: replacing the places that interact with the AudioDevice and pointing them at your own.
Put your audio system side by side with the existing one. This is the approach that Audiokinetic has taken with their Wwise integration. The key entry points (ticking the audio device and setting the audio listener) call into both the existing and the AK audio devices. They provide all of their own classes and do not use SoundWave, SoundCue, or AudioComponents; instead they have things like an AKAudioComponent (I believe; I’m going off memory here, as I haven’t reviewed the implementation recently, nor do I have a copy handy for reference).
Provide a new audio device subclass. This would involve making a subclass of FAudioDevice and providing implementations of the Device, SoundSource, and SoundBuffer such that they run through your audio engine and then push to the hardware. You can see how these interfaces work by looking at each of the platform audio devices (XAudio2, Core, ALAudio, etc.). You would then change the AudioDeviceModuleName to point to your audio device.