Hey guys, I’m not 100% sure how FMOD’s integration deals with multiple listeners, but the vanilla UE4 audio engine does support multiple listeners. The way UE4 audio deals with 3D audio is that sounds spatialize relative to the closest listener. If I recall correctly from when I programmed with FMOD in the past, this is the behavior of multiple listeners with FMOD too. This makes sense – otherwise, you’d get crazy double-triggered audio as each sound played once per listener.
If you peruse FAudioDevice in AudioDevice.cpp, for example, you will see that it has a Listeners array containing at least one default listener.
The way UE4 supports multiple listeners for local split screen is the following. The underlying audio device code assumes a single primary listener (0th index in the listener array) but positions sounds relative to their closest listener. In other words, a sound’s absolute emitter coordinate (absolute relative to the map coordinates) is used to find the closest listener. Then the sound is “rebased” so that its position relative to the primary listener is the same as its real position relative to the closest listener – all its distance attenuation, spatialization, etc., are effectively computed relative to that closest listener. But the underlying audio engine doesn’t know or care that it’s actually spatialized relative to a different listener than the primary listener. I didn’t design this system, but I think it’s actually pretty clever.
If you know C++, the code that does this rebasing is in ActiveSound.cpp:
// splitscreen support:
// we always pass the 'primary' listener (viewport 0) to the sound nodes and the underlying audio system
// then move the AudioComponent's CurrentLocation so that its position relative to that Listener is the same as its real position is relative to the closest Listener
const FListener& Listener = AudioDevice->Listeners[0];

int32 ClosestListenerIndex = 0;
if (AudioDevice->Listeners.Num() > 0)
{
    SCOPE_CYCLE_COUNTER(STAT_AudioFindNearestLocation);
    ClosestListenerIndex = FindClosestListener(AudioDevice->Listeners);
}
const FListener& ClosestListener = AudioDevice->Listeners[ClosestListenerIndex];

// SNIP

// if the closest listener is not the primary one, transform CurrentLocation
if (ClosestListenerIndex != 0)
{
    ParseParams.Transform = ParseParams.Transform * ClosestListener.Transform.Inverse() * Listener.Transform;
}
The multiple listeners are automatically updated based on the “viewport index” – i.e. the viewport on the world. Each viewport automatically has a listener. The code that does this is in GameViewportClient.cpp:
uint32 ViewportIndex = PlayerViewMap.Num() - 1;
AudioDevice->SetListener(ViewportIndex, ListenerTransform, (View->bCameraCut ? 0.f : GetWorld()->GetDeltaSeconds()), PlayerAudioVolume, PlayerInteriorSettings);
Where SetListener is implemented as:
void FAudioDevice::SetListener(const int32 InViewportIndex, const FTransform& InListenerTransform, const float InDeltaSeconds, class AAudioVolume* Volume, const FInteriorSettings& InteriorSettings)
{
    FTransform ListenerTransform = InListenerTransform;
    if (!ensureMsgf(ListenerTransform.IsValid(), TEXT("Invalid listener transform provided to AudioDevice")))
    {
        // If we were given a bad transform, fall back to identity so the listener at least stays functional
        ListenerTransform = FTransform::Identity;
    }

    if (InViewportIndex >= Listeners.Num())
    {
        UE_LOG(LogAudio, Log, TEXT("Resizing Listeners array: %d -> %d"), Listeners.Num(), InViewportIndex);
        Listeners.AddZeroed(InViewportIndex - Listeners.Num() + 1);
    }

    Listeners[InViewportIndex].Velocity = InDeltaSeconds > 0.f
        ? (ListenerTransform.GetTranslation() - Listeners[InViewportIndex].Transform.GetTranslation()) / InDeltaSeconds
        : FVector::ZeroVector;

    Listeners[InViewportIndex].Transform = ListenerTransform;
    Listeners[InViewportIndex].ApplyInteriorSettings(Volume, InteriorSettings);
}