Audio - Multiple listeners for split screen multiplayer?

Hi,

I’m working as a sound designer on a game in 4.8 that features split-screen local multiplayer. From what I can tell, there doesn’t seem to be any support for multiple audio listeners, so only player 1 can hear 3D sound. Ideally, each player would have their own listener. At the moment we’re having to use 2D sound as much as we can, but we’re reaching the point where that’s becoming very difficult. I’m working with FMOD, which definitely supports multiple listeners, so any mixing or polyphony issues could be handled from there.

So is there any support in 4.8 for multiple audio listeners? I’m hoping we’ve just missed something!

Thanks,
Matt

I too am interested in how to make audio work better in split-screen, preferably with Blueprints.

Hey guys, I’m not 100% sure how FMOD’s integration deals with multiple listeners, but the vanilla UE4 audio engine does support them. The way UE4 handles 3D audio is that sounds spatialize relative to the closest listener. If I recall correctly from programming with FMOD in the past, that’s the behavior with multiple listeners in FMOD too. This makes sense – otherwise you’d get crazy double-triggered audio, with each sound playing once per listener.

If you peruse, for example, FAudioDevice in AudioDevice.cpp, you will see that we have a Listeners array that has at least one default listener.

The way UE4 supports multiple listeners for local split screen is the following. The underlying audio device code assumes a single primary listener (0th index in the listener array) but positions sounds relative to their closest listener. In other words, a sound’s absolute emitter coordinates (absolute relative to the map coordinates) are used to find the closest listener. Then the sound is “rebased” relative to the primary listener – its distance attenuation, spatialization, etc., are all computed as if the closest listener were the primary one. The underlying audio engine doesn’t know or care that the sound is actually spatialized relative to a different listener than the primary one. I didn’t design this system, but I think it’s actually pretty clever.

If you know C++, the code that does this rebasing is in ActiveSound.cpp:

	// splitscreen support:
	// we always pass the 'primary' listener (viewport 0) to the sound nodes and the underlying audio system
	// then move the AudioComponent's CurrentLocation so that its position relative to that Listener is the same as its real position is relative to the closest Listener
	const FListener& Listener = AudioDevice->Listeners[ 0 ];

	int32 ClosestListenerIndex = 0;

	if (AudioDevice->Listeners.Num() > 0)
	{
		SCOPE_CYCLE_COUNTER( STAT_AudioFindNearestLocation );
		ClosestListenerIndex = FindClosestListener(AudioDevice->Listeners);
	}

	const FListener& ClosestListener = AudioDevice->Listeners[ ClosestListenerIndex ];

// SNIP

	// if the closest listener is not the primary one, transform CurrentLocation
	if( ClosestListenerIndex != 0 )
	{
		ParseParams.Transform = ParseParams.Transform * ClosestListener.Transform.Inverse() * Listener.Transform;
	}
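
For context, the FindClosestListener called above simply returns the index of the listener nearest to the sound. A minimal sketch of the idea (not the exact engine source, which lives on FActiveSound):

	// Sketch of the closest-listener search -- not the exact engine source.
	// Returns the index of the listener whose position is nearest to the
	// sound's world-space location.
	int32 FindClosestListenerIndex(const FVector& SoundLocation, const TArray<FListener>& Listeners)
	{
		int32 ClosestIndex = 0;
		float ClosestDistSq = MAX_FLT;

		for (int32 i = 0; i < Listeners.Num(); ++i)
		{
			const float DistSq = FVector::DistSquared(SoundLocation, Listeners[i].Transform.GetTranslation());
			if (DistSq < ClosestDistSq)
			{
				ClosestDistSq = DistSq;
				ClosestIndex = i;
			}
		}
		return ClosestIndex;
	}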

The multiple listeners are automatically updated based on the “viewport index” – i.e., the index of the viewport on the world. Each viewport automatically has a listener. The code that does this is in GameViewportClient.cpp:

	uint32 ViewportIndex = PlayerViewMap.Num() - 1;
	AudioDevice->SetListener(ViewportIndex, ListenerTransform, (View->bCameraCut ? 0.f : GetWorld()->GetDeltaSeconds()), PlayerAudioVolume, PlayerInteriorSettings);

Where SetListener is implemented as:

void FAudioDevice::SetListener( const int32 InViewportIndex, const FTransform& InListenerTransform, const float InDeltaSeconds, class AAudioVolume* Volume, const FInteriorSettings& InteriorSettings )
{
	FTransform ListenerTransform = InListenerTransform;
	
	if (!ensureMsgf(ListenerTransform.IsValid(), TEXT("Invalid listener transform provided to AudioDevice")))
	{
		// If we have a bad transform give it something functional if totally wrong
		ListenerTransform = FTransform::Identity;
	}

	if( InViewportIndex >= Listeners.Num() )
	{
		UE_LOG(LogAudio, Log, TEXT( "Resizing Listeners array: %d -> %d" ), Listeners.Num(), InViewportIndex );
		Listeners.AddZeroed( InViewportIndex - Listeners.Num() + 1 );
	}

	Listeners[ InViewportIndex ].Velocity = InDeltaSeconds > 0.f ? 
											(ListenerTransform.GetTranslation() - Listeners[ InViewportIndex ].Transform.GetTranslation()) / InDeltaSeconds
											: FVector::ZeroVector;

	Listeners[ InViewportIndex ].Transform = ListenerTransform;

	Listeners[ InViewportIndex ].ApplyInteriorSettings(Volume, InteriorSettings);
}

Hey Geoff, thanks for replying!

The FMOD integration has similar logic to UE4’s built-in audio for determining listener orientation from the player position. We then pass that info into the FMOD API.

Internally, FMOD’s support for multiple listeners works slightly differently from how Aaron describes the built-in audio. There isn’t anything special about listener 0. Instead, we have logic in our panner so that it supports being given multiple positions. The gain is calculated based on the closest listener, but there is also an envelopment that depends on the difference between the listeners’ relative orientations. That smooths over any changes in panning as the listeners move closer to or further from the emitter.
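
To illustrate the idea, here is a rough conceptual sketch (not FMOD’s actual panner code), using UE types for familiarity. The attenuation distance always comes from the closest listener, while an “envelopment” amount grows as the listeners’ local views of the emitter diverge, which widens the pan and smooths transitions:

	// Rough conceptual sketch only -- not FMOD's actual panner code.
	void ComputeMultiListenerPan(const FVector& EmitterPos, const TArray<FListener>& Listeners,
		float& OutAttenuationDistance, float& OutEnvelopment)
	{
		float ClosestDistSq = MAX_FLT;
		FVector FirstLocalDir = FVector::ZeroVector;
		float MinDot = 1.0f;

		for (int32 i = 0; i < Listeners.Num(); ++i)
		{
			const FVector ToEmitter = EmitterPos - Listeners[i].Transform.GetTranslation();
			ClosestDistSq = FMath::Min(ClosestDistSq, ToEmitter.SizeSquared());

			// Direction to the emitter in this listener's local frame
			const FVector LocalDir = Listeners[i].Transform.InverseTransformVector(ToEmitter).GetSafeNormal();
			if (i == 0)
			{
				FirstLocalDir = LocalDir;
			}
			else
			{
				MinDot = FMath::Min(MinDot, FVector::DotProduct(FirstLocalDir, LocalDir));
			}
		}

		// Gain is driven by the nearest listener only
		OutAttenuationDistance = FMath::Sqrt(ClosestDistSq);

		// 0 when all listeners agree on the emitter direction, rising toward 1
		// as their relative orientations diverge; used to widen the pan
		OutEnvelopment = 0.5f * (1.0f - MinDot);
	}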

However, a silly bug crept into the integration, so the number of listeners was always set to 1. This will be fixed in the next FMOD integration patch release.

Thanks guys! Brilliant news :)

Out of curiosity, was a system ever developed where audio time progresses but a channel isn’t used?

I’m currently working towards a system where I’d like audio to play based on a world time. The mathematics of working out what that time should be is fine, but I’m unable to establish where a piece of audio gets its “start playing again” call for the player entering the falloff.

I’d also like to know if there’s a simple way to separate the creation of an Audio Listener from GameViewportClient? I’d like to have a second Audio Listener without everything else that comes with the viewport client, as it’s simply going to be an abstracted second position that the player can listen from (hopefully with the ability to hear from both listeners simultaneously).

This should probably be a separate post since it’s not exactly related to split-screen.

Out of curiosity, was a system ever developed where audio time progresses but a channel isn’t used?

If you’re talking about virtual voices, then no. The new audio engine will support that, but that’s not going to be available for a while.
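
That said, the bookkeeping is simple enough to sketch yourself (hypothetical code, nothing in the engine does this): keep advancing a playback clock while no channel is in use, and seek to the wrapped offset when the sound becomes audible again:

	// Hypothetical virtual-voice bookkeeping -- not engine code. The clock
	// keeps advancing while the sound is culled; when the listener re-enters
	// the falloff radius, re-trigger the sound at the wrapped offset.
	struct FVirtualVoice
	{
		float PlaybackTime = 0.0f; // seconds since the sound logically started
		float Duration = 1.0f;     // length of the (looping) asset, set on init
		bool  bAudible = false;

		void Update(float DeltaSeconds, float DistanceToListener, float FalloffRadius)
		{
			PlaybackTime += DeltaSeconds; // time advances even with no channel

			const bool bNowAudible = DistanceToListener <= FalloffRadius;
			if (bNowAudible && !bAudible)
			{
				// Became audible again: restart at the correct point, e.g.
				// AudioComponent->Play(FMath::Fmod(PlaybackTime, Duration));
			}
			else if (!bNowAudible && bAudible)
			{
				// Left the falloff radius: stop the real voice, keep the clock
				// AudioComponent->Stop();
			}
			bAudible = bNowAudible;
		}
	};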

I’m currently working towards a system… …entering the falloff.

Not really sure what you’re talking about here…

I’d also like to know if there’s a simple way to separate the creation of an Audio Listener from GameViewportClient? I’d like to have a second Audio Listener without everything else that comes with the viewport client, as it’s simply going to be an abstracted second position that the player can listen from (hopefully with the ability to hear from both listeners simultaneously).

You probably don’t want to hear from two listeners at the same time. A “listener” is really only used to calculate spatialization – to get both the panning and the distance-attenuation values for a playing sound. Usually for split-screen, as mentioned here, the spatialization calculations are done from the “closest” listener. If a sound played for both listeners you would get a ton of double-triggering, and I guarantee it would not only sound very bad but also use more resources than you expect – i.e., 2x the number of voices.

As for detaching the listener from the GameViewportClient – UE4 doesn’t support this now, but it would be trivial to detach it or to make the listener an independent component attachable to any other component. I don’t think this would be a good idea in most cases, though, since you’d get issues like sounds being spatialized behind you while the sound source is visibly in front of you.
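
If you wanted to experiment with that in C++ anyway, here’s a rough, untested sketch using the SetListener call shown above. UMyListenerDriver and CachedAudioDevice are hypothetical, and how you obtain the FAudioDevice varies by engine version:

	// Untested sketch -- UMyListenerDriver is a hypothetical USceneComponent
	// subclass, not an engine class. Each tick it writes its own transform
	// into listener slot 1 via FAudioDevice::SetListener (shown above).
	// Note that GameViewportClient re-writes the slots it owns (one per
	// player viewport) every frame, so use an index beyond those.
	void UMyListenerDriver::TickComponent(float DeltaTime, ELevelTick TickType, FActorComponentTickFunction* ThisTickFunction)
	{
		Super::TickComponent(DeltaTime, TickType, ThisTickFunction);

		if (FAudioDevice* AudioDevice = CachedAudioDevice) // assumed to be cached elsewhere
		{
			// No audio volume and default interior settings, for simplicity
			AudioDevice->SetListener(1, GetComponentTransform(), DeltaTime, nullptr, FInteriorSettings());
		}
	}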

One common approach for “listeners” in third-person games is to separate the listener’s distance-attenuation and panning calculations: 1) do the distance attenuation based on the distance from sounds to the controlled character, and 2) do the panning based on the position of the camera (i.e., the GameViewportClient). This way you avoid the problem of sounds being too quiet when your camera is pulled away from your character (you’re always above the action), and you avoid the problem of sounds spatializing behind you when they’re actually in front of you (which is unsettling). We don’t support this mode in UE4 now, but I want to add it at some point. It should be easy.
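
As a rough illustration of that split (a sketch of the math only, not engine code):

	// Sketch only -- split-listener spatialization for a third-person camera.
	// Attenuation distance comes from the controlled character; the pan
	// direction comes from the camera's frame.
	void ComputeThirdPersonSpatialization(const FVector& SoundPos, const FVector& CharacterPos,
		const FTransform& CameraTransform, float& OutAttenuationDistance, FVector& OutPanDirection)
	{
		// 1) Attenuate by the sound's distance to the character, so pulling
		//    the camera back doesn't make the whole mix quieter
		OutAttenuationDistance = FVector::Dist(SoundPos, CharacterPos);

		// 2) Pan in the camera's local frame, so a sound you can see on
		//    screen never spatializes behind you
		OutPanDirection = CameraTransform.InverseTransformPosition(SoundPos).GetSafeNormal();
	}

The falloff curve would then consume the attenuation distance, while the panner consumes the direction.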

Thanks for getting back to me! I’m not sure at this point whether I should post a new question or respond here.

Old thread, I know, but how can I make sure that the listener for each player is not the camera but the character mesh? It gets really freaky to hear a sound at max volume only when the mesh is turned away from the sound origin…
Obviously I would like to keep the vanilla functionality of Unreal where it determines which player is closer to a sound, as described here: https://www.unrealengine.com/en-US/tech-blog/split-screen-audio-in-unreal-engine-explained
If anyone could help me I would be eternally grateful.
BTW, I’m working in Unreal 5.4 with Blueprints.