Dynamic Game OST System (Enabling FULL Audio Loop Control)

I noticed that the audio team changed the AudioComponent again. I must confess that this time the changes go beyond what I can master/fix/adapt myself, so maybe if I explain what I did, they can implement it "the right way" in the engine so everyone can use it.

Firing a sound and letting it play is the basic case, but when it comes to game music our industry offers developers too few features, maybe because programmers have a lot more contact with the graphics team (and so understand graphic artists' needs) than with musicians (who are, for the most part, third parties). When a partner composer joined my project to write the game theme, I saw how they work, and while simple fades between themes get called "dynamic music systems", that really doesn't come close to what we see (hear would be the word) in cinematic experiences, where the "parts" of a suite are written to the length of each sequence.

Getting this kind of relationship into a game (where what you hear helps tell what you see on screen) would be great, but syncing the many instruments needed to change the mood, and making them fit exactly on the composition's tempo, would be hell to manage. So I arrived at the idea of controlled full-movement "loops" between a non-looping "intro" and a non-looping "outro". This of course requires some coordination with your composer: he needs to write and record the "middle" movements of the suite in such a way that each can be played looping independently, and of course all of them must be allowed to transition into the outro.
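To make the idea concrete, here is a minimal sketch (all names hypothetical, not from any engine) of the transition rules such a suite imposes: the intro never loops and only leads into a loop movement, loop movements may repeat themselves, move between each other, or resolve into the outro, and the outro ends the piece.

```cpp
#include <cassert>

// Hypothetical segment kinds for a three-part suite:
// one intro, two loopable middle movements, one outro.
enum class Segment { Intro, LoopA, LoopB, Outro };

// Which segment the composition allows next at a loop boundary.
bool CanTransition(Segment From, Segment To)
{
    if (From == Segment::Outro)
        return false;                                     // the outro ends the suite
    if (From == Segment::Intro)
        return To == Segment::LoopA || To == Segment::LoopB; // intro leads into a loop
    if (To == Segment::Intro)
        return false;                                     // never back to the intro
    return true;                                          // loop -> loop, or loop -> outro
}
```

The point of the table is that every legal transition happens at a movement boundary, never mid-phrase, which is exactly what the callback described below makes possible.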


This offers a lot of possibilities while keeping the "integrity" of the original composition: for example, instead of simply fading out an exploration movement, we wait for it to finish and then play the outro. We also gain the possibility of combinations triggered by the most diverse events in the game…

Requirements…

To get this kind of synchronization with the game, we need a callback from the audio system each time a loop "loops", so we know exactly when the pass ended and can decide what happens next on our music track.
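As a sketch of that idea (class and method names here are made up for illustration, not engine API): a controller that is notified at every loop boundary and only then decides the next segment, so a requested outro always lands exactly on the end of a pass.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical game-side controller: it is told, via a callback, every time
// the current segment finishes one pass, and only then picks what plays next.
class MusicTrackController
{
public:
    std::vector<std::string> PlayLog;   // every segment started, in order
    bool bOutroRequested = false;

    void Start(const std::string& IntroName) { Play(IntroName); }

    // Gameplay asks for the outro; it is honored at the NEXT loop boundary.
    void RequestOutro() { bOutroRequested = true; }

    // Wire this to the audio system's "finished one pass" notification.
    void OnSegmentFinished()
    {
        if (bOutroRequested)
        {
            Play("Outro");
            bOutroRequested = false;
        }
        else
        {
            Play("Loop"); // keep looping the current movement
        }
    }

private:
    void Play(const std::string& SegmentName) { PlayLog.push_back(SegmentName); }
};
```

Driving it with `Start("Intro")`, two boundary notifications, a `RequestOutro()`, and one more notification produces the sequence Intro, Loop, Loop, Outro with no transition ever cutting a pass short.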


Going Unreal…

When I decided to implement this, the docs and forums pointed out that with the default nodes present in Sound Cues we would not get this result seamlessly, because sounds intended to loop are handled differently by the engine than the "fire once and forget" ones.

Strictly speaking (from what I could gather), sounds intended to loop are in some way pinned in memory, and on each loop the engine just pulls the "phonograph needle" back and keeps playing; sounds intended to play once, on the other hand, are loaded into memory (and thrown away) on each play. The problem is that, up to 4.14, only the play-once sounds fire the AudioComponent event telling us they finished playing. BUT the audio team has an internal middle-ground mode on the sound node that notifies exactly this (LOOP_WithNotification), so if we force-broadcast OnAudioFinished() on the AudioComponent playing the sound at that moment, we have a way to know when a loop ended while still avoiding the gap between sounds.
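In miniature, the trick looks like this. This is a toy model, not engine code: `FakeLoopVoice` and `OnBufferEnd` are invented names standing in for the wave instance and the moment the buffer reaches its end; the one real concept borrowed from the engine is that a loop-with-notification fires the "finished" delegate on every pass while playback continues.

```cpp
#include <cassert>
#include <functional>

// Toy model of the "force broadcast" trick (names are hypothetical).
struct FakeLoopVoice
{
    // Stands in for the AudioComponent's OnAudioFinished delegate.
    std::function<void()> OnAudioFinished;
    int PassesPlayed = 0;

    // Called by the (imaginary) mixer each time the buffer reaches its end.
    void OnBufferEnd()
    {
        ++PassesPlayed;
        if (OnAudioFinished)
        {
            OnAudioFinished(); // the "force broadcast": notify on EVERY pass
        }
        // In looping mode the needle is pulled back instead of stopping,
        // so playback continues seamlessly unless something changes it.
    }
};
```

The game-side listener bound to that delegate is what gets to decide, gap-free, whether the next pass is another loop or the outro.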

I got this working (up to 4.12; my last try on 4.14 ran into trouble with the new way of accessing audio components), here is a video showing it…

Also, if someone is interested in the code (although it is clearly a hack), here we go…

THE .H



#pragma once
#include "Sound/SoundWave.h"
#include "Sound/SoundNodeAssetReferencer.h"
#include "Components/AudioComponent.h"
#include "VFakeLoopSoundNodeWavePlayer.generated.h"

/**
 * 
 */
UCLASS(hidecategories = Object, editinlinenew, MinimalAPI, meta = (DisplayName = "Fake Loop Player"))
class UVFakeLoopSoundNodeWavePlayer :  public USoundNodeAssetReferencer //public USoundNodeWavePlayer //public  USoundNode
{
	GENERATED_UCLASS_BODY()
private:
	UPROPERTY(EditAnywhere, Category = WavePlayer, meta = (DisplayName = "Sound Wave"))
	TAssetPtr<USoundWave> SoundWaveAssetPtr;

	UPROPERTY(transient)
	USoundWave* SoundWave;

	void OnSoundWaveLoaded(const FName& PackageName, UPackage * Package, EAsyncLoadingResult::Type Result);
	
	TWeakObjectPtr<class UAudioComponent> AUComponentToFire;

public:
	UPROPERTY(EditAnywhere, Category = WavePlayer)
	uint32 bLooping : 1;

	// Begin USoundNode Interface
	virtual int32 GetMaxChildNodes() const override;
	virtual bool NotifyWaveInstanceFinished(struct FWaveInstance* WaveInstance) override;
	virtual float GetDuration() override;
	virtual void ParseNodes(FAudioDevice* AudioDevice, const UPTRINT NodeWaveInstanceHash, FActiveSound& ActiveSound, const FSoundParseParameters& ParseParams, TArray<FWaveInstance*>& WaveInstances) override;
#if WITH_EDITOR
	virtual FText GetTitle() const override { return FText::FromString("FakeLoopWavePlayer"); };
#endif
	// Begin USoundNode Interface

	// Begin UObject Interface
#if WITH_EDITOR
	virtual void PostEditChangeProperty(FPropertyChangedEvent& PropertyChangedEvent) override { Super::Super::PostEditChangeProperty(PropertyChangedEvent); };
#endif
	// End UObject Interface

	// Begin USoundNodeAssetReferencer Interface
	virtual void LoadAsset(bool bAddToRoot) override;// { Super::Super::LoadAsset(); };
	// End USoundNode Interface

	USoundWave * GetSoundWave() const { return SoundWave; }
	void SetSoundWave(USoundWave* SoundWave);

	void ParseSoundWaveWithNotifyLoop(USoundWave* SoundWaveInst, FAudioDevice* AudioDevice, const UPTRINT NodeWaveInstanceHash, FActiveSound& ActiveSound, const FSoundParseParameters& ParseParams, TArray<FWaveInstance*>& WaveInstances);
};

THE .CPP



#include "Sound/SoundNodeWavePlayer.h"
#include "VFakeLoopSoundNodeWavePlayer.h"

UVFakeLoopSoundNodeWavePlayer::UVFakeLoopSoundNodeWavePlayer(const FObjectInitializer& ObjectInitializer) : Super(ObjectInitializer)
{
}

void UVFakeLoopSoundNodeWavePlayer::LoadAsset(bool bAddToRoot)
{
	if (IsAsyncLoading())
	{
		SoundWave = SoundWaveAssetPtr.Get();
		if (SoundWave == nullptr)
		{
			const FString LongPackageName = SoundWaveAssetPtr.GetLongPackageName();
			if (!LongPackageName.IsEmpty())
			{
				LoadPackageAsync(LongPackageName, FLoadPackageAsyncDelegate::CreateUObject(this, &UVFakeLoopSoundNodeWavePlayer::OnSoundWaveLoaded));
			}
		}
	}
	else
	{
		SoundWave = SoundWaveAssetPtr.LoadSynchronous();
	}
}

void UVFakeLoopSoundNodeWavePlayer::OnSoundWaveLoaded(const FName& PackageName, UPackage * Package, EAsyncLoadingResult::Type Result)
{
	if (Result == EAsyncLoadingResult::Succeeded)
	{
		SoundWave = SoundWaveAssetPtr.Get();
	}
}

void UVFakeLoopSoundNodeWavePlayer::ParseNodes(FAudioDevice* AudioDevice, const UPTRINT NodeWaveInstanceHash, FActiveSound& ActiveSound, const FSoundParseParameters& ParseParams, TArray<FWaveInstance*>& WaveInstances)
{
	if (SoundWave)
	{
		// The SoundWave's bLooping is only for if it is directly referenced, so clear it
		// in the case that it is being played from a player
		bool bWaveIsLooping = SoundWave->bLooping;
		SoundWave->bLooping = false;

		//if (bLooping)
		//{
		FSoundParseParameters UpdatedParams = ParseParams;
		UpdatedParams.bLooping = bLooping;
		UpdatedParams.NotifyBufferFinishedHooks.AddNotify(this, NodeWaveInstanceHash);
		
		if (ActiveSound.GetAudioComponent() != NULL)
		{
			AUComponentToFire = ActiveSound.GetAudioComponent(); // HERE WE FIND AND ASSIGN "WHO" TO NOTIFY
		}

		ParseSoundWaveWithNotifyLoop(SoundWave, AudioDevice, NodeWaveInstanceHash, ActiveSound, UpdatedParams, WaveInstances);
		//}
		//else
		//{
		//	SoundWave->Parse(AudioDevice, NodeWaveInstanceHash, ActiveSound, ParseParams, WaveInstances);
		//}

		SoundWave->bLooping = bWaveIsLooping;
	}
}

bool UVFakeLoopSoundNodeWavePlayer::NotifyWaveInstanceFinished(struct FWaveInstance* WaveInstance)
{
	if (AUComponentToFire != NULL)
	{
		AUComponentToFire->OnAudioFinished.Broadcast(); // THIS DOES OUR TRICK...
	}

	return Super::NotifyWaveInstanceFinished(WaveInstance);
}

float UVFakeLoopSoundNodeWavePlayer::GetDuration()
{
	float Duration = 0.f;
	if (SoundWave)
	{
		if (bLooping)
		{
			Duration = INDEFINITELY_LOOPING_DURATION;
		}
		else
		{
			Duration = SoundWave->Duration;
		}
	}
	return Duration;
}

//#if WITH_EDITOR
//FString UVFakeLoopSoundNodeWavePlayer::GetTitle() const
//{
//	FText SoundWaveName;
//	if (SoundWave)
//	{
//		SoundWaveName = FText::FromString(SoundWave->GetFName().ToString());
//	}
//	else
//	{
//		SoundWaveName = FText::FromString("NONE");
//	}
//
//	FString Title;
//
//	//if (bLooping)
//	//{
//	//	FFormatNamedArguments Arguments;
//	//	Arguments.Add(TEXT("Description"), FText::FromString(Super::GetTitle()));
//	//	Arguments.Add(TEXT("SoundWaveName"), SoundWaveName);
//	//	Title = SoundWaveName.ToString();
//	//}
//	//else
//	//{
//		Title = Super::GetTitle() + FString(TEXT(" : ")) + SoundWaveName.ToString();
//	//}
//
//	return Title;
//}
//#endif

int32 UVFakeLoopSoundNodeWavePlayer::GetMaxChildNodes() const
{
	return 0;
}

void UVFakeLoopSoundNodeWavePlayer::ParseSoundWaveWithNotifyLoop(USoundWave* SoundWaveInst, FAudioDevice* AudioDevice, const UPTRINT NodeWaveInstanceHash, FActiveSound& ActiveSound, const FSoundParseParameters& ParseParams, TArray<FWaveInstance*>& WaveInstances)
{
	FWaveInstance* WaveInstance = ActiveSound.FindWaveInstance(NodeWaveInstanceHash);

	// Create a new WaveInstance if this SoundWave doesn't already have one associated with it.
	if (WaveInstance == NULL)
	{
		if (!ActiveSound.bRadioFilterSelected)
		{
			ActiveSound.ApplyRadioFilter(ParseParams);
		}
		
		WaveInstance = SoundWaveInst->HandleStart(ActiveSound, NodeWaveInstanceHash);
	}

	// Looping sounds are never actually finished
	if (bLooping || ParseParams.bLooping)
	{
		WaveInstance->bIsFinished = false;
//#if !NO_LOGGING
//		if (!ActiveSound.bWarnedAboutOrphanedLooping && !ActiveSound.AudioComponent.IsValid())
//		{
//			UE_LOG(LogAudio, Warning, TEXT("Detected orphaned looping sound '%s'."), *ActiveSound.Sound->GetName());
//			ActiveSound.bWarnedAboutOrphanedLooping = true;
//		}
//#endif
	}

	// Check for finished paths.
	if (!WaveInstance->bIsFinished)
	{
		// Propagate properties and add WaveInstance to outgoing array of FWaveInstances.
		WaveInstance->Volume = ParseParams.Volume * SoundWaveInst->Volume;
		WaveInstance->VolumeMultiplier = ParseParams.VolumeMultiplier;
		WaveInstance->Pitch = ParseParams.Pitch * SoundWaveInst->Pitch;
		// WaveInstance->HighFrequencyGain = ParseParams.HighFrequencyGain; 4.11
		WaveInstance->bApplyRadioFilter = ActiveSound.bApplyRadioFilter;
		WaveInstance->StartTime = ParseParams.StartTime;
		WaveInstance->UserIndex = ActiveSound.UserIndex;
		WaveInstance->OmniRadius = ParseParams.OmniRadius;

		bool bAlwaysPlay = false;

		// Properties from the sound class
		WaveInstance->SoundClass = ParseParams.SoundClass;
		if (ParseParams.SoundClass)
		{
			FSoundClassProperties* SoundClassProperties = AudioDevice->GetSoundClassCurrentProperties(ParseParams.SoundClass);
			// Use values from "parsed/ propagated" sound class properties
			WaveInstance->VolumeMultiplier *= SoundClassProperties->Volume;
			WaveInstance->Pitch *= SoundClassProperties->Pitch;
			//TODO: Add in HighFrequencyGainMultiplier property to sound classes


			WaveInstance->VoiceCenterChannelVolume = SoundClassProperties->VoiceCenterChannelVolume;
			WaveInstance->RadioFilterVolume = SoundClassProperties->RadioFilterVolume * ParseParams.VolumeMultiplier;
			WaveInstance->RadioFilterVolumeThreshold = SoundClassProperties->RadioFilterVolumeThreshold * ParseParams.VolumeMultiplier;
			WaveInstance->StereoBleed = SoundClassProperties->StereoBleed;
			WaveInstance->LFEBleed = SoundClassProperties->LFEBleed;

			WaveInstance->bIsUISound = ActiveSound.bIsUISound || SoundClassProperties->bIsUISound;
			WaveInstance->bIsMusic = ActiveSound.bIsMusic || SoundClassProperties->bIsMusic;
			WaveInstance->bCenterChannelOnly = ActiveSound.bCenterChannelOnly || SoundClassProperties->bCenterChannelOnly;
			WaveInstance->bEQFilterApplied = ActiveSound.bEQFilterApplied || SoundClassProperties->bApplyEffects;
			WaveInstance->bReverb = ActiveSound.bReverb || SoundClassProperties->bReverb;
			WaveInstance->OutputTarget = SoundClassProperties->OutputTarget;

			bAlwaysPlay = ActiveSound.bAlwaysPlay || SoundClassProperties->bAlwaysPlay;
		}
		else
		{
			WaveInstance->VoiceCenterChannelVolume = 0.f;
			WaveInstance->RadioFilterVolume = 0.f;
			WaveInstance->RadioFilterVolumeThreshold = 0.f;
			WaveInstance->StereoBleed = 0.f;
			WaveInstance->LFEBleed = 0.f;
			WaveInstance->bEQFilterApplied = ActiveSound.bEQFilterApplied;
			WaveInstance->bIsUISound = ActiveSound.bIsUISound;
			WaveInstance->bIsMusic = ActiveSound.bIsMusic;
			WaveInstance->bReverb = ActiveSound.bReverb;
			WaveInstance->bCenterChannelOnly = ActiveSound.bCenterChannelOnly;

			bAlwaysPlay = ActiveSound.bAlwaysPlay;
		}

		WaveInstance->Priority = WaveInstance->Volume + (bAlwaysPlay ? 1.0f : 0.0f) + WaveInstance->RadioFilterVolume;
		WaveInstance->Location = ParseParams.Transform.GetTranslation();
		WaveInstance->bIsStarted = true;
		WaveInstance->bAlreadyNotifiedHook = false;
		WaveInstance->bUseSpatialization = ParseParams.bUseSpatialization;
		WaveInstance->SpatializationAlgorithm = ParseParams.SpatializationAlgorithm;
		WaveInstance->WaveData = SoundWaveInst;
		WaveInstance->NotifyBufferFinishedHooks = ParseParams.NotifyBufferFinishedHooks;
		WaveInstance->LoopingMode = LOOP_WithNotification; //((bLooping || ParseParams.bLooping) ? LOOP_WithNotification : LOOP_Never); FORCING THE SHOOT

		if (AudioDevice->IsHRTFEnabledForAll() && ParseParams.SpatializationAlgorithm == SPATIALIZATION_Default)
		{
			WaveInstance->SpatializationAlgorithm = SPATIALIZATION_HRTF;
		}
		else
		{
			WaveInstance->SpatializationAlgorithm = ParseParams.SpatializationAlgorithm;
		}

		// Don't add wave instances that are not going to be played at this point.
		if (WaveInstance->Priority > KINDA_SMALL_NUMBER)
		{
			WaveInstances.Add(WaveInstance);
		}

		// We're still alive.
		ActiveSound.bFinished = false;

		// Sanity check
		if (SoundWaveInst->NumChannels >= 2 && WaveInstance->bUseSpatialization && !WaveInstance->bReportedSpatializationWarning)
		{
			static TSet<USoundWave*> ReportedSounds;
			if (!ReportedSounds.Contains(SoundWaveInst))
			{
				FString SoundWarningInfo = FString::Printf(TEXT("Spatialisation on stereo and multichannel sounds is not supported. SoundWave: %s"), *GetName());
				if (ActiveSound.Sound != SoundWaveInst)
				{
					SoundWarningInfo += FString::Printf(TEXT(" SoundCue: %s"), *ActiveSound.Sound->GetName());
				}

				if (ActiveSound.GetAudioComponent() != NULL)
				{
					// TODO - Audio Threading. This log would have to be a task back to game thread
					AActor* SoundOwner = ActiveSound.GetAudioComponent()->GetOwner();
					UE_LOG(LogAudio, Warning, TEXT("%s Actor: %s AudioComponent: %s"), *SoundWarningInfo, (SoundOwner ? *SoundOwner->GetName() : TEXT("None")), *ActiveSound.GetAudioComponent()->GetName());
				}
				else
				{
					UE_LOG(LogAudio, Warning, TEXT("%s"), *SoundWarningInfo);
				}

				ReportedSounds.Add(SoundWaveInst);
			}
			WaveInstance->bReportedSpatializationWarning = true;
		}
	}
}


I would like to suggest that the audio team implement this in some more accessible (and proper) way. While I don't know all the internal implications, the engine clearly can do this, and it would open new possibilities to us creators. XD

Best Regards.

creasso