Today I am investigating the block of code in AudioComponent::PlayInternal mentioned above where SoundParams are being set.
This block (AudioComponent.cpp line 856) starts by copying DefaultParameters into a local array:
TArray<FAudioParameter> SoundParams = DefaultParameters;
The AudioComponent header file has interesting comments about this:
/** Array of parameters for this AudioComponent. Changes to this array directly will
* not be forwarded to the sound if the component is actively playing, and will be superseded
* by parameters set via the actor interface if set, or the instance parameters.
*/
UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = Parameters, meta = (DisplayAfter = "bDisableParameterUpdatesWhilePlaying"))
TArray<FAudioParameter> DefaultParameters;
/** Array of transient parameters for this AudioComponent instance. Not serialized and can be set by code or BP.
* Changes to this array directly will not be forwarded to the sound if the component is actively playing.
* This should be done via the 'SetParameterX' calls implemented by the ISoundParameterControllerInterface.
* Instance parameter values superseded the parameters set by the actor interface & the components default
* parameters.
*/
UPROPERTY(Transient)
TArray<FAudioParameter> InstanceParameters;
In PlayInternal, DefaultParameters are pulled first, then a merge happens with the Owner actor, then the InstanceParameters are merged. Finally, the sound is sent to the AudioDevice with the fully formed sound parameters.
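Condensed, the whole assembly looks roughly like this. This is a paraphrased sketch, not the verbatim engine source, and the key point is that merge order matters: values merged later supersede earlier ones.

```cpp
// Paraphrased sketch of parameter assembly in UAudioComponent::PlayInternal.
TArray<FAudioParameter> SoundParams = DefaultParameters;            // 1. component defaults

if (AActor* Owner = GetOwner())                                     // 2. actor-provided params
{
	TArray<FAudioParameter> ActorParams;
	UActorSoundParameterInterface::Fill(Owner, ActorParams);
	FAudioParameter::Merge(MoveTemp(ActorParams), SoundParams);
}

// 3. instance params win last (copied, since InstanceParameters persists on the component)
FAudioParameter::Merge(TArray<FAudioParameter>(InstanceParameters), SoundParams);

// ...SoundParams is then handed to the AudioDevice along with the play request.
```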
The thing with the Owner is interesting:
if (AActor* Owner = GetOwner())
{
TArray<FAudioParameter> ActorParams;
UActorSoundParameterInterface::Fill(Owner, ActorParams);
FAudioParameter::Merge(MoveTemp(ActorParams), SoundParams);
}
UActorSoundParameterInterface::Fill takes the Owner and pushes in any audio parameters it provides, if the Actor implements UActorSoundParameterInterface.
The interface is described as /** Interface used to allow an actor to automatically populate any sounds with parameters */
I don’t know if Actors implement this out of the box. Potentially I can implement this interface in my custom actor class, which would force me to add a GetActorSoundParams… or maybe GetActorSoundParams_Implementation. I’d have to check which one.
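If GetActorSoundParams turns out to be a BlueprintNativeEvent (which the _Implementation suffix would suggest), implementing it would look something like the sketch below. The class name is mine, "Wave" as the parameter name is my assumption, and the exact signature would need checking against the engine header:

```cpp
// Hypothetical actor implementing the interface; not verified against the engine.
UCLASS()
class AAudioCuePlayer : public AActor, public IActorSoundParameterInterface
{
	GENERATED_BODY()

public:
	// C++ override point for a BlueprintNativeEvent named GetActorSoundParams.
	virtual void GetActorSoundParams_Implementation(TArray<FAudioParameter>& Params) const override
	{
		// Any sound played from this actor should get this parameter merged in
		// via UActorSoundParameterInterface::Fill in PlayInternal.
		Params.Add(FAudioParameter(TEXT("Wave"), WaveToPlay));
	}

	UPROPERTY(EditAnywhere)
	TObjectPtr<USoundWave> WaveToPlay;
};
```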
Here’s the point, and this is the thing I probably should’ve picked up on earlier: all these merge functions, all this parameter setup, is being done via FAudioParameter.
This is a struct describing a single named parameter on an AudioComponent: a name, a value, and a type drawn from an enum of audio parameter types. This is interesting because I recognize those types as the list of available properties that can be set as inputs in MetaSounds. It makes sense there would be C++ for this.
What is sus is the lack of “SoundWave” anywhere in here.
There is a single mention of SoundWave:
// Object value (types other than SoundWave not supported by legacy SoundCue system)
Object,
Now, I recognize from other parts of the engine code I investigated that AudioComponent->SetWaveParameter is a wrapper around a SetObject call.
So it’s safe to say that at this level of the code, a SoundWave is just an Object in terms of AudioParameters.
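Which would make wrapping a SoundWave trivial. A minimal sketch, assuming FAudioParameter has a name-plus-object constructor and that "Wave" is the MetaSound input name (both assumptions on my part):

```cpp
USoundWave* Wave = /* the wave to inject */;

// A SoundWave is just an Object at this level, so these should be equivalent:
FAudioParameter WaveParam(TEXT("Wave"), Wave);

// ...which is presumably all SetWaveParameter does under the hood.
AudioComponent->SetObjectParameter(TEXT("Wave"), Wave);
```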
This struct isn’t all that interesting, which should generally be true of structs. But it does confirm that if I can get this parameter to set, in theory the MetaSound will pick it up.
Next Steps
The problem I’m trying to solve is: setting parameters for a MetaSound does not work intuitively with PlayQuantized().
The way I see it, I have three candidates for setting the parameter:
- Have my AudioCuePlayer class implement UActorSoundParameterInterface, and see if this sets the value correctly.
- Attempt to wrap my SoundWave in an FAudioParameter and stuff it in the AudioComponent’s DefaultParameters.
- Or do the same and stuff it in InstanceParameters.
- Hell, see if I can do both, or all three.
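The second and third candidates would look something like this sketch. It assumes the parameter arrays are writable from game code (DefaultParameters is BlueprintReadWrite, so that part seems likely) and that "Wave" is the MetaSound input name:

```cpp
// Sketch: inject the wave as a parameter *before* playback is initiated,
// so the merge in PlayInternal picks it up without needing IsPlaying().
FAudioParameter WaveParam(TEXT("Wave"), SoundWaveToPlay);

AudioComponent->DefaultParameters.Add(WaveParam);   // candidate 2
AudioComponent->InstanceParameters.Add(WaveParam);  // candidate 3 (wins on merge)

// Then kick off quantized playback with the same arguments as before.
AudioComponent->PlayQuantized(/* ... existing Quartz args ... */);
```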
My hope is that setting an audio parameter this way avoids the IsPlaying() requirement that AudioComponent->SetParameters has. The theory: PlayQuantized() sets IsPlaying to true on the audio thread’s timing, while SetParameters runs on the game thread’s timing, resulting in the clipped audio I hear when testing.
If I can set these audio parameters before play is initiated, the theory is PlayQuantized() will play the sound and already have the juice it needs. You know I’m here for that juice.
I’ll report back tomorrow to see if this idea works.