I don’t think you’re understanding what’s going on here. The cap exists because the system doesn’t allocate a new buffer on each callback: the 8k-sample max reflects a maximum-size buffer that’s already allocated at the lower level.
It’s also intended for real-time (procedural) generation of audio data. If you want to generate exactly one second of audio data all at once, feel free to do that! But you’ll have to feed it to the procedural audio callback in small chunks, due to the way the whole system is designed. It’s not designed to let you feed one big chunk of data all at once and call it a day.
You’ll need to track the current sample read position against the total number of samples generated and copy only the amount of data the callback wants.
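To make the chunked feeding concrete, here’s a minimal sketch of that bookkeeping in plain C++ (standard library only, engine types omitted). The struct and names like FPregeneratedSource and CopyChunk are made up for illustration; the point is just the read-cursor arithmetic: copy at most the requested number of samples from your pre-generated buffer, never past the end.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical sketch: a pre-generated buffer plus a read cursor.
// Each callback asks for some number of samples; we hand back only
// what remains, so nothing is sent twice and nothing past the end.
struct FPregeneratedSource
{
    std::vector<int16_t> Samples;  // e.g. your 1 second of audio, generated up front
    int32_t ReadPos = 0;           // current sample read position

    // Copies up to SamplesRequested samples into OutChunk.
    // Returns the number actually copied (0 once the buffer is exhausted).
    int32_t CopyChunk(std::vector<int16_t>& OutChunk, int32_t SamplesRequested)
    {
        const int32_t SamplesRemaining = (int32_t)Samples.size() - ReadPos;
        const int32_t SamplesToCopy = std::min(SamplesRequested, SamplesRemaining);
        OutChunk.assign(Samples.begin() + ReadPos,
                        Samples.begin() + ReadPos + SamplesToCopy);
        ReadPos += SamplesToCopy;
        return SamplesToCopy;
    }
};
```

In the real callback you’d then pass the chunk to QueueAudio as bytes; once CopyChunk returns 0 you know the whole buffer has been fed out.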
I don’t understand what you’re trying to do. You don’t call the callback, the playing sound wave does.
Yeah, I ran into this issue too.
I’m going to post some code, since I’m 99% sure that’s an okay usage here: it’s not technically engine code and doesn’t live anywhere in particular. I’ll write it in the code blocks below.
So my constructor looks like this:
UMySoundWaveProcedural::UMySoundWaveProcedural(const FObjectInitializer& ObjectInitializer)
	: Super(ObjectInitializer)
{
	// Set protected variables in the base class that need to be set for procedural sounds
	NumChannels = 1;
	bCanProcessAsync = true;
	Duration = INDEFINITELY_LOOPING_DURATION;
	bLooping = false;
	SampleRate = 44100;

	// Initialize the oscillator to 440.0f (use whatever init API your oscillator class has)
	SineOsc.Init(SampleRate, 440.0f);

	// Bind the GenerateData callback with an FOnSoundWaveProceduralUnderflow object
	OnSoundWaveProceduralUnderflow = FOnSoundWaveProceduralUnderflow::CreateUObject(this, &UMySoundWaveProcedural::GenerateData);
}
Note that I’m indicating the number of channels, that it can run async, and that it has an infinite duration. I actually haven’t yet played with setting the Duration to anything other than infinite, so come to think of it, it might work to set that to 1.0 second if you want your procedural sound to last only one second. I’ll have to play with that.
Note too the sinusoidal oscillator and hooking up the callback/delegate. My class is actually NOT overriding anything in the base class (USoundWaveProcedural) including Parse or anything else. No need to.
In my GenerateData function (which was hooked up to the delegate callback) I have:
void UMySoundWaveProcedural::GenerateData(USoundWaveProcedural* InProceduralWave, int32 SamplesRequested)
{
	// Figure out how many samples are already queued and how many more we need to generate
	const int32 QueuedSamples = GetAvailableAudioByteCount() / sizeof(int16);
	const int32 SamplesNeeded = SamplesRequested - QueuedSamples;

	// Generate the needed samples from the sine oscillator as 16-bit PCM
	TArray<int16> SampleData;
	SampleData.AddUninitialized(SamplesNeeded);
	for (int32 i = 0; i < SamplesNeeded; ++i)
	{
		const float SampleValueFloat = SineOsc.NextSample();
		SampleData[i] = (int16)(32767.0f * SampleValueFloat);
	}

	// Now queue the generated PCM data as raw bytes
	InProceduralWave->QueueAudio((uint8*)SampleData.GetData(), SampleData.Num() * sizeof(int16));
}
This code could be optimized by removing the SampleData TArray and writing the output directly to USoundWaveProcedural::QueueAudio, but I wrote it this way to keep it simple and to keep the float sample data, the 16-bit PCM buffer, and the raw byte output separate. The final step is a bit annoying in that you have to QueueAudio as raw byte data. I’d MUCH prefer to let you queue float data and handle the conversion to 16-bit PCM internally; it’s on my backlog to make all of this much easier. Procedural sound waves were originally made to support feeding raw PCM audio data directly to voices (for VOIP, audio codecs from video playback, etc.), and most low-level audio systems deal with raw PCM data rather than float buffers (which are obviously preferable for DSP).
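For reference, here’s a sketch (plain C++, engine types omitted) of the float-to-PCM conversion I wish QueueAudio did for you internally. The function name FloatToPcm16 is made up; the clamp-then-scale step is the important part, since float DSP output can overshoot [-1, 1]:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical helper: convert float samples in [-1, 1] to 16-bit PCM,
// clamping first so out-of-range DSP output doesn't wrap around.
std::vector<int16_t> FloatToPcm16(const std::vector<float>& In)
{
    std::vector<int16_t> Out;
    Out.reserve(In.size());
    for (float Sample : In)
    {
        const float Clamped = Sample < -1.0f ? -1.0f : (Sample > 1.0f ? 1.0f : Sample);
        Out.push_back((int16_t)(32767.0f * Clamped));
    }
    return Out;
}
```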
Assuming you know how to write a decent sinusoidal audio generator, this is literally all you really need to get a sine-tone working in a procedural sound wave.
The best way to actually HEAR the thing, in my opinion, is to create a wrapping component that wraps your procedural sound wave and lets you interface with it. This is similar to how an AudioComponent wraps a USoundBase* type; in this case your component wraps your specific procedural sound wave type. Once you have a component created, you just need to make a nice BP API to interact with it.
Here’s the simple component that wraps the procedural sound wave:
UCLASS(ClassGroup = SoundWaveProcedural, meta = (BlueprintSpawnableComponent))
class MYPROJECT_API UMySoundWaveProceduralComponent : public UActorComponent
{
	GENERATED_BODY()

public:
	/** Sets the sine wave to the given frequency in hertz and plays it */
	UFUNCTION(BlueprintCallable, Category = "Components|Audio", meta = (Keywords = "ProceduralSound"))
	void PlaySineWaveFrequency(float InFrequency = 440.0f);

	/** Attenuation settings to use for the procedural sound wave */
	UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = Attenuation)
	class USoundAttenuation* AttenuationSettings;

	/** The procedural sound wave to use */
	UPROPERTY(Instanced)
	class UMySoundWaveProcedural* MySoundWaveProcedural;
};
Note that you can add other audio objects you might want to use with your component. Every audio engine feature should work with procedural sound waves (e.g. AttenuationSettings, Concurrency, etc.). Also, one persnickety detail here is the use of UPROPERTY(Instanced) on the UMySoundWaveProcedural member variable. That’s important for the UE4 property serializer and how it treats the object in the context of the rest of the data. You’ll get mysterious errors if you don’t use it.
Here’s my PlaySineWaveFrequency function which actually creates a new instance of the procedural sound wave and then plays it using the normal BP static function for playing a sound.
void UMySoundWaveProceduralComponent::PlaySineWaveFrequency(float InFrequency)
{
	// Get the component's owner
	AActor* Owner = GetOwner();

	// Create the procedural sound wave, using the owner's level as the outer
	MySoundWaveProcedural = NewObject<UMySoundWaveProcedural>(Owner->GetLevel(), TEXT("SoundWaveProcedural"));

	// Set the sinusoidal oscillator frequency (SetFrequency here stands in for
	// whatever setter your sound wave class exposes)
	MySoundWaveProcedural->SetFrequency(InFrequency);

	// Play it as a one-shot at the owner's location
	UGameplayStatics::PlaySoundAtLocation(GetWorld(), MySoundWaveProcedural, Owner->GetActorLocation(), 0.25f, 1.f, 0.0f, AttenuationSettings);
}
Note that I am getting the component’s owner, then getting the owner’s level and using that as the outer for the NewObject function. There’s some funny business with “outer” package ownership semantics and if you don’t get it right, there will be issues with the garbage collector. This is what I found to be the best/easiest to use though there are probably other ways to deal with it.
Also note that I am using the PlaySoundAtLocation API call which is basically a “one-shot” type call so that once it plays in space, you’re not going to be able to move it around. If you want to attach the procedural sound to something or control its 3d position in the audio engine you should use SpawnSound* flavored functions, which generate an audio component. You should be able to use the audio component like you would for any normal use-case, e.g. change volume, pitch, stop, start, etc.
Ok, so as for your other questions, like terminating the sound: procedural sound waves are obviously more complex than normal sound waves, since the lower-level audio engine can’t know the duration. You’ll have to manage it yourself. For example, if your procedural sound wave is only supposed to last 1 second, you’ll have to keep track of how much audio you’ve generated (count samples and divide by the sample rate!) and stop feeding out audio once you’re done. Then put the thing into a state that lets you stop the sound from outside the procedural sound wave. For example, add an API call to your component that queries your sound wave to see if it’s done (e.g. IsDone()), and another function that stops procedural sound components which are done.
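That sample-counting state machine can be sketched in plain C++ (engine types omitted; the struct and method names are made up for illustration). ClampRequest limits each callback’s request to whatever the target duration still allows, and IsDone() is the query your component would poll:

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical sketch: track generated samples against a target duration.
// Duration in seconds = samples generated / sample rate.
struct FTimedToneSource
{
    int32_t SampleRate = 44100;
    float TargetSeconds = 1.0f;
    int64_t SamplesGenerated = 0;

    // Returns how many of the requested samples we are still allowed to
    // generate, and advances the running sample count by that amount.
    int32_t ClampRequest(int32_t SamplesRequested)
    {
        const int64_t TargetSamples = (int64_t)(TargetSeconds * SampleRate);
        const int64_t Remaining = std::max<int64_t>(TargetSamples - SamplesGenerated, 0);
        const int32_t ToGenerate = (int32_t)std::min<int64_t>(SamplesRequested, Remaining);
        SamplesGenerated += ToGenerate;
        return ToGenerate;
    }

    // True once the full target duration has been generated
    bool IsDone() const
    {
        return SamplesGenerated >= (int64_t)(TargetSeconds * SampleRate);
    }
};
```

Your component’s tick (or a timer) would then stop the sound once IsDone() reports true.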
When I get a chance, I’d like to implement ways to make procedural sound waves easier to use, so this was a productive thread. Thanks.
Let me know what cool stuff you come up with! The possibilities are endless…