Stop using private variables in virtual functions please!

This should be self-explanatory, but I’ve noticed 3-4 times in the audio source code that some key variable is marked as private! Whoever owns audio, please stop doing this. Use the protected keyword instead for any variables used by virtual functions in classes. If you try inheriting from the SoundNode or SoundWaveProcedural classes, you’ll find this is an issue in both 4.12.5 and the 4.13 preview.

As a note, this should be standard policy. For example (4.13 preview 3), if I want to reuse some code from the parent class’s implementation of GeneratePCMData in my inheriting class, I cannot call PumpAudio() or access other member variables used in the parent class’s implementation. I ran into this while trying to learn how to generate my own audio procedurally.


This! I’ve been working on a fix for VoiceChat in 4.12.5 and noticed this as well.


If you run into these issues though, I’d submit a pull request. At least that way specific instances you may be using might make it in faster. Might.

Hey there. Audio programmer here.

If you notice, there are specifically tagged access areas in the base class USoundWaveProcedural. For 4.13, I intentionally chose to make the data that is accessed on the audio device thread private. In 4.12, a number of people were using that data in an unsafe way and generating crashes. The reason this needed to change is that 4.12 changed the way audio buffers are generated on PC (both for procedural waves and for real-time async decoding sound waves). Prior to 4.12, all generating code was called on the game thread. This caused significant buffer underruns (stuttering) any time the game thread was blocked or halted for any reason; it was particularly problematic during load screens. I switched to using XAudio2 callbacks (e.g. OnBufferEnd) to let the voices themselves notify when they need more audio (on the hardware thread), rather than depending on a game-thread-dependent poll/pump. This allows audio to play without buffer underruns, but it does add complexity to the implementation, as audio buffer data needs to be prepared in a thread-safe way.

For maximum simplicity, I recommend implementing procedural sound waves without overriding any virtual functions. I’d actually prefer to make all of its overrides non-virtual (i.e. remove the word virtual from any functions in USoundWaveProcedural). Instead, all your derived class needs to do is register a single delegate function.

For example, here’s what I used in some code I implemented recently (for our internal QA team to test procedural sound waves):

OnSoundWaveProceduralUnderflow = FOnSoundWaveProceduralUnderflow::CreateUObject(this, &UQASoundWaveProcedural::GenerateData);

The signature of my GenerateData delegate is:

void UQASoundWaveProcedural::GenerateData(USoundWaveProcedural* InProceduralWave, int32 SamplesRequested)

Then you put your generating code in that function. This function is still called on the audio device thread (on PC), so you’ll need to take care that any data or events you want from the game thread are safely transferred to the audio device thread. Thread safety is currently only an issue on PC, but I’ve based the upcoming audio mixer code (the new multi-platform backend) on this paradigm, so it’s a good idea to make sure your code is thread safe no matter which platform you’re targeting.

I’m planning on writing up a tutorial/blog series on real-time synthesis, filtering, and DSP for UE4 audio… as soon as the multi-platform backend is out and available for people to use.

EDIT: After some feedback from Epic legal, I removed the posted source since these forums are public and I can’t post long-form code. Writing only snippets would probably cause more confusion. So here’s a high-level implementation guide:

In general, a good paradigm for implementing procedural sound waves is to first create a component container (inheriting from UActorComponent etc.) that holds a handle to the instance of the procedural sound wave. Then, when you create the instance, you’ll be able to pass data to it from BP by writing a BP API for your component. You’ll also be able to use any of the static gameplay functions we normally use for USoundBase* types (PlaySound/SpawnSound etc.) as well as any other related audio types (Concurrency, Attenuation, sound classes, etc.). Basically you can use your procedural sound wave like any other USoundBase type. The only thing you have to be careful of is to ensure that any parameters passed to your USoundWaveProcedural* type are passed in a thread-safe way (use critical sections or a thread-safe queue to pass data to your code).
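The “critical section or thread-safe queue” advice can be sketched in plain C++ (the names here are hypothetical and this is standard-library code, not engine code; the UE4 equivalents would be FCriticalSection/FScopeLock or TQueue): the game thread pushes parameter values, and the audio generation callback drains them before producing samples.

```cpp
#include <cassert>
#include <mutex>
#include <queue>
#include <vector>

// Hypothetical illustration: a minimal thread-safe parameter queue.
// The game thread calls Push(); the audio callback calls DrainAll().
class FParamQueue
{
public:
	// Called on the game thread
	void Push(float Value)
	{
		std::lock_guard<std::mutex> Lock(Mutex);
		Queue.push(Value);
	}

	// Called on the audio device thread; returns all pending values
	std::vector<float> DrainAll()
	{
		std::lock_guard<std::mutex> Lock(Mutex);
		std::vector<float> Out;
		while (!Queue.empty())
		{
			Out.push_back(Queue.front());
			Queue.pop();
		}
		return Out;
	}

private:
	std::mutex Mutex;
	std::queue<float> Queue;
};
```

The key point is that the audio thread never touches game-thread-owned memory directly; it only sees copies handed over under the lock.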


I realized there’s another bit of complexity I didn’t explain, which also resulted from the switch to calling the GeneratePCMData function from the audio hardware thread. Since the callback is generated from an XAudio2 OnBufferEnd callback, if no buffer is submitted to the XAudio2 voice, no more OnBufferEnd callbacks will be made, since the voice no longer has any enqueued buffers. This means the procedural sound wave will just mysteriously fall silent. Therefore, the old paradigm of not returning audio if none is available won’t work. That is what many overrides of GeneratePCMData in certain use-cases were doing, in particular VOIP implementations or other things that depended on systems that may or may not have audio to generate. My base-class implementation of USoundWaveProcedural attempts to handle that case and will always return audio buffers even if no audio has been queued.

It also attempts to wait until a certain amount of audio has been queued (to “build up” audio buffers) before starting. This is to support streaming systems or VOIP streams that may not have enough audio ready at first. The amount to wait before feeding audio out is configurable via NumSamplesToGeneratePerCallback. This also determines the general cost of the procedural sound wave: a larger NumSamplesToGeneratePerCallback will reduce CPU cost but increase latency (for real-time synthesis that gets param data from the game thread, this means your synthesizer will respond more slowly to parameters, etc.). Also, the larger the NumSamplesToGeneratePerCallback, the fewer OnSoundWaveProceduralUnderflow delegate callbacks will be made per GeneratePCMData callback.
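As a rough rule of thumb, the latency contributed by the callback size is just the buffer duration: NumSamplesToGeneratePerCallback divided by the sample rate. A tiny helper (illustrative only, not engine code) makes the tradeoff concrete:

```cpp
#include <cassert>

// Back-of-the-envelope estimate: each callback's worth of audio spans
// NumSamplesPerCallback sample frames, so parameter changes coming from
// the game thread take roughly this long to be heard.
double CallbackLatencyMs(int NumSamplesPerCallback, int SampleRate)
{
	return 1000.0 * NumSamplesPerCallback / SampleRate;
}
```

At 44.1 kHz, 1024 samples is roughly 23 ms per callback, while 8192 samples is closer to 186 ms, which is why larger callback sizes make a synthesizer feel sluggish to parameter changes.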

The amount of silent audio to write out in the case of a real buffer underrun is also configurable, with NumBufferUnderrunSamples. This decouples the amount of silent audio written out from the number of samples we normally generate per callback (i.e. you may want to reduce the size of the buffer-underrun buffer so that the XAudio2 voice performs an OnBufferEnd callback sooner for silent buffers than for audio-filled buffers).

Another thing I didn’t mention is that 4.13 had a major threading refactor in general. The entire audio engine can now run on a separate thread (a named “audio thread”). Note that this is a different thread than the one I mentioned above (which I call the “audio hardware thread”, or “device” thread). The “audio thread” runs all the code that parses sound cues, sound waves, active sounds, concurrency, etc. All that logic can get a bit heavy and in some cases cause slow-downs on the game thread (especially for large numbers of objects). The “audio hardware thread” is similar to the RHI (Rendering Hardware Interface) thread in the rendering engine, and the “audio thread” is similar to the “render thread”.

What this means is that overriding sound nodes and messing with sound node data directly is also potentially not thread safe. In cases where data was previously public or protected, we’ve been privatizing that data and providing thread-safe accessors that pass data using the task manager system. Since this is a huge change, we decided to ship 4.13 with the audio thread off by default (it is toggleable via the UseAudioThread value in BaseEngine.ini) to give it a release cycle where people can choose to opt in and adapt their code to the new paradigm. We’ll be turning it on by default in 4.14. Note also that the Editor will always run the audio engine in single-threaded mode, since UObject data is writable in the Editor and making that thread safe would’ve required a much larger refactor of the audio engine (i.e. removing all UObject references and writing proxy objects for everything!). Therefore, any unsafe code may run fine in Editor mode but cause issues at runtime.

Thanks for the amazing response! Sad that you got told on by someone in legal, haha. I would love to see those tutorials! The sooner, the better. I’m trying to work on audio for my project, and the threading changes you mentioned are too awesome not to start playing with! I’m still very early in my UE4 knowledge and trying to wrap my head around what you’re suggesting, so I’ll explain my understanding and hopefully you can correct me if I’m wrong.

First off, the technical understanding. If I don’t understand this, I’m not getting anywhere.

  1. You want us to create our own GenerateAudio() function in our child class, as the most convenient approach.
  2. You want us to add data via QueueAudio(const uint8* AudioData, const int32 BufferSize).

So, here’s a sinusoidal GenerateAudioData function:

I’m supposing that this is acceptable. If I have a raw file format, I’m supposing I’ll just have to use GMalloc to allocate the appropriate data, load it directly into memory, play with the data there, and then send a pointer to the start of the array.

I’m just going to register the function like so:

Provides me with: error C2665: 'TBaseDelegate<TTypeWrapper<void>,USoundWaveProcedural *,int32>::CreateUObject': none of the 2 overloads could convert all the argument types

Hm. I had my hopes up there. Thanks for your response so far; I’ve made a lot of progress.

Edit: made loop faster, fixed GenerateAudioData
Editx2: I think I made the code acceptable now. :stuck_out_tongue:

New post because it was 2+ hours since I wrote the above.

Oh, dang. I’m dumb. I removed the “USoundWaveProcedural* InProceduralWave” from the function prototype because I thought it was dumb to include a pointer to myself in a function…

Now that I’ve got this to build, I’m going to look up thread-safe code. (Oi! About your second post… I don’t really care to do much thread-debugging.) I don’t want to constantly do a *.Add() on every generation call and would prefer my SoundWaveProcedural to have its own static memory I can modify directly. If you have any examples of managing threaded code with memory in the engine, I’d love for you to point them out to me, as I don’t want to constantly reallocate memory on each call to GenerateAudioData. I’ll do more research when I can, as the less I have to debug threading issues, the better.


I’ll have to play with everything and do testing to make sure that I have acceptable latencies. If I can generate 250ms of sound in roughly 50ms intervals reliably, it should be acceptable. I’ll be sticking around here and hope to see you around- my project is basically 100% audio based, and process-intensive, so I look forward to stretching the new Audio system’s legs.

I’m certain I’ll be asking a question about plugging all this together in the next day or two, haha. A SoundNode is pretty simple- I’ve just added a pointer to my new USoundWaveProcedural.

Being here now is awesome, I can almost taste the procedural sounds! I have an Actor with a SoundSource Component set to my custom SoundNode, which now has a member variable pointing to my SoundWaveProcedural, wherein on ParseNodes I just call “SoundWaveProcedural->Parse(AudioDevice, NodeWaveInstanceHash, ActiveSound, ParseParams, WaveInstances);”

But it crashes on Parse inside my overridden ParseNodes.

1. How do I plug this all together safely? If I could define an actual duration and then stop the loop that would be great.
2. How do I write this threadsafe and fast? I don’t want to dynamically allocate and create an array every time GenerateAudioData is called.

Hey Minus_Kelvin, thanks for the explanation.

I’m preparing to migrate our code to 4.13 and I’m taken aback by the changes. My old code is very short and simple:

FAudioDevice* AudioDevice = GEngine->GetMainAudioDevice();
for (auto i = AudioDevice->SoundClasses.CreateIterator(); i; ++i)
{
	USoundClass* SoundClass = i.Key();
	SoundClass->Properties.Volume = MasterVolume;
}

Basically just a master volume setting, hardly the stuff of nightmares.

Now I’m wondering how I can achieve the same feature in 4.13, with “SoundClasses” now private?

Yes, moving the audio engine to a separate thread has consequences. And, by the way, that code you wrote is precisely the type of code we’re trying to break people of the habit of writing. Directly writing into what’s supposed to be read-only memory is a big no-no in general. Not only is it not thread safe, it’s dangerous for other reasons. In general, all interaction with audio engine internals is now hidden. Most of the appropriate entry points wrap calls to the async task manager, which will copy the data and execute the work safely on the audio thread.

In this case, what you’re trying to do is supported as a first-class feature as of 4.12 (available to all sound designers in BP). The way you’re supposed to dynamically modify sound class volumes is to use the mix system. Check out the UE4 4.12 release notes:

Search that page for “Dynamic Sound class adjustment overrides for sound mixes”.

So basically, the TLDR of the feature is that you can now override any sound class adjustment value in a sound mix from BP. In the case of setting a master volume, you’d just push a new sound mix modifier for your master volume adjustment, then call SetSoundMixClassOverride with the master volume adjustment mix, the sound classes you want to affect, and the volume you want to set. This plays nicely with the rest of the mix system. The system is also powerful enough to adjust pitch and lerp over a given fade time. With this tool you can not only connect your mix system to user-pref panels or a master volume slider, you can also hook up dynamic mix scenes that are intimately tied to gameplay (dynamic cross-fading of sounds based on vehicle throttle, health level, etc.).

Thanks! I’ll work on implementing that.

So I tried, and while the static Sound Mix asset does work (I can edit the asset during gameplay and the volume is updated), I can’t get the override working from code.

FAudioDevice* AudioDevice = GEngine->GetMainAudioDevice();

AudioDevice->SetSoundMixClassOverride(MasterSoundMix, MasterSoundClass, MasterVolume, 1.0f, 0.5f, true);

I checked that:

  • my Sound Class is used in every sound in the game (it’s actually just from /Engine/EngineSounds/Master )
  • my Sound Mix asset references my Sound Class
  • like I said, updating values of the sound mix in the asset does change the volume.

Doesn’t seem that hard so I must be doing something wrong :confused:

Yo Minus_Kelvin, I solved the error with ParseNodes. I’m just not going to use SoundNodes in the near future with your suggestion. An official example of USoundWaveProcedural being implemented would be awesome, so I look forward to the tutorials you mentioned above.

My new question is about your mention of controlling the duration of the sound. AFAICT my only real option is to track the number of bytes generated by my SoundWaveProcedural class? (I would like a mid-play pause option or interrupt for some audio streams.)

Edit: Or to delete the component and respawn it, but that sounds silly to me. I’d much rather not do that.

Editx2: Okay, so I’ve got it all set up, and it’s not crashing any more, but I’ve still got some issues. It’s failing on line 486 in XAudio2Source.cpp: XAudio2Buffer = FXAudio2SoundBuffer::Init(InWaveInstance->ActiveSound->AudioDevice, InWaveInstance->WaveData, InWaveInstance->StartTime > 0.f);

Returns null.

Editx3: Found the error. Do not forget to set the number of channels in your USoundWaveProcedural. Now the real fun begins of playing with the actual audio streams.

Hey man, I’ve got some feedback here on NumSamplesToGeneratePerCallback. If I have a sample rate of 44100 and I want to generate 1 second of data, I’ll have to queue audio in chunks the size of NumSamplesToGeneratePerCallback. If I want to set NumSamplesToGeneratePerCallback to 44100, I can’t. There seems to be a hard limit around ~8200 samples at the highest I can go. Is this intentional? Because at 8000 samples or so, that’s not enough to hear if I call less than once every 1/5 second.

I’m doing some more testing right now, but it’s pretty much confirmed that there is a limit on the NumSamplesToGeneratePerCallback I can set.

This part works fine, but… how do I get the volume of a sound class modified this way? I can’t find a function like “Get Sound Mix Class Override” in 4.12 blueprints. Or am I missing something? :wink:
I wanted to use it to display the current sound class volume in an Audio Settings panel :slight_smile:

I don’t think you’re understanding what’s going on here… The cap exists because the system is not actually allocating a new buffer each callback; the ~8k sample max is there because there’s a max buffer size already allocated at the lower level.

It’s also intended for real-time (procedural) generation of audio data. If you want to generate a precise 1 second of audio data all at once, feel free to do that! But you’ll have to feed it to the procedural audio callback in small chunks due to the way the whole system is designed. It’s not designed to let you feed one big chunk of data all at once and call it a day.

You’ll need to track the current sample read position against the total number of samples generated and copy only the amount of data the callback wants.
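That read-position bookkeeping might look like the following standard-C++ sketch (hypothetical names, not engine code): pregenerate everything up front, then hand out at most the requested number of samples per callback.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch: feed a fully pregenerated buffer to a procedural callback in
// small chunks by tracking a read position, copying only as many samples
// as the callback asks for.
struct FPregeneratedFeeder
{
	std::vector<int16_t> Samples; // e.g. one second generated up front
	size_t ReadPos = 0;

	// Returns the next chunk of at most SamplesRequested samples
	std::vector<int16_t> NextChunk(size_t SamplesRequested)
	{
		const size_t Remaining = Samples.size() - ReadPos;
		const size_t Count = std::min(SamplesRequested, Remaining);
		std::vector<int16_t> Chunk(Samples.begin() + ReadPos,
		                           Samples.begin() + ReadPos + Count);
		ReadPos += Count;
		return Chunk;
	}

	bool IsDone() const { return ReadPos >= Samples.size(); }
};
```

In the real delegate you would pass each chunk to QueueAudio rather than returning it, but the position-tracking logic is the same.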

I don’t understand what you’re trying to do. You don’t call the callback, the playing sound wave does.

Yeah, I ran into this issue too.

I’m going to post some code, since I’m 99% sure this is an OK usage: it’s not technically engine code and doesn’t live anywhere. I’m writing it here in the code block.

So my constructor looks like this:

UMySoundWaveProcedural::UMySoundWaveProcedural(const FObjectInitializer& ObjectInitializer)
	: Super(ObjectInitializer)
{
	// Set protected variables in base class that need to be set for procedural sounds
	NumChannels = 1;
	bCanProcessAsync = true;
	bLooping = false;
	SampleRate = 44100;

	// Initialize the oscillator to 440.0f
	SineOsc.SetFrequency(440.0f);

	// Bind the GenerateData callback with FOnSoundWaveProceduralUnderflow object
	OnSoundWaveProceduralUnderflow = FOnSoundWaveProceduralUnderflow::CreateUObject(this, &UMySoundWaveProcedural::GenerateData);
}

Note that I’m indicating the number of channels, that it can run async, and that it has an infinite duration. I actually haven’t yet played with setting the Duration to anything other than infinite, so come to think of it, it might work to set that to 1.0 second if you want your procedural sound to only last one second. I’ll have to play with that.

Note too the sinusoidal oscillator and hooking up the callback/delegate. My class is actually NOT overriding anything in the base class (USoundWaveProcedural) including Parse or anything else. No need to.

In my GenerateData function (which was hooked up to the delegate callback) I have:

void UMySoundWaveProcedural::GenerateData(USoundWaveProcedural* InProceduralWave, int32 SamplesRequested)
{
	const int32 QueuedSamples = GetAvailableAudioByteCount() / sizeof(int16);
	const int32 SamplesNeeded = SamplesRequested - QueuedSamples;

	SampleData.Reset();
	for (int32 i = 0; i < SamplesNeeded; ++i)
	{
		float SampleValueFloat = SineOsc.NextSample();
		int16 SampleValue = (int16)(32767.0f * SampleValueFloat);
		SampleData.Add(SampleValue);
	}

	// Now call the audio queue to queue up the generated data
	InProceduralWave->QueueAudio((uint8*)SampleData.GetData(), SampleData.Num() * sizeof(int16));
}


This code could be optimized by removing the SampleData TArray and writing the output directly to the USoundWaveProcedural::QueueAudio function, but I wrote it that way to keep it simple and to separate out the float sample data, the 16-bit PCM audio buffer, and the raw byte output buffer. The final step is a bit annoying in that you have to QueueAudio as raw byte data. I’d MUCH prefer to let you queue float data and handle the conversion to 16-bit PCM internally. It is on my backlog to make all of this much easier. Procedural sound waves were originally made to support feeding raw PCM audio data directly to voices (for VOIP, or audio codecs from video playback, etc.), and most low-level audio systems deal with raw PCM data rather than float buffers (which are obviously preferable for DSP).
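The float-to-16-bit conversion described above is small but easy to get wrong at the edges. A standalone sketch (FloatToPcm16 is a hypothetical helper, not an engine function) with clamping, in case a DSP stage ever produces samples outside [-1, 1]:

```cpp
#include <cassert>
#include <cstdint>

// Convert a float sample in [-1.0, 1.0] to signed 16-bit PCM,
// clamping out-of-range inputs instead of letting them wrap.
int16_t FloatToPcm16(float Sample)
{
	if (Sample > 1.0f) Sample = 1.0f;
	if (Sample < -1.0f) Sample = -1.0f;
	return (int16_t)(Sample * 32767.0f);
}
```

Without the clamp, a sample of 1.1f would overflow the int16 cast and produce a loud click rather than mild distortion.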

Assuming you know how to write a decent sinusoidal audio generator, this is literally all you really need to get a sine-tone working in a procedural sound wave.

The best way to actually HEAR the thing, in my opinion, is to create a wrapping component that wraps your procedural sound wave and allows you to interface with it. This is pretty much similar to how an “AudioComponent” wraps a USoundBase* type. In this case your component will wrap your specific procedural sound wave type. Then once you have a component created, you just need to make a nice BP api to interact with it.

Here’s the simple component that wraps the procedural sound wave:

UCLASS(ClassGroup = SoundWaveProcedural, meta = (BlueprintSpawnableComponent))
class MYPROJECT_API UMySoundWaveProceduralComponent : public UActorComponent
{
	GENERATED_BODY()

public:
	/** Sets and then plays the sine wave frequency to the given frequency in hertz */
	UFUNCTION(BlueprintCallable, Category = "Components|Audio", meta = (Keywords = "ProceduralSound"))
	void PlaySineWaveFrequency(float InFrequency = 440.0f);

	/** Attenuation settings to use for the procedural sound wave */
	UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = Attenuation)
	class USoundAttenuation* AttenuationSettings;

	/* The procedural sound wave to use */
	UPROPERTY(Instanced)
	UMySoundWaveProcedural* MySoundWaveProcedural;
};

Note you can add other audio objects you might want to use with your component. Every audio engine feature should work with procedural sound waves (e.g. AttenuationSettings, Concurrency, etc.). Also persnickety here is the use of UPROPERTY(Instanced) on the UMySoundWaveProcedural member variable. That’s important for the UE4 property serializer and how it treats the object in the context of the rest of the data. You’ll get mysterious errors if you don’t use it.

Here’s my PlaySineWaveFrequency function which actually creates a new instance of the procedural sound wave and then plays it using the normal BP static function for playing a sound.

void UMySoundWaveProceduralComponent::PlaySineWaveFrequency(float InFrequency)
{
	// Get component's owner
	AActor* Owner = GetOwner();

	// Create the procedural sound wave
	MySoundWaveProcedural = NewObject<UMySoundWaveProcedural>(Owner->GetLevel(), TEXT("SoundWaveProcedural"));

	// Set the sinusoidal oscillator frequency
	MySoundWaveProcedural->SineOsc.SetFrequency(InFrequency);

	UGameplayStatics::PlaySoundAtLocation(GetWorld(), MySoundWaveProcedural, Owner->GetActorLocation(), 0.25f, 1.f, 0.0f, AttenuationSettings);
}

Note that I am getting the component’s owner, then getting the owner’s level and using that as the outer for the NewObject function. There’s some funny business with “outer” package ownership semantics and if you don’t get it right, there will be issues with the garbage collector. This is what I found to be the best/easiest to use though there are probably other ways to deal with it.

Also note that I am using the PlaySoundAtLocation API call which is basically a “one-shot” type call so that once it plays in space, you’re not going to be able to move it around. If you want to attach the procedural sound to something or control its 3d position in the audio engine you should use SpawnSound* flavored functions, which generate an audio component. You should be able to use the audio component like you would for any normal use-case, e.g. change volume, pitch, stop, start, etc.

Ok, so as for your other questions, like terminating the sound: procedural sound waves are obviously more complex than normal sound waves, since the lower-level audio engine can’t know the duration. You’ll have to manage it yourself. For example, if your procedural sound wave is only supposed to last 1 second, you’re going to have to keep track of how much audio you’ve generated (count samples and divide by sample rate!) and stop feeding out audio when you’re done. Then put the thing into a state that allows you to stop the sound outside of the procedural sound wave. For example, add an API call to your component that queries your sound wave for whether it’s done (e.g. IsDone()). Then have another function that stops procedural sound components which are done.
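The count-samples-and-divide bookkeeping could be factored into a small helper like this standard-C++ sketch (FDurationTracker and its methods are hypothetical names, not engine code):

```cpp
#include <cassert>

// Track how many samples have been generated against a target duration,
// clamping each generation request so the sound stops at the right length.
class FDurationTracker
{
public:
	FDurationTracker(float InDurationSeconds, int InSampleRate)
		: TotalSamples((long long)(InDurationSeconds * InSampleRate))
	{
	}

	// Returns how many of the requested samples should actually be generated
	long long ClampRequest(long long SamplesRequested)
	{
		long long Remaining = TotalSamples - SamplesGenerated;
		long long ToGenerate = SamplesRequested < Remaining ? SamplesRequested : Remaining;
		if (ToGenerate < 0)
		{
			ToGenerate = 0;
		}
		SamplesGenerated += ToGenerate;
		return ToGenerate;
	}

	// Query this from the component to decide when to stop the sound
	bool IsDone() const { return SamplesGenerated >= TotalSamples; }

private:
	long long TotalSamples;
	long long SamplesGenerated = 0;
};
```

Your generation delegate would call ClampRequest with the callback’s sample count and only synthesize that many samples, while the component polls IsDone() to stop playback.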

When I get a chance, I’d like to implement ways to make procedural sound waves easier to use, so this was a productive thread. Thanks.

Let me know what cool stuff you come up with! The possibilities are endless…

How are you testing this? The code you used to get the audio device is not going to work if you are testing in PIE. You need to get the audio device associated with the PIE world you’re using. The “main audio device” is actually the first audio device loaded in the editor; each PIE session creates a new audio device.

To get the proper audio device, you need to get the proper world. Check out UGameplayStatics::PlaySound2D, etc.

UWorld* ThisWorld = GEngine->GetWorldFromContextObject(WorldContextObject);
if (FAudioDevice* AudioDevice = ThisWorld->GetAudioDevice())

Note that it is possible for a world not to have an audio device (running server code, -nosound, etc) so you should probably check if it exists before calling functions on it.

Unfortunately, this is not set up. I’d just store the value you’re setting it to (from the slider or loaded from preferences ini file or something) and then display that.

Thank you for your responses, they are really helping me as I try to learn UE4’s audio system! I will play around with the code you posted to try and get what I want up and running.

So, I start my sample almost exactly the same way you do from my AudioComponent. The problem I was having was that my sound only played once, for the 8k samples, and then quit. I have to keep calling “PlaySoundAtLocation” over and over again in order to get the callback to keep going.

For sure! I’m super excited.

Edit: One more question, may I ask what type SineOsc is? FSineOsc from “UnrealAudioTestGenerators.h”?

Ahh, yeah… I’m going to guess that the sound was getting stopped due to garbage collection! I.e. you’re creating a new object, but there was no clear owner of the object, and the GC thought it needed to be cleaned up. Make sure your audio component is storing the procedural sound wave as a transient property. You definitely don’t need to keep replaying it! It’ll stay on until you specifically say to stop it (or, I think, if the duration elapses, which I haven’t tested yet in the context of procedural sound waves).

Here’s the full interface of the procedural sound wave.

UCLASS()
class MYGAME_API UMySoundWaveProcedural : public USoundWaveProcedural
{
	GENERATED_BODY()

public:
	// Function which gets called when new audio is required to feed the playing voice instance during playback
	void GenerateData(USoundWaveProcedural* InProceduralWave, int32 SamplesRequired);

	// Raw procedural sample data (signed 16-bit PCM)
	TArray<int16> SampleData;

	// Simple sinusoidal oscillator
	FSineOsc SineOsc;
};

FSineOsc is just a simple little class that returns a float when you call NextSample().

class FSineOsc
{
public:
	FSineOsc();

	// Sets the frequency of the oscillator
	void SetFrequency(float InFrequency);

	// Returns the next sample value (between -1.0 and 1.0)
	float NextSample();

private:
	void UpdateFrequency();

	// Critical section to protect setting the target frequency
	FCriticalSection CritSect;
	float TargetFrequency;

	float Frequency;
	float Phase;
	float PhaseDelta;
	float PhaseDeltaDelta;
	float TargetPhaseDelta;
	bool bNewValue;
};

static float WrapTwoPi(float Value)
{
	while (Value > 2.0f * PI)
	{
		Value -= 2.0f * PI;
	}
	while (Value < 0.0f)
	{
		Value += 2.0f * PI;
	}
	return Value;
}

FSineOsc::FSineOsc()
	: TargetFrequency(0.0f)
	, Frequency(0.0f)
	, Phase(0.0f)
	, PhaseDelta(0.0f)
	, PhaseDeltaDelta(0.0f)
	, TargetPhaseDelta(0.0f)
	, bNewValue(false)
{
}


void FSineOsc::SetFrequency(float InFrequency)
{
	// Set the target frequency value from the game thread (safely)
	FScopeLock Lock(&CritSect);
	TargetFrequency = InFrequency;
}

void FSineOsc::UpdateFrequency()
{
	float NewTargetFrequency = 0.0f;

	// Read the set frequency value
	{
		FScopeLock Lock(&CritSect);
		NewTargetFrequency = TargetFrequency;
	}

	if (Frequency != NewTargetFrequency)
	{
		// If this is the first frequency value
		bool bIsInit = (Frequency == 0.0f);

		// Update target frequency
		Frequency = NewTargetFrequency;

		if (bIsInit)
		{
			PhaseDelta = 2.0f * PI * Frequency / 44100.0f;
			TargetPhaseDelta = PhaseDelta;
			PhaseDeltaDelta = 0.0f;
			Phase = 0.0f;
			bNewValue = false;
		}
		else
		{
			TargetPhaseDelta = 2.0f * PI * Frequency / 44100.0f;
			PhaseDeltaDelta = (TargetPhaseDelta - PhaseDelta) / 100.0f;
			bNewValue = true;
		}
	}
}

float FSineOsc::NextSample()
{
	// Called from the GenerateData function, which runs on the audio device thread
	UpdateFrequency();

	Phase += PhaseDelta;

	if (bNewValue)
	{
		if (FMath::Abs<float>(PhaseDelta - TargetPhaseDelta) < 0.00001f)
		{
			PhaseDelta = TargetPhaseDelta;
			bNewValue = false;
		}
		else
		{
			PhaseDelta += PhaseDeltaDelta;
		}
	}

	Phase = WrapTwoPi(Phase);
	return FMath::Sin(Phase);
}

HA! That was exactly it! Now I can play with audio the way I want- Super stoked now!

Edit: One last question. If I wanted to give my audio component access to the options from the SoundCue BP Editor, would I have to create a C++ class inheriting from SoundCue? Right now this is enough, but I would like access to SoundCue-style sound wave modulations in a future release. That is, unless the SoundCue results are precomputed, in which case I get not having that access from a USoundWaveProcedural.

SpawnSound*, gotcha.