Making a UE Plugin for Audio From Scratch

Introduction and Motivation

Hello friends!

We work a lot with third-party companies who are interested in taking their existing audio plugins that are used in DAW scenarios (VST, etc) and turning them into UE plugins.

Often they are not game developers, and they are small companies with limited resources, so they don’t have a ton of time to steep themselves in all the Unreal-isms of making plugins.

I’ve had some informal docs floating around to help people get started, but decided to put together an audio-focused quick-start tutorial on making Unreal plugins, linking a static library (your proprietary code!), and writing a DSP effect and synthesizer in Unreal Engine.

This is currently an informal doc I made quickly, and it has not gone through the official docs channel. I hope this is a working draft I can flesh out and make more official in the coming month or so.

Since there’s a lot of interest in making plugins right now (after the buzz of our UE5 announcement earlier this year), I figured it wouldn’t hurt to publish a more informal pre-doc on our forums sooner rather than later. It’ll also be a good way to get feedback on what’s confusing and where people would like more detail.

Getting Started

I highly recommend doing the following on Windows. I haven’t personally made an Unreal plugin from scratch on Mac, but of course it’s possible. It should be nearly the same except for how you make your static library (XCode vs Visual Studio) and some different configuration in our build system. This is usually how we do the Mac versions of plugins we make that depend on Mac 3rd party libraries. This particular point is sometimes an issue for audio plugin developers, since Mac is a huge platform for pro audio. Unfortunately, in the game space, Windows/PC dominates game development!

I also recommend doing this all in VS2019. I did this tutorial with the latest VS2019.

And, finally, you should install the UnrealVS plugin for Visual Studio. It makes dealing with UE and Visual Studio easier in a lot of ways, although it is not strictly necessary for this tutorial.

Although downloading the full UE source code from github and compiling everything from scratch is a great idea, I’ve found that it’s a daunting prospect for a lot of smaller companies and indie devs (or contractors) trying to just make something work quickly. Unless you have a very beefy machine or a distributed compiler (like Incredibuild), it’ll take a while to compile. Maybe 4+ hours. It’s worth having the full source if you want to really get deep into UE development, really grok how the audio engine works, and make github PRs to help us improve our interfaces and help fix bugs.

So instead, just download a binary build from the Unreal Launcher. The Unreal Launcher not only lets you play Fortnite (and many other games from the Epic Games Store), but it lets you download different versions of the Unreal Editor, for free.

For this tutorial, I downloaded 4.26 preview 4 (current preview as of this writing) and worked directly from just a game C++ project (which compiles very very quickly).

Once you’ve launched it, just make a new ultra-empty C++ project:

  • New “Game” project
  • Blank
  • C++ Project
  • No Starter Content
  • Everything else default.

Name your project whatever you want; I’ll name ours “MakingAnAudioPlugin”.

The reason we want to make a project is that it’ll serve as a nice place to test our new UE plugin we’re going to make. Also, there’s a Plugin Wizard we’ll use to stub out our new plugin.

At this point, the editor should have popped up, VS2019 should have loaded the new solution, and it’ll hopefully look something like this:

[Image: VSNewProject.png]

Now, expanding all the source, you’ll see the C++ source files the project creation wizard made for us.

None of it is really relevant to us, since we’ll shortly be making a plugin to put all our code in. But suffice it to say, this is where game-specific code goes. Every game made with UE4 has all its code in these folders.

If you wanted to be very inefficient, you could implement your cool audio plugin stuff in this code directly, including linking to your DLLs and static libraries. The only problem with this approach is, of course, that only this specific game project would be able to use it. I guess that’s the point of plugins, isn’t it?

Making a UE Plugin

So now what you should do is check out all the plugins that came with the binary Editor build by default by going to Edit > Plugins in the Editor’s main menu:

This will load up the plugin browser and allow you to behold all the plugins. UE plugins are very powerful and a huge and deep topic. You can fundamentally extend the editor with new tools, new UI, new widgets, new everything. If you don’t know yet, Unreal Engine has a UI C++ framework called Slate that lets you do a lot of great stuff.

There’s also an Unreal Engine Marketplace that features a ton of 3rd party UE plugins. In fact, I recommend you check that out as soon as you get your audio library working.

The audio engine team also makes plugins and puts them in the audio category of this plugin browser. I recommend checking those out! Our general ambition is to put all new major audio engine features into plugins as much as possible.

In any case, the thing to do here, besides browsing and checking out plugins, is to click the “New Plugin” button at the bottom right.

This will bring you to yet another Wizard.

If you want to link to a Dynamic Library, the Wizard has that option ready to go for you.

For this tutorial, let’s keep things even simpler: we’ll link to a static library ourselves, so choose the “Blank” option.

This will enable you to understand a bit more about how to link to your stuff. The process of loading a dynamic library is not that much different.
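
(For comparison, here’s a quick sketch from me, not something this tutorial uses, of what loading a dynamic library at module startup looks like. The module class name and DLL path are placeholders; the engine calls themselves, FPlatformProcess::GetDllHandle and FreeDllHandle, are real:)

#include "HAL/PlatformProcess.h"
#include "Misc/Paths.h"

// Hypothetical plugin module loading a DLL instead of static linking.
// Assumes a "void* DllHandle = nullptr;" member on the module class.
void FMyAudioPluginModule::StartupModule()
{
       const FString DllPath = FPaths::Combine(FPaths::ProjectPluginsDir(), TEXT("MyAudioPlugin/Binaries/Win64/MyAudio.dll"));
       DllHandle = FPlatformProcess::GetDllHandle(*DllPath);
}

void FMyAudioPluginModule::ShutdownModule()
{
       if (DllHandle)
       {
              FPlatformProcess::FreeDllHandle(DllHandle);
              DllHandle = nullptr;
       }
}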

Clicking “Create” will make Unreal Engine go off and do a bunch of stuff. Then your Visual Studio solution will pop up a “File Modification Detected” warning.

You definitely want to click “Reload All”. This will reload the solution and show you the new C++ files that the plugin wizard made.

This is a common workflow with Visual Studio. Generally speaking, we don’t really mess with Visual Studio directly; project and solution configuration is handled entirely automatically by the “Unreal Build Tool” (UBT). You should absolutely check out more about it, as it’s a powerful tool that allows us to support so many platforms. If you’ve used other build tools (CMake, etc), the concepts will feel familiar, and once you grok UBT, it’s much easier to work with. It’s written in C# and pretty easy to get around in.

You should now see this in your game’s C++ project “solution explorer” in Visual Studio.

Note the addition of a Plugins folder and your new plugin. I called mine “MyAudioPlugin”.

Now, what’s cool is the still-open Unreal Editor should have hot-loaded the new plugin right away. It SHOULD be listed in the plugins browser now as enabled.

Pretty snazzy!

Making A Static Library

Ok, since you’re already an accomplished audio plugin developer, you probably already have a static (or dynamic) library ready to go.

For the purposes of demonstration, let’s assume you don’t have one (or want to make one yourself from scratch and just learn how to do it).

  1. Outside of Unreal Engine, load up Visual Studio 2019 directly.
  2. In the start window, select “Create a new project”.
  3. Select “Windows”, “Library”, and “Windows Desktop Wizard” as the options.
  4. Name your project (e.g. “AudioCompanyLibrary”), click “Create”.
  5. In the final window, select “Static Library (.lib)” and “Empty Project”.

This will open up a totally blank Visual Studio solution that should be all configured to build a static library.

Add some .h and .cpp files to actually make a library.

I’ve attached my simple demo “AudioCompanyLibrary”. It has a FastSin implementation (using the Bhaskara I technique from the 7th century, which you can also find in our SignalProcessing library in UE4), a simple volume-scaling processor (the classic “hello world” audio effect), and a simple sine-tone generator uncreatively called “Oscillator”.
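
For the curious, the Bhaskara I approximation that FastSin below implements (extended to cover negative inputs) is:

sin(x) ≈ 16x(π − x) / (5π² − 4x(π − x)), for 0 ≤ x ≤ π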

Here’s the header:



#pragma once

namespace AudioCompanyLibrary
{
       // Sine approximation using Bhaskara I technique discovered in 7th century.
       float FastSin(float X); // note: not "static", so the definition in the .cpp has external linkage and is visible to other translation units

       class Oscillator
       {
       public:
              Oscillator(float InFrequency, float InSampleRate);
              virtual ~Oscillator() {}

              void SetFrequency(float InFrequency);
              float NextSample();

       protected:
              float Frequency = 440.0f;
              float SampleRate = 48000.0f;
              float Phase = 0.0f;
              float PhaseInc = 0.0f;
       };

       class VolumeScale
       {
       public:
              VolumeScale() {}
              VolumeScale(float InVolume) : Volume(InVolume) {}

              virtual ~VolumeScale() {}

              // Sets the volume scale
              void SetVolume(float InVolume);

              // Processes an input audio buffer, writes to given out buffer
              void ProcessAudio(const float* InBuffer, float* OutBuffer, int NumSamples);

              // Processes an input buffer in-place
              void ProcessAudio(float* InOutBuffer, int NumSamples);

       protected:
              // Volume to set the scale
              float Volume = 1.0f;
       };
}


And here’s the .cpp file:



#include "AudioCompanyLibrary.h"
#include <algorithm>
#include <stdlib.h>
#include <limits>

#define _USE_MATH_DEFINES
#include <math.h>

namespace AudioCompanyLibrary
{
       float FastSin(float X)
       {
              // Clamp the input away from zero so the divide below is safe
              const float SafeX = X < 0.0f ? std::min(X, -std::numeric_limits<float>::min()) : std::max(X, std::numeric_limits<float>::min());
              const float Temp = SafeX * SafeX / fabsf(SafeX); // i.e. fabsf(SafeX); note fabsf, not the integer abs() from <stdlib.h>
              const float Numerator = 16.0f * SafeX * ((float)M_PI - Temp);
              const float Denominator = 5.0f * (float)M_PI * (float)M_PI - 4.0f * Temp * ((float)M_PI - Temp);
              return Numerator / Denominator;
       }

       Oscillator::Oscillator(float InFrequency, float InSampleRate)
       {
              if (InSampleRate > 0.0f && InFrequency > 0.0f)
              {
                     Frequency = InFrequency;
                     SampleRate = InSampleRate;
                     PhaseInc = InFrequency / InSampleRate;
              }
       }

       void Oscillator::SetFrequency(float InFrequency)
       {
              if (InFrequency > 0.0f && SampleRate > 0.0f)
              {
                     Frequency = InFrequency;
                     PhaseInc = Frequency / SampleRate;
              }
       }

       float Oscillator::NextSample()
       {
              Phase = (float)fmod(Phase, 1.0f);

              const float Radians = 2.0f * Phase * (float)M_PI - (float)M_PI;
              float Sample = FastSin(-1.0f * Radians);
              Phase += PhaseInc;

              return Sample;
       }

       void VolumeScale::SetVolume(float InVolume)
       {
              Volume = InVolume;
       }

       void VolumeScale::ProcessAudio(const float* InBuffer, float* OutBuffer, int NumSamples)
       {
              for (int SampleIndex = 0; SampleIndex < NumSamples; ++SampleIndex)
              {
                     OutBuffer[SampleIndex] = Volume * InBuffer[SampleIndex];
              }
       }

       void VolumeScale::ProcessAudio(float* InOutBuffer, int NumSamples)
       {
              for (int SampleIndex = 0; SampleIndex < NumSamples; ++SampleIndex)
              {
                     InOutBuffer[SampleIndex] *= Volume;
              }
       }
}
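
(As an aside, if you want to sanity-check the library before bringing it into UE, a tiny standalone test program like this, my own sketch linked against the same .lib, does the trick:)

#include <cstdio>
#include "AudioCompanyLibrary.h"

int main()
{
       // Generate a quarter second of a 440 Hz tone at 48 kHz, scaled to half volume
       AudioCompanyLibrary::Oscillator Osc(440.0f, 48000.0f);
       AudioCompanyLibrary::VolumeScale Gain(0.5f);

       static float Buffer[12000];
       for (int SampleIndex = 0; SampleIndex < 12000; ++SampleIndex)
       {
              Buffer[SampleIndex] = Osc.NextSample();
       }

       // Use the in-place processing overload
       Gain.ProcessAudio(Buffer, 12000);

       printf("First samples: %f %f %f\n", Buffer[0], Buffer[1], Buffer[2]);
       return 0;
}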


Linking to Our Static Library Part 1: Putting the Library in Folders

So, first make sure you build your static library and get your “product”, which is the “.lib” file. This is the thing you’ll ship with your plugin. Your source code can be hidden and your secret sauce safe from the world. In our case, it’s the 7th-century fast sine implementation we want to keep secret.

Now, take the .lib file and your header file and bring them back to the game project solution where we made our plugin stub.

Create a new directory in your plugin’s source directory named after your library (or whatever you want to name it).

My external static library I made was called “AudioCompanyLibrary”, so I’ll call it that.

The full path should be:

MakingAnAudioPlugin\Plugins\MyAudioPlugin\Source\MyAudioPlugin\AudioCompanyLibrary

In that folder, make two new subfolders called “Includes” and “Libraries”.

In the Includes folder, put the public include(s) from your library that people will use to interact with your library. In my case, it’s the “AudioCompanyLibrary.h” file.

Then, in the Libraries folder, make a sub-folder for every platform your library supports. In my case, I am only supporting x64, but if you support other platforms which have different library formats, you’ll want to put them behind platform-named subfolders. If you want to see examples of what this looks like for a more complex scenario, see all the bazillion plugins UE ships with.

You can also optionally add one more folder layer to define build configurations (e.g. if you want to ship debug .libs). Since I might want to add a Debug or Development build later, I added a Release sub-folder, and since I built a Release version of my demo .lib, I put it there.

My whole path looks like:

MakingAnAudioPlugin\Plugins\MyAudioPlugin\Source\MyAudioPlugin\AudioCompanyLibrary\Libraries\x64\Release\AudioCompanyLibrary.lib

Linking to Our Static Library Part 2: Telling the Unreal Build Tool (UBT) about it.

Now, open up the *.Build.cs file of our Unreal Engine plugin (in my case, it’s called MyAudioPlugin.Build.cs).

This file is the configuration file for the Unreal Build Tool. It has a LOT of power. Check out the docs on UBT. For the purposes of getting right to it, all you need to do to statically link to your library is three things:

  1. Tell UBT where to look for our header(s):


PublicIncludePaths.AddRange(
       new string[] {
           Path.Combine(ModuleDirectory, "AudioCompanyLibrary/Includes")
       }
);


This will make it so our Unreal Engine plugin can do,


#include "AudioCompanyLibrary.h"

without failing to find the file.

Note, to use “Path” as a thing, you’ll need to add “using System.IO;” at the top of the Build.cs file.

  2. Tell UBT to statically link to our .lib file on x64 platforms:


if (Target.Platform == UnrealTargetPlatform.Win64)
{
       PublicAdditionalLibraries.AddRange(
              new string[] {
                     Path.Combine(ModuleDirectory, "AudioCompanyLibrary/Libraries/x64/Release/AudioCompanyLibrary.lib")
              }
       );
}


  3. Tell the plugin that we’ll want to depend on the “AudioMixer” module, since we’re going to be using some classes in that UE module (it’s a separate module from Engine and is where our source effect and synthesis APIs live):


PublicDependencyModuleNames.AddRange(
       new string[]
       {
              "Core",
              "AudioMixer",
              // ... add other public dependencies that you statically link with here ...
       }
);


You can get fancier with this, but I just did it straight up and direct. The key thing is that “PublicAdditionalLibraries” is an array in UBT that modules can add to.

Obviously, Target.Platform is the thing to use to load different libraries depending on the platform. Similarly, to check out more complex cases covering all the different platforms Unreal Engine supports, by all means, check out the other quintillion plugins for examples (and read the docs).

The full Build.cs file looks like this at this point:



// Your copyright
using System.IO;
using UnrealBuildTool;

public class MyAudioPlugin : ModuleRules
{
    public MyAudioPlugin(ReadOnlyTargetRules Target) : base(Target)
    {
        PCHUsage = ModuleRules.PCHUsageMode.UseExplicitOrSharedPCHs;

       PublicIncludePaths.AddRange(
              new string[] {
                     Path.Combine(ModuleDirectory, "AudioCompanyLibrary/Includes")
              }
       );

       PrivateIncludePaths.AddRange(
              new string[] {
                     // ... add other private include paths required here ...
              }
       );


       PublicDependencyModuleNames.AddRange(
              new string[]
              {
                     "Core",
                     "AudioMixer",
                     // ... add other public dependencies that you statically link with here ...
              }
       );


       PrivateDependencyModuleNames.AddRange(
              new string[]
              {
                     "CoreUObject",
                     "Engine",
                     "Slate",
                     "SlateCore",
                     // ... add private dependencies that you statically link with here ...
              }
       );

       if (Target.Platform == UnrealTargetPlatform.Win64)
       {
              PublicAdditionalLibraries.AddRange(
                     new string[] {
                            Path.Combine(ModuleDirectory, "AudioCompanyLibrary/Libraries/x64/Release/AudioCompanyLibrary.lib")
                     }
              );
       }

       DynamicallyLoadedModuleNames.AddRange(
              new string[]
              {
                     // ... add any modules that your module loads dynamically here ...
              }
       );
   }
}



Generate The Solution

So how do we tell Visual Studio we did stuff? As I said, we don’t really interact with Visual Studio directly to add new source files or configurations. We let UBT do that.

The way to do it in a project derived from a Binary Build (it’s a bit different if you’re compiling from UE4 Source), is to right-click on the *.uproject file (at the root of the game project you’re working on) and select “Generate Visual Studio Project Files”.

This is the thing to do whenever you want to add files to your project.

Remember to let Visual Studio reload once it prompts.

UBT will show you some errors in a window that pops up if you did anything wrong. It’s usually very helpful, as far as build tools go anyway.

Now Do Stuff With Your Library

At this point, your plugin is statically linking to your library but there’s nothing anybody can do with it.

Here, you should spend some time learning about a variety of important UE4-isms.

Probably the crash-course, top couple of things you should know about are UObjects (and how garbage collection works), the reflection system (UCLASS/USTRUCT/UPROPERTY/UFUNCTION), and Blueprint.

This little tutorial is not the place to go deep on all of these topics, but I will take it to the point of actually using the simple demo library I made and make a source effect and a synth.

Source Effects And Synthesis Overview

In a nutshell, source effects are DSP effects that apply to individual sound sources. This is in contrast to Submix Effects, which apply DSP effects to mixed sources. The APIs differ primarily to deal with a couple of fundamental differences between source-based effects and submix-based effects: source effects are given a lot of information about the source itself (e.g. position, velocity, volume, pitch, etc), while a submix has to deal with more complex channel configurations (surround sound, sound-fields, etc). I wanted source effects to be an easy place to experiment with DSP, since they only need to deal with mono and stereo sources (currently only 1-2 channels are supported in source effects). However, we will probably add the option for source effects to support any number of channels of audio in the future.

UE supports procedural (synthetic) sound sources for real-time audio generation. It does this in async tasks so as to reduce the chance of performance impacts on the audio render thread, the audio thread, or the game thread. Because they are async background tasks, they can configure their buffer size independently of the audio renderer’s buffer size. They also take parameters more directly from the game thread for reduced latency.

Using Source Effects

There’s some nuance to this but the best way to figure out what you can do in source effects is to check out all the ones we’ve already made in the “Synthesis” plugin that ships with UE.

Before we make a new one, let’s see what source effects we already have.

First, make sure the “Synthesis And DSP Effects” plugin is enabled:

This should be enabled as it’s on by default.

With this enabled, to create a source effect in the “Content Browser” of the Editor, simply right-click a blank place in the content browser, go to the “Sounds” category, and then the “Effects” sub-category and click “Source Effect Preset”.

Here you’ll be presented with a little chooser to pick what kind of source effect to make.

Your plugin will automatically add to this list!

For now, just pick one to inspect. For fun, let’s pick the Ring Modulator, since it’s everyone’s favorite DSP effect.

[Image: RingMod.png]

Double-clicking on the asset (a “settings” asset) loads up the “details panel”, which lets you set the properties of the source effect.

These are the ring-mod settings:

Classic ring mod stuff!

How do we use this though!?

Since we made an empty project, we have no audio assets to test with. Let’s import one. I’ve attached a simple .wav I made (in TestAudio.zip) to this post if you want to use that.

To import it, just drag it into the content browser from your desktop. It’s THAT easy!

You can preview the sound in the Content Browser by just pushing the play button that appears when you hover the mouse over it.

Double-click it to open the details panel and set it to looping so we don’t have to keep re-triggering it when we mess with the Ring Mod in a second (type “looping” in the details search at the top to find it fast).

To apply our Ring Mod to this sound, we need to make one more asset first: the “Source Effect Chain”. The idea of source effect chains as separate assets is the expectation that sound designers will want to craft an effect chain once and quickly apply it to many, many sounds all at once (likely using our Property Matrix Editor).

To make a Source Effect Chain, it’s similar to how we made the Source Effect Preset asset:

Now double-click the Source Effect Chain we made (name it whatever you want) and add a new entry in the “Chain” array. Then drag the Ring Mod Source Effect Preset we made onto the Preset slot as follows:

Now, all we have to do is hook up the source effect chain to the sound.

Open back up the sound wave asset we dragged in and find the slot called “Source Effect Chain” (just type “effect” in the details search at top to find it fast):

Now we have the source effect chain hooked up to the sound. When we preview the sound in the content browser, it’ll have its audio fed through the Ring Mod source effect!

What’s cool is we can modify the Ring Mod settings while the sound is playing, so you can preview preset changes in real-time like a DAW.

You can also modify the preset values in Blueprint or C++ to tie the effect to gameplay. We utilize this quite a bit in Fortnite!

Here’s an example of the stereo-delay source effect being driven by gameplay in Fortnite in real-time – Fortnite: The Device - Full Event (No Commentary) - YouTube

Making A Source Effect With Our Library

Turns out, source effects are very easy to make!

Add the following code as header and cpp files in the Public and Private folders of your plugin (e.g. AudioCompanySourceEffects.h/.cpp).

Note that the Unreal Engine uses a convention for DLL exports based on the name of the module. E.g. in my case it’s MYAUDIOPLUGIN_API, since my plugin module is called “MyAudioPlugin”.

Also note that the plugin code we are writing here builds into a DLL, but we’re statically linking the library we made into that DLL.



#pragma once

#include "CoreMinimal.h" // some basic minimal includes (still a lot, but this lets us use U-stuff

#include "Sound/SoundEffectSource.h"  // This is the include to allow us to make a source effect
#include "AudioCompanyLibrary.h" // This is our static library!  We can do this because of our Build.cs changes to PublicIncludePaths

#include "AudioCompanySourceEffects.generated.h" // Long story on this, but the native reflection works by code-generation created by the "Unreal Header Tool" which is included here


// This is a ustruct (not a U-object!) which is contained in a U-asset below (which is a U-object!).
// This uses the power of reflection to generate UI automatically with a bunch of useful properties (ranges, tooltips, etc)
USTRUCT(BlueprintType)
struct FAudioCompanyVolumeScaleSettings
{
        GENERATED_USTRUCT_BODY()

       // The volume scale to use
       UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Settings", meta = (ClampMin = "0.0", ClampMax = "1.0", UIMin = "0.0", UIMax = "1.0"))
       float Volume = 1.0f;
};

// This is the instance of the effect. One is created for every source which is playing with the settings asset
class MYAUDIOPLUGIN_API FAudioCompanyVolumeScale : public FSoundEffectSource
{
public:
       // Called on an audio effect at initialization on main thread before audio processing begins.
       virtual void Init(const FSoundEffectSourceInitData& InitData) override;

       // Called when an audio effect preset is changed
       virtual void OnPresetChanged() override;

       // Process the input block of audio. Called on audio render thread.
       virtual void ProcessAudio(const FSoundEffectSourceInputData& InData, float* OutAudioBufferData) override;

protected:

       // Our third party library object
       AudioCompanyLibrary::VolumeScale VolumeScale;
       int32 NumChannels = 0;
};

UCLASS(ClassGroup = AudioSourceEffect, meta = (BlueprintSpawnableComponent))
class MYAUDIOPLUGIN_API UAudioCompanyVolumeScalePreset : public USoundEffectSourcePreset
{
       GENERATED_BODY()

public:
       // This is a little helper macro to expose a bunch of functions easily
       EFFECT_PRESET_METHODS(AudioCompanyVolumeScale)

       // Can override the preset color in the content browser if you want
       virtual FColor GetPresetColor() const override { return FColor(127, 155, 101); } // FColor channels are uint8, not float

       // This makes it so blueprints can modify the effect dynamically
       UFUNCTION(BlueprintCallable, Category = "Settings")
       void SetSettings(const FAudioCompanyVolumeScaleSettings& InSettings);

       // This is the thing which shows up in the content browser to let you expose the settings
       UPROPERTY(EditAnywhere, BlueprintReadOnly, Category = "Settings", meta = (ShowOnlyInnerProperties))
       FAudioCompanyVolumeScaleSettings Settings;
};




This is the cpp file:



#include "AudioCompanySourceEffects.h"

void FAudioCompanyVolumeScale::Init(const FSoundEffectSourceInitData& InitData)
{
       bIsActive = true;
       NumChannels = InitData.NumSourceChannels;
}

void FAudioCompanyVolumeScale::OnPresetChanged()
{
       UAudioCompanyVolumeScalePreset* VolumeScalePreset = CastChecked<UAudioCompanyVolumeScalePreset>(Preset);
       FAudioCompanyVolumeScaleSettings Settings = VolumeScalePreset->GetSettings();

       VolumeScale.SetVolume(Settings.Volume);
}

void FAudioCompanyVolumeScale::ProcessAudio(const FSoundEffectSourceInputData& InData, float* OutAudioBufferData)
{
       // Check out the InData struct that's passed in to see all the stuff source effects can respond to.
       // For this demo, we're just feeding the audio to our static library for processing
       VolumeScale.ProcessAudio(InData.InputSourceEffectBufferPtr, OutAudioBufferData, InData.NumSamples);
}

void UAudioCompanyVolumeScalePreset::SetSettings(const FAudioCompanyVolumeScaleSettings& InSettings)
{
       UpdateSettings(InSettings);
}



Drop these files into our plugin code, regenerate the solution from the .uproject file, and recompile; we’ll now see our new source effect pop up as an option when you go to create a new source effect!

This all works by the power of reflection! When the engine starts up, it scans for all implementations of the U-preset base class and uses reflection to figure out the rest (how to make instances, how to feed audio to them, etc).

Now you can add it to the source effect chain we used before and behold our amazing ability to change the volume of a sound in a totally over-complicated way.
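
(And, like the Ring Mod earlier, you can drive the preset from code. Here’s a rough sketch of what that could look like from gameplay C++; the actor and its preset property are hypothetical, but GetSettings/SetSettings come for free with the preset macro:)

// Hypothetical gameplay code driving our new preset. Assumes an actor with a
// UPROPERTY pointing at the preset asset, e.g.:
//
//     UPROPERTY(EditAnywhere, Category = "Audio")
//     UAudioCompanyVolumeScalePreset* VolumeScalePreset;

void AMyGameActor::SetMusicVolume(float InVolume)
{
       if (VolumeScalePreset)
       {
              // Copy the current settings, tweak, and push them back to all playing instances
              FAudioCompanyVolumeScaleSettings NewSettings = VolumeScalePreset->GetSettings();
              NewSettings.Volume = InVolume;
              VolumeScalePreset->SetSettings(NewSettings);
       }
}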

Using Synthesis In UE4

Now that we’ve got our effect working, let’s learn about synthesis.

Synthesis is done using what’s called a “SynthComponent”. Components in general are a whole thing in UE4, but suffice it to say, a component gives you a handle to a playing sound (e.g. an Audio Component) and allows you to easily attach it to other things in a game (e.g. Actors) to get stuff like spatial position, etc.

You can control components from Blueprint quite easily too! UFunctions defined on them are automatically exposed and things are quite easy.

I highly recommend checking out a Blueprint tutorial to orient yourself with blueprints. There are many!

I will be extremely precise with steps since I know you’re probably wanting to just get on with it as fast as possible.

To check out synthesis in UE4, let’s make a new Blueprint in the content browser:

  • Right-click in the content browser, select “Blueprint Class”
  • Select “Actor”
  • Name it something
  • Then double click it to open up the Blueprint Editor tool.
  • Click on “Add Component” (green + button top left)
  • Type “Modular Synth” to add a new component of that type (it’s a fully featured subtractive synth!)
  • Click on the “Event Graph” tab to move to the event graph
  • Left-click-drag the Modular Synth component from the components list and “drop” it onto the event graph
  • Now left-click drag off the blue pin on the right of the oval and let go to see the “contextual menu” of things you can do with this object type
  • Type “start” and select it in the filtered options, then left click (or push enter)
  • This is a “start” function which tells the synth to, well, start.
  • The white input on the left side shaped like a triangle is an “execution pin”. Things connected to this pin will make this node do something.
  • To make the node do something, left-click-drag from the “Event BeginPlay” node (which should be greyed out in the event graph) and connect it to the Start node.
  • Now left-click-drag off the “Modular Synth” node we used before and let go to see the contextual menu again.
  • Type “note on”, select it, then push enter
  • This is like a midi-note on event (though it’s not midi! note: we do have a midi plugin in UE4 that you totally could hook up to this)
  • Type “60” for the note, “100” for the velocity, and leave duration alone (-1 means hold indefinitely; otherwise you can specify a note duration)
  • Connect this “note on” node to the “start” node’s execution pin.
  • Now, go back to the content browser and “drag” the Blueprint Asset you created into the open empty level in front of you.

Everything should now look like this:

Now when you push the “Play In Editor” button (looks like a play button at the top) you will play the level and hear the glorious synth tone.
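
(If you’d rather drive the synth from C++ instead of Blueprint, a rough equivalent of the graph above looks something like this sketch. Double-check the exact NoteOn signature in EpicSynth1Component.h, and note you’d also need the “Synthesis” module in your Build.cs dependencies:)

#include "EpicSynth1Component.h"

// Hypothetical actor owning a ModularSynth component
AMySynthTestActor::AMySynthTestActor()
{
       ModularSynth = CreateDefaultSubobject<UModularSynthComponent>(TEXT("ModularSynth"));
}

void AMySynthTestActor::BeginPlay()
{
       Super::BeginPlay();

       ModularSynth->Start();
       ModularSynth->NoteOn(60.0f, 100, -1.0f); // note 60, velocity 100, -1 duration means hold
}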

There’s a TON you can do with this, beyond playing a single note, of course. Check out my GDC talk in 2017 for a more interesting demo of synthesis.

Making A Synth Using The Static Library

Here are the header and cpp files that implement the UE4 synthesis API on top of our static library’s synthesis API (the 7th-century sine tone!).



#pragma once

#include "CoreMinimal.h"
#include "Components/SynthComponent.h"
#include "AudioCompanyLibrary.h"
#include "AudioCompanySynths.generated.h"

class FAudioCompanySineTone : public ISoundGenerator
{
public:
       FAudioCompanySineTone(int32 InSampleRate, int32 InNumChannels, float InFrequency, float InVolume);
       virtual ~FAudioCompanySineTone();

       //~ Begin ISoundGenerator
       virtual int32 GetNumChannels() { return NumChannels; }
       virtual int32 OnGenerateAudio(float* OutAudio, int32 NumSamples) override;
       //~ End ISoundGenerator

       void SetFrequency(float InFrequency);
       void SetVolume(float InVolume);

private:
       int32 NumChannels = 2;
       float Volume = 1.0f;
       AudioCompanyLibrary::Oscillator Osc;
};

UCLASS(ClassGroup = Synth, meta = (BlueprintSpawnableComponent))
class MYAUDIOPLUGIN_API USynthComponentAudioCompanySineTone : public USynthComponent
{
       GENERATED_BODY()

       USynthComponentAudioCompanySineTone(const FObjectInitializer& ObjInitializer);
       virtual ~USynthComponentAudioCompanySineTone();

public:
       // The frequency (in hz) of the tone generator.
       UPROPERTY(EditAnywhere, BlueprintReadOnly, Category = "Settings", meta = (ClampMin = "10.0", ClampMax = "20000.0"))
       float Frequency;

       // The linear volume of the tone generator.
       UPROPERTY(EditAnywhere, BlueprintReadOnly, Category = "Settings", meta = (ClampMin = "0.0", ClampMax = "1.0"))
       float Volume;

       // Sets the frequency of the tone generator
       UFUNCTION(BlueprintCallable, Category = "Tone Generator")
       void SetFrequency(float InFrequency);

       // Sets the volume of the tone generator
       UFUNCTION(BlueprintCallable, Category = "Tone Generator")
       void SetVolume(float InVolume);

       virtual ISoundGeneratorPtr CreateSoundGenerator(int32 InSampleRate, int32 InNumChannels) override;

protected:
       // The runtime instance of the sound generator
       ISoundGeneratorPtr Osc;
};


Here’s the cpp implementation:




#include "AudioCompanySynths.h"

FAudioCompanySineTone::FAudioCompanySineTone(int32 InSampleRate, int32 InNumChannels, float InFrequency, float InVolume)
       : NumChannels(InNumChannels)
       , Volume(InVolume)
       , Osc(InFrequency, (float)InSampleRate) // Oscillator takes floats and has no default constructor, so construct it in the initializer list
{
}

FAudioCompanySineTone::~FAudioCompanySineTone()
{
}

int32 FAudioCompanySineTone::OnGenerateAudio(float* OutAudio, int32 NumSamples)
{
       check(NumChannels > 0);
       int32 NumFrames = NumSamples / NumChannels;
       int32 SampleIndex = 0;
       for (int32 FrameIndex = 0; FrameIndex < NumFrames; ++FrameIndex)
       {
              float SampleValue = Volume * Osc.NextSample();
              for (int32 ChannelIndex = 0; ChannelIndex < NumChannels; ++ChannelIndex)
              {
                     OutAudio[SampleIndex++] = SampleValue;
              }
       }
       return NumSamples;
}

void FAudioCompanySineTone::SetFrequency(float InFrequency)
{
       SynthCommand([this, InFrequency]()
       {
              // this may zipper, but this is a hello-world style synth
              Osc.SetFrequency(InFrequency);
       });
}

void FAudioCompanySineTone::SetVolume(float InVolume)
{
       SynthCommand([this, InVolume]()
       {
              // this will zipper, but this is a hello-world style synth
              Volume = InVolume;
       });
}

USynthComponentAudioCompanySineTone::USynthComponentAudioCompanySineTone(const FObjectInitializer& ObjInitializer)
       : Super(ObjInitializer)
{
}

USynthComponentAudioCompanySineTone::~USynthComponentAudioCompanySineTone()
{
}

void USynthComponentAudioCompanySineTone::SetFrequency(float InFrequency)
{
       if (Osc.IsValid())
       {
              FAudioCompanySineTone* SineTone = static_cast<FAudioCompanySineTone*>(Osc.Get());
              SineTone->SetFrequency(InFrequency);
       }
}

void USynthComponentAudioCompanySineTone::SetVolume(float InVolume)
{
       if (Osc.IsValid())
       {
              FAudioCompanySineTone* SineTone = static_cast<FAudioCompanySineTone*>(Osc.Get());
              SineTone->SetVolume(InVolume);
       }
}

ISoundGeneratorPtr USynthComponentAudioCompanySineTone::CreateSoundGenerator(int32 InSampleRate, int32 InNumChannels)
{
       return Osc = ISoundGeneratorPtr(new FAudioCompanySineTone(InSampleRate, InNumChannels, Frequency, Volume));
}



The key thing to note here is that the USynthComponent implements the virtual function “CreateSoundGenerator”. This function is called by the audio renderer when the synth component starts.

We shifted our API to this method recently; it basically makes the synth component UObject a factory for creating the runtime instance of the synth. After lots of suffering, we realized there is literally no way to safely deal with UObjects from the audio render thread (or async generation tasks). The synths generate their audio in async generation tasks (and are thus extremely parallelized), but because UObjects are garbage collected (specifically, they can be marked as “pending kill” at any time), it’s extremely difficult to make sure the UObject isn’t deleted out from under the rendering async task. By making a runtime instance that is not a UObject, we can more safely manage ownership and lifetime.

The unfortunate downside is that the API for synthesis has grown more complicated than it used to be! I haven’t yet ported all the old synths over to the new method, so feel free to check out the code for the synth I demoed earlier (EpicSynth1Component.h/.cpp).

Also important to note: when exposing Blueprint functions to control the synth, you need to be extremely careful about thread safety. I’ve implemented a utility to make this easier: a “SynthCommand” function that lets you stuff a lambda into a thread-safe queue, which is pumped before the generation callback on the correct thread (an async task). This way you don’t have to worry much about thread safety. You STILL do have to worry, though. Don’t pass UObjects over, for example! That would defeat the whole point of the ISoundGenerator pattern.

By the way! A pro-tip on finding code from the editor: for most things, you can jump directly from the editor to the code in Visual Studio.

[Image: FindingCode.png]

Using the new synth we made is very similar to the ModularSynth we used before.

Here’s a BP script utilizing it with a random pitch and volume:
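
(In C++ terms, again just a sketch with a hypothetical actor, the equivalent of that graph would be something like:)

// Hypothetical BeginPlay on an actor that has our synth component attached,
// mirroring the random pitch/volume Blueprint above
void AMySineToneActor::BeginPlay()
{
       Super::BeginPlay();

       if (USynthComponentAudioCompanySineTone* Synth = FindComponentByClass<USynthComponentAudioCompanySineTone>())
       {
              Synth->Start();
              Synth->SetFrequency(FMath::FRandRange(220.0f, 880.0f));
              Synth->SetVolume(FMath::FRandRange(0.1f, 0.5f));
       }
}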

Conclusion

Hope this was helpful for jumping straight into making a UE plugin for audio. To make the most of UE and audio, though, I highly recommend digging deeper.

I’ve attached a game project (MyGame.zip) with the .lib statically linked and some demos using it. Extract it, generate the Visual Studio solution, then build it. I removed all the intermediate products (which get put in the Intermediate subfolder) and the built binaries to avoid a huge file.

Further topics are:

  • Implementing a Submix Effect
  • Implementing a Soundfield Submix Effect
  • Implementing a spatial audio plugin (e.g. for HRTF/Binaural rendering)
  • Custom UI using Slate (and UMG)

This is an informal tutorial posted here in the spirit of faster dissemination of information as I am getting a lot of interest from 3rd party audio plugin companies!

I hope to formalize this in the coming months. It’ll likely become a lot more professional as it makes it through the docs team so treat this as a pre-doc!


Thanks, nice!!! So glad you made this 🙂


Thank you @Minus_Kelvin, this will come in handy. But can you tell me if anyone has created an ACN Ambisonic B-Format 4-channel WAV renderer based upon multiple listener rotations? I have done this manually in Sequencer, repeating the same sequence three times to capture the camera/listener rotation at 0:0:X for L/R, 0:0:X+90 for F/B, and 90:0:X for U/D, then bringing these into a DAW or NLE to create W/X/Y/Z channels by summing and matrixing into three M-S channels for X/Y/Z and one W channel. Here is a test sample of the results: https://youtu.be/JtG6Gpiprkg

It would be great to be able to record the three listener positions simultaneously to create the three WAV files. Even better would be to capture the three listener-relative rotations and run them through a function to create the four ACN Ambisonic B-format WAV channels, or better yet, multiplex and meta-tag them into one 4-channel AmbiX WAV. Even just having four outputs to feed into a multi-channel USB mixer/recorder would work. Sadly, I am coming up with very little on multi-listener to multi-channel recording. The latter seems like it would be more of a priority to go along with virtual production capability.

Thanks for doing this - really helpful, both for creating audio plugins and for seeing how the new thread-safe method for generating audio works.

If anyone is following this, there are a couple of things to watch out for in the code above. First, in the Build.cs the code shown has ‘new string]’ but should be ‘new string[]’. And in the AudioCompanySynths.cpp code, the Osc = AudioCompanyLibrary::Oscillator(InFrequency, InSampleRate); line calls the constructor with (int32, int32) but the constructor code in the AudioCompanyLibrary is (float, float), so that needs a little adjusting.

Thanks again - UE4 is a great platform for developing procedural audio and you have developed a very useful and thoughtful platform.


Guys, how are y’all doing… I have an issue with sequencer audio scrubbing… it plays the audio file, but when I scrub for lip syncing it doesn’t work… I have UE 4.26.2… anyone know anything about this issue? Please save my life 🙁

Ok, I just did this tutorial today, a few notes:

  1. This is excellent and exactly the info I was looking for! (I wanted to know how to stream a custom audio buffer with PCM samples; I’ll work off the synth example for that)

  2. I was able to follow along and successfully create the sample app with the source effect and the synth, sounded great.

  3. The int32 vs float constructor issue that SandyBeachSystem previously mentioned hasn’t been fixed in the posted code at this time.

  4. I also got another error: it can’t instantiate “Oscillator Osc;” in FAudioCompanySineTone because Oscillator doesn’t also have a constructor with no params. (If anybody needs help with that, add “Oscillator() {}” under “public:” in the Oscillator class.)

  5. I couldn’t find any links to download the mentioned .wav or finished app zip, maybe I’m dumb, dunno.

  6. If there is a “final” or updated version somewhere else, please add a prominent link to it.


There is a good chance that this approach will become redundant with MetaSounds in UE5. You can build your own custom nodes already, although it is a bit hacky in early access, but you can see all of the classes are ready for implementation in the future to do it properly. Native MetaSounds seems to do most things, and the ability to write custom nodes allows anything “special” to be done without having to write tons of code. Also, it will provide better integration into the game engine (through C++ or Blueprints). Something to look forward to in 2022 🙂

Yep, I recently tried to load this project in UE5 and, yep, the audio stuff was not happy. Will delve into the new way to play a streaming custom audio buffer soon, I guess. Sigh!

Thank you for this tutorial!
May I take this opportunity to ask you a couple of questions?
Is it possible to use a SynthComponent in a SoundCue?
I am a total newbie, and it looks to me like the only sound sources available in the SoundCue Editor are sound files and META objects…

I also figured out that you can apply any kind of post-processing effect to the Synth directly from the Details tab of the component, and this is quite amazing. But for example, when I apply the “attenuation”, I don’t see the influence sphere around the instantiated object.

Is it because the Synth is a “subcomponent” of “Default Scene Root”? (Does the Synth have to be a subcomponent?) Or is there a property that you should declare in C++?

Thank you very much!