Is there any way I can input an audio signal from my audio interface or built-in mic?
This might be useful: https://answers.unrealengine.com/questions/347976/basic-microphone-input-with-ue4.html
That, and a fairly recent addition was the overhaul of the media framework, which now supports capture (input) with device enumeration (selecting an input source).
https://docs.unrealengine.com/latest/INT/Engine/MediaFramework/ You can set this up in Blueprints too, and play the audio with spatialization, etc.
Media framework does capture?
I added a new Audio Capture plugin for 4.19 (it should be previewable right now) that lets you get audio from a new “mic component” object. It’s pretty early/experimental, but it gives you the building blocks to do some cool stuff with plugins if you want to dig into the source and build on it. This is the sort of thing that is obviously going to attract a ton of feature creep, and I didn’t have much time to work on it!
I implemented it quickly for an internal project that was interested in driving gameplay from envelope-following mic input. So along with this, I added the ability to get BP delegates for audio components (when you are using the audio mixer, of course!). So with the mic component, just set up the BP delegates and you’ll get smoothly interpolated, envelope-followed amplitude from the mic into your game.
The cool thing is that it’s implemented like a synth component and you can play the mic audio like any other audio source. That means you can apply source effects, spatialize the output, do whatever. So if you have some crazy idea that you want to do with the mic input stream (or make new assets or something insane), just make a source effect and drop it onto the mic component. There’s a lot of stuff you could do.
Here’s a link to the audio capture plugin source on github –
Keep in mind I only did the mic device backend for PC, so if you want to use other platforms/devices you’ll need to implement it for those platforms, but if you know C++, it should be doable. I used RtAudio for the backend which presumably would work on Mac/Linux without much effort but I didn’t have time to test on those platforms.
It does, using the WMF media player (2.0). Set up a Stream Media Source asset, then instantiate a Media Sound component in a Blueprint and point it at that asset to provide the capture. It is also possible to choose between different audio capture sources. Unless I’m mistaken on the concepts, the answer is yes.
I also took your recommendation and implemented the RtAudio class on my own (using the copy you provided earlier with the editor-only plugin), and after a little playing around it was possible to expose a great many features of this class, including enumeration (probing) of the capture sources as well as multiple bit depths for higher-quality sources.
Actually, this new audio capture module would also benefit from such options, letting us select the capture source instead of relying on the default device, which unfortunately reports not the microphone but the line input! Not to mention it won’t take care of VR audio inputs either, which can be a pain as well.
The media framework can help a bit here to fetch the list of devices, and it is already Blueprint-compatible. While it doesn’t specifically tell you which is the default mic input (a missing feature?), at least the developer can set up a widget for the user to choose between devices. RtAudio can then parse this input string to select the preferred device for capture.
Will you accept PRs for this plugin?
EDIT / SOLVED:
I forgot to enable the new audio mixer in the WindowsEngine.ini located at <location of ue4.19>\Engine\Config\Windows.
Yeah, I’ll accept PRs for anything. I mean, “accept” as in I’ll look at ’em.
I wrote the Mic Capture Component very quickly, in a day, for an internal project that wanted to drive gameplay with microphone input via envelope following. There are obviously a ton of things we can do with it. Despite it being a bit thin on flexibility and features, I opted to ship it since it does work as is, though I knew it’d generate a lot of interest and feature requests.
“I forgot to enable the new audio mixer in the WindowsEngine.ini located at <location of ue4.19>\Engine\Config\Windows.”
Where exactly? I could not find it in WindowsEngine.ini
You can see it there, in the top 3-4 lines: AudioDeviceModuleName, twice. It’s set to XAudio2 now; you want to switch it to AudioMixerXAudio2. A semicolon at the start of a line means that line is disabled. Also, check out the quick-start sticky in this forum, as that has explicit instructions for exactly this.
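For reference, the relevant section of WindowsEngine.ini looks roughly like this (reconstructed from memory of the 4.19 layout; check your own file for the exact lines and keep the unrelated entries as they are):

```ini
[Audio]
; Old default audio backend - the leading semicolon disables a line:
;AudioDeviceModuleName=XAudio2
; New audio mixer backend, required for the Audio Capture plugin:
AudioDeviceModuleName=AudioMixerXAudio2
```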
Hi, I’m really interested in this new Audio Capture plugin. I’ve been able to use the envelope value in my project to change some parameters, but I’d like to be able to choose the audio source, and possibly use several audio sources at once. I don’t really understand how something like the “enumerate audio capture” node could be used for this. I’m just a beginner in UE4, so I don’t really understand how you implemented the RtAudio class or what it’s used for. Can you help me?
Hi. I got this to work, but the latency is very bad. Is there a way to work around this?
You can experiment with buffer settings. https://github.com/EpicGames/UnrealE…pture.cpp#L148 And look into capture frequency as well to make sure you grab the packets in time.
Generally speaking, the lower the buffer size, the lower the latency becomes, but this is only a layer over DirectSound, which is never going to give you low latencies. That would require a different audio capture/rendering method and, in many cases, special hardware, such as a studio-grade sound card sitting directly in a shiny PCI-E slot.
Audio is rather similar to video playback in that it always requires high performance to stay in sync with whatever medium or event triggered it. For video, the problem is solved by the GPU/graphics accelerator - reliable hardware that produces the content in close to real time. But what about audio? The key issue is that no audio accelerator is available in consumer PCs - not really - what you call the sound card integrated on your motherboard is very cheap equipment, not designed for low-latency work.
But of course you can put an expensive sound card in your computer and use advanced audio rendering techniques such as ASIO, WASAPI, or Kernel Streaming to allow low-latency playback or recording. These, however, start to fall outside the context of gaming, since the CPU cost of such high-performance use cases can easily get in the way of gameplay - and don’t underestimate the compatibility issues with average user rigs.
DirectSound should be considered the worst of all in terms of record/playback latency; however, it is the most reliable, most compatible, and most CPU-efficient method of producing audio, which favors many game-specific needs. It is also duplex (parallel in/out) and will readily prioritize playback over recording when the resources have to be shared. No luck here with low-latency recording.
The RtAudio lib this component is based on actually ships with many different playback backends as well; they just aren’t implemented in the engine yet. GitHub - thestk/rtaudio: A set of C++ classes that provide a common API for realtime audio input/output across Linux (native ALSA, JACK, PulseAudio and OSS), Macintosh OS X (CoreAudio and JACK), and Windows (DirectSound, ASIO, and WASAPI) operating systems.
The RtAudio lib has a convenient API to list and select audio devices for input/output/duplex use. It will require a bit of coding to allow this plugin to use a device the user chooses. You should be aware that VR equipment is supported by RtAudio but not implemented as a general feature, so any change in implementation should take this into consideration and choose the device automatically when it is required/allowed.
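The enumeration side of that API looks roughly like the sketch below. This requires linking against the RtAudio library and a machine with audio devices, so it is not buildable standalone; treat it as an outline of the calls involved rather than drop-in code:

```cpp
// Sketch: probe RtAudio for devices that can capture (assumes RtAudio is linked).
#include "RtAudio.h"
#include <cstdio>

int main() {
    RtAudio audio;
    unsigned int count = audio.getDeviceCount();
    for (unsigned int i = 0; i < count; ++i) {
        RtAudio::DeviceInfo info = audio.getDeviceInfo(i);
        // Only list devices that were probed successfully and have input channels.
        if (info.probed && info.inputChannels > 0)
            printf("[%u] %s (%u in)%s\n", i, info.name.c_str(),
                   info.inputChannels, info.isDefaultInput ? " [default]" : "");
    }
}
```

A list like this is what you would surface in a settings widget, then pass the chosen index into the stream-open parameters instead of the default device.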
The easiest way for you to choose an input is to go into the Windows audio recording settings and set your preferred device as the default input. If you want to expose a setting for this in-game, extending this component will likely become necessary.
Selection of the capture device happens here: https://github.com/EpicGames/UnrealEngine/blob/4.19/Engine/Plugins/Runtime/AudioCapture/Source/AudioCapture/Private/AudioCaptureRtAudio.cpp#L72 It uses the default device the RtAudio lib reports. It doesn’t implement the communications device, so it only reports the default line-input device instead.
Hi, I am trying to make it work - to have in-level audio from my microphone and use the audio visualization tools. Any way to do that, just using Blueprints?
What would you recommend for implementing this backend for mobile devices, both iOS and Android? As far as I can see, there’s no RtAudio implementation for mobile.
Just asking: for the audio capture, when will this be possible in a final build on different platforms?
Looks like 4.24 opened it up to other platforms!
I’m wondering if the plugin will support grabbing the microphone stream as bytes to be used in things like a speech-to-text engine.