Input => synth latency with new audio engine

Hi guys,

Great work with the new synthesis tools in the new audio engine. We’re having good fun with these.

I’m curious what kind of latency I should expect from a simple keystroke triggering a ModularSynth NoteOn and NoteOff. After mapping a few keys to separate Synths and “playing” them quite quickly, I’m seeing latencies well beyond what’s commonly acceptable for real-time instrumental-style performance.

  • UE 4.18.3 using super simple Blueprint on a blank test environment.
  • Super fast Falcon Tiki w/ loads of RAM and a Titan X card (I’m too lazy to look up the full specs).
  • CM Storm Rapid I clicky keyboard

Thanks!

rob

Did your synth preset have instant attach, when testing latency?
Also, FPS matters too.

Don’t know what the actual delay situation is, or if it differs between editor/packaged game, sorry.

Sorry, what do you mean by “instant attach”? Apologies if I’m missing something obvious.

My FPS should be off the charts, Nvidia 16 GB Titan X G-sync’d at 144 Hz to the display.

thanks

Oh wow, I meant “attack”. As in envelope attack on the synth!

It doesn’t matter too much if your FPS is high or low, as long as it’s LOCKED. Higher is still better, as long as you can keep it there. You can lock it in project settings.
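
For reference, here’s a minimal sketch of the DefaultEngine.ini equivalent of that project setting (the section/key names are the UEngine config properties as I remember them, so verify against your engine version):

```
; DefaultEngine.ini - lock the game thread to a fixed frame rate
; (same as Project Settings > Engine > General Settings > Framerate)
[/Script/Engine.Engine]
bUseFixedFrameRate=True
FixedFrameRate=60.0
```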

Still looking at this. With FPS locked to 60 fps (tried 30 as well), and with all SynthPreset values that could/should affect latency zeroed out, I’m still seeing ~150-200 ms of delay from a key press through to the resulting note, on both NoteOn and NoteOff. Here’s a quick video capturing what I’m talking about:

https://rpi.box.com/s/a0o5lwrclr2m962qpbhtlcdth4d79kmz

Synth Preset values:

AttackTime 0.0
DecayTime 0.0
ReleaseTime 0.0
Legato 0
Retrigger 1
StereoDelayEnabled 0
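
In case it’s useful, here’s roughly how I’d zero the same values from C++ instead of a preset asset (the setter names are my best guess at the Synthesis plugin’s UModularSynthComponent API, so treat them as assumptions and check the plugin headers for your engine version):

```
// Rough sketch, assuming a UModularSynthComponent* named Synth already exists
// on the actor. Setter names are assumed from the Synthesis plugin's
// BlueprintCallable API; verify against the plugin headers before relying on them.
Synth->SetAttackTime(0.0f);          // attack time in msec
Synth->SetDecayTime(0.0f);
Synth->SetReleaseTime(0.0f);
Synth->SetEnableLegato(false);
Synth->SetEnableRetrigger(true);
Synth->SetStereoDelayIsEnabled(false);
```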

Any thoughts? Perhaps this is related to the timing discrepancies between the Game and Audio/AudioRendering threads?

This has got Ethan Geller’s name all over it :wink:

You can adjust the callback buffer size under Project Settings > Platforms > [your platform]; lowering it gives shorter latency, but remember that it can cause processing trouble (underruns) under load.

In addition to the buffer, which is ~23 ms at the default settings, we have to take into account the input latency from your keyboard/controller to your computer, your sound output hardware + speaker setup, your FPS, whether it’s running in the editor in BPs or as a packaged game…and more.

Yes, the primary question of this thread is what throughput latency one should expect from key press, to input event, to synth generation, to audible output. Running the same processes in ChucK, Pure Data, SuperCollider, or Max results in keystroke-to-sound latencies well under perceivable thresholds, so what we should expect from UE’s new synthesis capabilities is really the question here.

The callback buffer size is something I’ll play with next; in addition, it seems there’s potential in 4.19 for a further ~33 ms latency reduction?

All of this is part of determining whether Aaron’s new synths are suitable for a new VR instrument my team is building. As with any real-time controlled musical instrument, latency needs to be perceptibly nil for it to feel reactive and responsive.

UPDATE: Looks like Project Settings > Platforms > Windows > Audio > Callback Buffer Size bottoms out at 512. Setting this, changing the Audio Mixer Sample Rate down to 44100 and 22050, and increasing the number of Source Workers had no noticeable effect on latencies. Also built sample projects; again, no noticeable effect on throughput latencies.

Yeah, game engine audio latency is always an issue. It’s not optimized for input latency; it’s optimized for stability and rendering capability with minimal CPU impact.

Are you using the audio mixer? The synth actually works in the old audio engine too, so double-check that you’re really running on the audio mixer!
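
For reference, the usual ways to opt into the audio mixer (per the Early Access quick-start guide; the ini section and switch are quoted from memory, so verify against the guide):

```
; Config/Windows/WindowsEngine.ini - opt the Windows build into the new audio mixer
[Audio]
AudioDeviceModuleName=AudioMixerXAudio2
```

Or launch with the command-line switch instead, e.g. UE4Editor.exe MyProject.uproject -game -audiomixer.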

The issue is that BP code is executed on the game thread tick (so it has a max latency of one frame at your FPS), then it hands off a message to the audio thread (which is currently locked to the game thread update tick), then the audio render thread consumes audio thread messages.

The worst-case output latency due to thread communication is then:

33 ms (GT and AT, assuming 30 FPS) + 23 ms (ART, 1024 frames at 44100 Hz) = 56 ms

If you run in the editor (which your video indicates), where the AT doesn’t exist (because we write to UObjects in editor-mode), audio updates happen strictly after the GT update. In this case you’ll get an even worse worst-case latency:

33 ms x 2 + 23 ms = 89 ms <- this is super bad of course!

If you run with the AT (launch with -game), and can get your game running at 60 FPS, that’ll cut down the GT/AT latency by half.

If you also reduce the buffer size to 512 frames, that’ll cut the render-buffer latency in half:

16 ms (GT/AT at 60 FPS) + 12 ms (ART, 512 frames at 44100 Hz) = 28 ms <- this is getting into the realm of acceptability.
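
If you want to replay these numbers with different settings, here’s a small illustrative C++ sketch of the same worst-case model (nothing engine-specific; it just encodes the thread hops and buffer math above):

```
#include <cstdio>

// Worst-case output latency model from the post above: one game/audio-thread hop
// per frame (two hops when running in-editor, where audio updates run after the
// game thread update), plus one audio render buffer.
float WorstCaseLatencyMs(float Fps, int BufferFrames, int SampleRate, bool bInEditor)
{
    const float FrameMs  = 1000.0f / Fps;                        // GT/AT hop
    const float BufferMs = 1000.0f * BufferFrames / SampleRate;  // ART buffer
    const int   NumHops  = bInEditor ? 2 : 1;
    return NumHops * FrameMs + BufferMs;
}

int main()
{
    std::printf("%.1f ms\n", WorstCaseLatencyMs(30.0f, 1024, 44100, false)); // ~56 ms
    std::printf("%.1f ms\n", WorstCaseLatencyMs(30.0f, 1024, 44100, true));  // ~89 ms
    std::printf("%.1f ms\n", WorstCaseLatencyMs(60.0f,  512, 44100, false)); // ~28 ms
    return 0;
}
```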

Then you have to account for MIDI input latency. Of course, even with low-latency devices, it’ll add to the total.

The problem then is jitter. As you know, jitter is ultra important for MIDI performance, and we absolutely do not optimize for reducing jitter; to reduce it we’d actually want to add latency and schedule events so they land consistently.

If you play UE4 games, you’ll find that keyboard input (or controller input) to SFX output is in an acceptable range for games, but probably not for a musical performance.

Thanks Aaron, this is super helpful.

I’ll try to get an example down to that ~30 ms range and see how it feels. I’ll make sure I’m using the audio mixer; I set my engine .ini to use the new engine but maybe I’m still doing something wrong.

We’ll be driving this eventually with Oculus Touch controllers so MIDI input won’t be part of the equation. Though who knows what cans of worms that will open up.

Thanks again.

Rob

Hi Rob!!!

I agree with everything that Aaron said, but I was wondering if you’ve tried packaging the game, and have noticed a difference in input latency with the packaged version vs. the editor.

Also, Aaron wrote the synthesizer, not me, so theoretically it has his name written all over it :smiley:

Hah, success! This whole thread was just an excuse to draw you out Ethan. :wink:

I did try packaging the game with no noticeable difference, but I’m going to dig in a bit more and make sure I had everything set per Aaron’s suggestions and try again.

VR, so it’ll be at 90fps probably?

VR mainly gives you hand motion, buttons, and analog/touch axis controllers. Playing virtual drums suffers less from lag than playing a MIDI drum set.
In a VR scenario where there are unavoidable delays, there are a few little tricks I’ve found…
On virtual drums or keys etc., make the triggering collision mesh a little bigger than the visible mesh.
Avoid having buttons trigger timing-sensitive stuff. If you play things by swinging an arm through them in VR, it’s easier for the player to adjust, and the early collision mesh lets us match up visuals and sound. Analog triggers are very fun for controlling sound effects.

Hate learning about so many limits, but other than that, great thread!

Sounds good, Rob! Let me know if it helps.
@ArthurBarthur you bring up a super good point with the early collision mesh for drum triggers with motion controls; accompanying the early collision mesh with a velocity check is usually helpful for getting rid of accidental triggers. Luke Dahl’s air drumming paper is super helpful for this kind of stuff: https://ccrma.stanford.edu/~lukedahl/pdfs/Dahl-AirDrummingGestures-CMMR15.pdf
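
Here’s a rough C++ sketch of the oversized-trigger-plus-velocity-gate idea, purely illustrative (the class name, radius, and threshold are made-up placeholders, and the actual note trigger is left out since it depends on which synth you’re driving):

```
// DrumPad.h - illustrative sketch only; values are assumptions, not anything
// from this thread or the Synthesis plugin.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Components/SphereComponent.h"
#include "Components/StaticMeshComponent.h"
#include "DrumPad.generated.h"

UCLASS()
class ADrumPad : public AActor
{
    GENERATED_BODY()

public:
    ADrumPad()
    {
        // Visible drum head.
        VisibleMesh = CreateDefaultSubobject<UStaticMeshComponent>(TEXT("VisibleMesh"));
        RootComponent = VisibleMesh;

        // Trigger volume a little bigger than the visible mesh, so the hit is
        // detected slightly early and the sound lines up with the visual contact.
        Trigger = CreateDefaultSubobject<USphereComponent>(TEXT("Trigger"));
        Trigger->SetupAttachment(RootComponent);
        Trigger->SetSphereRadius(18.0f); // assumed size, tune to your mesh
    }

protected:
    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        Trigger->OnComponentBeginOverlap.AddDynamic(this, &ADrumPad::OnTriggerOverlap);
    }

    UFUNCTION()
    void OnTriggerOverlap(UPrimitiveComponent* OverlappedComp, AActor* OtherActor,
                          UPrimitiveComponent* OtherComp, int32 OtherBodyIndex,
                          bool bFromSweep, const FHitResult& SweepResult)
    {
        // Velocity gate: ignore slow, accidental brushes (threshold is a guess to tune).
        const float SpeedCmPerSec = OtherComp ? OtherComp->GetComponentVelocity().Size() : 0.0f;
        if (SpeedCmPerSec < 50.0f)
        {
            return;
        }

        // Trigger the note here (e.g. a ModularSynth NoteOn or a pre-loaded drum
        // sample), scaling a MIDI-style velocity from SpeedCmPerSec.
        // Call omitted: it depends on which synth/audio component you're driving.
    }

private:
    UPROPERTY(VisibleAnywhere)
    UStaticMeshComponent* VisibleMesh;

    UPROPERTY(VisibleAnywhere)
    USphereComponent* Trigger;
};
```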

Cool, helpful paper! So by reading and predicting nothing but controller motion (in context), we can animate and generate audio for the hand and whatever we’re holding…

Yup! Luke’s paper also brings up good points about elbow motion vs. hand motion, which is interesting for stuff like elbow IK for VR motion controllers.

Here’s my setup…

[Read] New Audio Engine: Early Access Quick-Start Guide (New Audio Engine: Quick-Start Guide - Audio - Epic Developer Community Forums)

Project Settings…

  • Engine > General Settings > Framerate > Fixed Frame Rate: 60.0
  • Platforms > Windows > Audio > Audio Mixer Sample Rate: 22050
  • Platforms > Windows > Audio > Callback Buffer Size: 512
  • Platforms > Windows > Audio > Number of Source Workers: 4
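
For anyone who prefers editing config directly, the DefaultEngine.ini equivalent of the audio settings above looks roughly like this (section and key names are what I believe 4.18/4.19 writes; double-check against the file your Project Settings page actually generates):

```
; DefaultEngine.ini - sketch only; key names may differ slightly by engine version
[/Script/WindowsTargetPlatform.WindowsTargetSettings]
AudioSampleRate=22050
AudioCallbackBufferFrameSize=512
AudioNumSourceWorkers=4
```

(The Fixed Frame Rate keys live under [/Script/Engine.Engine], as in the earlier sketch.)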

< ! > Don’t use Bluetooth devices! (Keyboard, Mouse, Headsets & Speakers, USB Hubs, etc.)
< ! > Don’t use PIE (Play in Editor), package instead!

[Tip] Generally, mouse inputs (clicks) are faster than keyboards.

Read More…

“While working on Battle Royale we identified some issues with input latency in the engine that particularly affected 30Hz games. We were able to make improvements to thread synchronization, reducing latency by around 66ms (the reduction will be around half that in a 60Hz title) to address this problem. These changes make a noticeable improvement to the feel of the game, making it more responsive and easier to aim.” [Link] (Unreal Engine Improvements for Fortnite: Battle Royale - Unreal Engine)

Hello! Bumping this thread as I’m having a similar issue with latency, though unfortunately I’m working on a project for live performance rather than VR, so I can’t use the solutions suggested above.

I’m experiencing latency between a MIDI note fired from a connected device and the playing of an audio component. I’ve followed the instructions above to no avail.

I’m wondering if there have been any updates to the AudioMixer that I might be missing out on, or any workarounds people may have found in the meantime?

I’m working with Blueprints, but I’m also wondering whether using C++ could give me access to changing the audio driver from DirectSound to something more geared towards reducing latency, like ASIO or WASAPI. Or could anyone let me know if there’s any way to do this within BPs?

Thank you!
Jack

I’m in the exact same scenario as you. I found that after doing everything mentioned in this thread, the MIDI latency is almost acceptable, but definitely not optimal. I was also wondering if there’s any way to change to ASIO or WASAPI like you mentioned. It most likely won’t be possible in BP though; from my research I’ve found that using Wwise or FMOD, or implementing RtAudio, could be possible solutions, but they might be excessive if there’s a simpler way of using WASAPI or some other means of lowering the latency.

If you want ASIO, I highly recommend just building your own audio engine on the side, and calling it from your Unreal actor code. This of course means you have to build all the file parsing and filter effects and whatnot on your own too, but that’s what it takes if you want to build a DAW / synth-host, rather than a game …
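
For what it’s worth, here’s a minimal sketch of that “engine on the side” idea using RtAudio (5-style API, mentioned above): open a low-latency stream once and have your actor code just flip atomics that the callback reads. Everything below is illustrative, not UE code; the note-trigger wiring from your actors is omitted.

```
// Minimal "audio engine on the side" sketch using the RtAudio 5-style API.
// Nothing here is Unreal-specific; your actor code would just call
// NoteOn()/NoteOff() from its input handlers.
#include "RtAudio.h"
#include <atomic>
#include <cmath>

class SimpleSideSynth
{
public:
    // Pass RtAudio::WINDOWS_ASIO or RtAudio::WINDOWS_WASAPI to the RtAudio
    // constructor below if you built RtAudio with those backends.
    bool Start()
    {
        if (Dac.getDeviceCount() == 0) { return false; }

        RtAudio::StreamParameters Params;
        Params.deviceId  = Dac.getDefaultOutputDevice();
        Params.nChannels = 2;

        unsigned int BufferFrames = 128; // small buffer => a few ms of output latency
        try
        {
            Dac.openStream(&Params, nullptr, RTAUDIO_FLOAT32, SampleRate,
                           &BufferFrames, &AudioCallback, this);
            Dac.startStream();
        }
        catch (...) { return false; }
        return true;
    }

    void NoteOn(float FreqHz) { Frequency = FreqHz; bGate = true; }
    void NoteOff()            { bGate = false; }

private:
    static int AudioCallback(void* OutputBuffer, void*, unsigned int NumFrames,
                             double, RtAudioStreamStatus, void* UserData)
    {
        auto* Self = static_cast<SimpleSideSynth*>(UserData);
        auto* Out  = static_cast<float*>(OutputBuffer);

        const float Freq  = Self->Frequency.load();
        const bool  bIsOn = Self->bGate.load();
        const float Step  = 2.0f * 3.14159265f * Freq / static_cast<float>(SampleRate);

        for (unsigned int i = 0; i < NumFrames; ++i)
        {
            const float Sample = bIsOn ? 0.2f * std::sin(Self->Phase) : 0.0f;
            Self->Phase += Step;
            if (Self->Phase > 2.0f * 3.14159265f) { Self->Phase -= 2.0f * 3.14159265f; }
            *Out++ = Sample; // left
            *Out++ = Sample; // right
        }
        return 0; // keep the stream running
    }

    static constexpr unsigned int SampleRate = 48000;

    RtAudio Dac;                          // default backend; see note above for ASIO/WASAPI
    std::atomic<float> Frequency{440.0f}; // written by the game thread, read in the callback
    std::atomic<bool>  bGate{false};
    float Phase = 0.0f;
};
```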