[Feature Request] Sample accurate timer event

Hi,

I’d like to request a sample-accurate timing event that is exposed to Blueprints.

I consider this an important feature for audio-based programs developed within UE4.

I hope you take this into consideration.

Regards,

ULLS

I’ve basically managed to write something that kind of works by creating a new USynthComponent class and adding a dynamic multicast delegate which I broadcast from within the OnGenerateAudio() method. This appears to work great when simply printing a string from the event node in the Blueprint; however, when I use it to trigger other synth components, the engine crashes and I weep a little…

I will need to install the debug symbols to understand the crash but I’m assuming it is a threading issue?

In any case, does anyone have any clues on how I can get rock solid timing for triggering audio in UE4?

Regards,

ULLS

The only thing that comes to mind is running your own separate thread, but I am not sure how happy the audio engine would be if you were to fire audio events off of the main thread. This is just a stab in the dark, I might be completely off-base.

Thanks DamirH.

I’m in luck: coincidentally, one of the C++ devs at my work is giving a talk on threading and thread safety in C++ today, so hopefully I’ll learn something and will be able to get it working tonight :slight_smile:

:slight_smile:

Yeah, this is a very, very bad idea. One of the fundamental reasons you can’t get “sample accurate” timing in BP (without a fancy “scheduling” mechanism of some sort) is that BP executes on the game thread at a much slower tick frequency than the audio thread. The audio thread is not updating at “sample accurate” rates either: it runs once per callback block, i.e. every (block size ÷ sample rate) seconds — e.g. 1024 frames ÷ 48000 Hz ≈ 21 ms.

OnGenerateAudio() is called from the audio render thread (or a worker thread owned by the render thread). Any calls from this function into BP (or any other game-thread-only code) will definitely cause crashes. Only do audio-render-thread-safe operations from this function – i.e. DSP/synthesis. Any parameters which are set from the game thread need to be safely queued using a thread-safe mechanism. Look at EpicSynth1Component.cpp to see examples of how this is done. E.g. the SynthCommand functions take a lambda as an argument and then queue the lambda to actually be executed on the audio render thread.

Getting audio render thread information BACK to the game thread needs a similar mechanism – i.e. you need to queue the information on the audio render thread to then be consumed on the game thread. Usually you use the ::Tick function on a tickable object and a TQueue to carry the information. See SourceEffectEnvelopeFollower.cpp for an example of how I did that for the envelope follower source effect (which takes audio-thread information, i.e. the envelope of a sound, and sends it to the game thread for delegate notification in BP).

Thanks, that’s useful information.

I had already explored your envelope follower code and considered going down a similar route; however, I realized the broadcast event was occurring on the game thread, and therefore the fire rate would vary across devices, which is obviously unacceptable for a step sequencer. To be clear, I’m not really interested in returning any data like you are doing in the envelope follower. I just want to fire an event or call a function with high precision. I’m currently using a timer in BP to fire events and it works great on my desktop machine, but it’s not so great on mobile platforms, where the timing intervals are inconsistent, especially at high rates.

If you were to make a rock solid step sequencer in UE4 that had high trigger precision both on low and high end devices how would you tackle this?

Regards,

ULLS

Okay, after re-reading your post I can see that you’ve already given me enough clues to get something done… Thanks!!

I’ll have to completely re-architect my project but I think I can achieve something similar to what I want…

If I pass the data I require safely into the USynthComponent, I can then use it to manipulate the buffers directly (i.e. fill the buffer with zeros when the step is not active). It’ll definitely be a lot more work than I anticipated though.

If there’s a simpler way of doing this please do let me know.

Kind regards,

ULLS

Depends on what mobile platforms you’re talking about. Android in general has latency issues. iOS should be more doable. However, please note that the new audio engine isn’t really well supported on mobile yet!

In general, game-thread timers should be sufficient for most sequencing applications, though if you’re doing a ton of triple bass-drum beats for hard-core death metal music, it might be best to do that with something else. Definitely not an “easy” project.

Audio programming, especially when dealing with thread synchronization issues, is always tricky.

This stuff is rather over my head… But looking at it from a different angle: is there a way we could schedule/queue precisely timed audio events from the game thread? For example, I used Unity’s AudioSource.PlayScheduled to queue precisely timed piano tones ahead of time, so they coincided exactly with the delay of previously played tones, in this unfinished game:

https://youtube.com/watch?v=PaUJVuKitz4

I’d read somewhere (sorry I can’t find references atm) that the brain will forgive/overlook slight mistimings of visual information, while audio mistimings will be much more jarring. So even though player movement in the above example would be unpredictable and render frames would often not fall precisely on a musical beat (especially on mobile), I could time the movement animation to “land” on its final step roughly on a beat, but more importantly, I could queue the “landing” tone to play exactly on the beat closest to when the upcoming landing would occur.

Sorry, I don’t know if this would solve OP’s problem, but it might be a way to work around the latency of the game thread while maintaining precise audio timing. It’s a tradeoff of reduced responsiveness to player input, since you’ll be scheduling however many frames/milliseconds ahead of time you need to avoid “dropping” a beat (haha) if the game thread lags more than e.g. a 1/16th note or whatever granularity your sequencer uses.

Would love to see something like the above PlayScheduled function from Unity, it worked great for my case :wink: