Music Timing System

Hi, I’ve been trying to build a stable timing system to trigger musical transitions from one piece of music to another. Obviously these transitions need to be accurately timed, but I am finding this impossible at the moment. Basically what I need is a reliable and consistent ‘metronome’ so that I can trigger different musical events on defined beat points.

I’ve tried setting up a looping retriggerable delay node which then triggers a music cue, but the timing of the delay loops is off, so there are inconsistent gaps between the musical events. My next step was to build a Blueprint system using the Event Tick node and the DeltaTime value to calculate the timing intervals, but I’m getting wildly inconsistent timings between ticks (this is based on a custom Kismet object that I created in UDK that works fairly reliably).

Just in case I’m doing something wrong, here’s a description of my tick-based system along with screenshots… When the system is activated it stores the current time using the Get Real Time Seconds node and uses a pre-defined bpm value to calculate the various beat-division intervals (i.e. 120 bpm = 2 bps = 500 ms for a 1/4 beat, 250 ms for a 1/8 beat, etc.), and then calculates the time point for each of these intervals. So, if the current time is 5.385 secs and the beat interval for a 1/4 beat is 500 ms, the next time point for a 1/4 beat is 5.885 secs (5.385 + 0.5). Then every tick the system reads the Get Real Time Seconds node and compares this to the ‘ideal’ beat time. If the actual time falls within a window (half the DeltaTime) of the ‘ideal’ time, an event is fired off to trigger the musical event and the next ‘ideal’ beat time is calculated. So, if the DeltaTime is 0.03 secs and the ‘ideal’ time for a 1/4 beat is 5.885 secs, then an ‘actual’ time of 5.885 ± 0.015 would trigger an event.
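In rough C++ terms (a sketch only; the class, delegate and variable names here are illustrative, not from the actual Blueprint), the per-tick check boils down to this:

```cpp
// Sketch of the tick-based metronome described above. AMetronome, OnBeat,
// NextBeatTime and BeatInterval are illustrative names, not engine API.
void AMetronome::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    const float Now = GetWorld()->GetRealTimeSeconds();

    // Fire when the actual time falls within half a frame of the 'ideal' beat time.
    if (FMath::Abs(Now - NextBeatTime) <= DeltaSeconds * 0.5f)
    {
        OnBeat.Broadcast();            // trigger the musical event
        NextBeatTime += BeatInterval;  // 0.5 s per 1/4 beat at 120 bpm
    }

    // Note: if one frame runs long, Now can jump straight past the window
    // and the beat is skipped entirely - which is exactly the problem noted below.
}
```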


Even if the ticks were consistent, the likelihood is that the ‘actual’ (accurate) time point of a musical cue will fall in between ticks, so a frame-based timing system is going to be problematic, as we are very sensitive to timing issues within music.

Does anyone have any suggestions on how to get a reliable and consistent timing system within UE4, or, more ideally, how to use sample-accurate timing from the soundcard through a custom C++ based object/actor?

You could measure the amount of ‘off time’ and factor it into whatever you are doing.

Could you explain what you are doing that needs this accuracy?

I’m attempting to create a metronome system so that I can trigger musical events at specific (and accurate) musical timings, with a view to creating a generative music system.
Because we are extremely sensitive to timing within music, this needs to be fairly accurate; but more importantly it needs to be consistent, and that is the main problem I’m having at the moment…
So, let’s say we are using a musical tempo of 120 bpm: every beat (pulse) would then occur at 500 ms intervals, and the common sub-divisions would be at 250 ms and 125 ms.
If the tick events were consistent, then half a frame’s worth of ‘error’ in the timings might not be that noticeable at the larger interval times, but it could become problematic as the intervals get smaller.
In an ideal world I would be able to fire off an event at a specific time interval, but using frame-based systems for timing means that you can’t do this, as the ‘ideal’ time point may well fall between frames.

Does that make sense?

I don’t think you’ll be able to get such accurate timing from the tick system; it’s not designed for that. I would do it on a separate thread if I were in your position. I don’t know much about UE4’s sound system and whether sounds can be started from different threads. If not, in the worst case you’d need to write your own sound mixer and output the result into a buffer that’s streamed by UE4 (that should be possible, since Vorbis audio files are decoded in separate threads).
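To give a flavour of the “own mixer” idea, here is a sketch of the sample-accurate part only: events are placed at exact sample offsets while you fill the output buffer yourself. Handing the buffer to UE4 (e.g. via a procedural/streaming sound wave) is assumed rather than shown:

```cpp
#include <cstdint>
#include <vector>

// Fill one mono 16-bit PCM block, placing a short click exactly on each beat.
// Because positions are counted in samples, the timing is exact regardless of
// framerate. SamplesWritten is a running total carried across blocks.
void FillBlock(std::vector<int16_t>& Block, int SampleRate, double Bpm,
               int64_t& SamplesWritten)
{
    const int64_t SamplesPerBeat =
        static_cast<int64_t>(SampleRate * 60.0 / Bpm);

    for (size_t i = 0; i < Block.size(); ++i)
    {
        const int64_t Pos = SamplesWritten + static_cast<int64_t>(i);
        // 5 ms click at the start of every beat, silence otherwise.
        Block[i] = (Pos % SamplesPerBeat < SampleRate / 200) ? 20000 : 0;
    }
    SamplesWritten += static_cast<int64_t>(Block.size());
}
```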

Pedro is probably right. Have the delta time print to screen every tick to see what I mean. It’s not all that stable.
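Something like this in an actor’s Tick will show the jitter (AMyActor is a placeholder name; message key 1 just makes each message overwrite the last):

```cpp
void AMyActor::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);
    // Print the frame delta on screen every tick to visualize the jitter.
    GEngine->AddOnScreenDebugMessage(1, 0.f, FColor::Yellow,
        FString::Printf(TEXT("DeltaTime: %.4f s"), DeltaSeconds));
}
```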

I’m in the planning phase of adding music to my game too. The best way I can see to sync tracks is to have them all start at the same time and fade them in and out. This might put a bit of a load on processing and memory, but I’m hoping it will be OK. Portal 2 has several tracks all playing at the same time that just fade in and out.
Please let us know if you do manage to work something else out! Audio in UE4 still seems pretty primitive (as it was in UE3).

Maybe this could help you:

https://forums.unrealengine.com/showthread.php?4138-Audio-Virtualization-WIP-teaser-vid&p=32921&viewfull=1#post32921

Good luck!

Alas, there is not currently a rock-solid timing mechanism for music and audio playback to a tempo. The suggestion of timers is a good one; we used a system like this on Bulletstorm to allow branching on a measure-by-measure basis. This was done at the C++ level though. Audio will at some point get its own thread, at which point this functionality will be possible natively. Sorry I couldn’t be of more help.
Best-
Zak

Hi guys,
Thanks for the replies!
The frustrating thing is that I built a custom Kismet object for UDK using UnrealScript which works in the same way as my Blueprint attempt (i.e. based on Tick and delta times), and while it was not 100% accurate it was good enough for most purposes… I guess I may have to look into C++ scripting…

@SuperNovaBen - Yes, your idea would work fine for parallel music stems where you just want to fade tracks in and out. But I want to be able to transition between different pieces of music, and have the occasional stinger, so I need a system that keeps the musical pulse accurately.

@ZakBelica - Are you able to offer any advice about how you went about this at the C++ level?

If I get anywhere I’ll be sure to share…!

teed

Hello again Teed,

I’m really interested in this (I used to do a lot of sound design and engineering). Having a good tempo system would of course be ideal, but there are other workarounds.

Let’s say your music is made up of various phrases and passages. You could cut them up, loop each a certain number of times, and then move on to the next as required. If you needed to change the music to fit the context (e.g. some action started or the mood changed), you could wait until the end of the currently playing loop and change the music then. Of course, depending on your loop lengths, this will introduce some delay between the situation changing and the music following.

My first idea about cross-fading (which I think you understood) was that if all the pieces are at the same tempo, they will stay synced automatically as long as you start them all at the same time. You can then cross-fade at a speed of 1 ms to make the music jump to the next piece depending on the situation. This would work if you only needed, say, 4 pieces within a level/environment.
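In UE4 terms, a sketch of the “start everything together, fade between stems” idea could look like this (AMusicManager and Stems are placeholder names; component setup is omitted):

```cpp
// All stems started on the same frame stay locked to each other,
// so switching pieces is just a volume crossfade.
void AMusicManager::StartStems()
{
    for (UAudioComponent* Stem : Stems)   // Stems: TArray<UAudioComponent*>
    {
        Stem->Play();                     // start every stem in sync
        Stem->SetVolumeMultiplier(0.f);   // but silent...
    }
    Stems[0]->SetVolumeMultiplier(1.f);   // ...except the first piece
}

void AMusicManager::CrossfadeTo(int32 NewIndex, int32 OldIndex, float FadeTime)
{
    Stems[OldIndex]->AdjustVolume(FadeTime, 0.f); // fade the old stem out
    Stems[NewIndex]->AdjustVolume(FadeTime, 1.f); // fade the new stem in
}
```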

If you are trying to make a music-based game, these aren’t really good options. Anyway, just thought I’d share some ideas.
By the way, how accurate is ‘Real World Time’?

Hello!
To be honest, I think that having a reliable tempo system is a bit more than ‘ideal’ - I think it’s pretty much ‘essential’ if you want to be doing anything interesting with music…! But given that that’s my thing, I suppose I probably would do…

Your ideas are good ones and would work for some situations, but I want to be able to create a system whereby I can specify transition times between different musical events (be these loops, stingers or transitions). So, yes, you could just wait until the end of the loop, but if your loops are a reasonable length (in order to avoid a sense of repetition) then the delay between the context changing and the music changing to reflect it could be relatively long. If we had a reliable tempo system, you could choose to apply the transition on either the bar line or even smaller beat sub-divisions, depending on how fast you wanted to respond and/or the musical material.

Cross-fading works very well for some applications, although you do need to bear in mind the cost of having multiple long pieces of music all playing simultaneously. Have you tried this yet? How does a fade time of 1 ms work?
My initial concern with a fade of this speed is that, given that things can only be processed/applied at frame (DeltaTime) resolution, and given that 1 ms is considerably shorter than a frame, it would produce the effect of the audio instantaneously jumping between volume levels, thus producing audible artefacts within the audio…

As far as I can tell the [Get Real World Time] and [Get Real Time Seconds] nodes are fairly accurate (although I have found some very small inconsistencies between the two); I haven’t done too much in the way of rigorous testing… The main problem with checking all of this is that you can only sample their outputs at frame time, which is itself inconsistent!

Hi teed,

As has been stated, delta time is not ideal, as it is affected by framerate, which can certainly throw off a metronome if you have a faster/slower PC than others. As a musician myself I understand the need for consistency and accuracy in a metronome system, but unfortunately I have yet to find a solution that is consistent and exact. Please let me know what you come up with, as I’d be very interested in seeing what you are able to do!

Hi teed,

I’m trying to develop something similar and have run into the same problems. I did find some interesting BPs in this PlugIn from :

but they don’t actually solve the triggering conundrum. I am interested to see how this progresses with future builds!

Hello,
I made this simple Blueprint for someone who needed a radio:

The thing is that setting the sound stops the old one. I hope it helps. Maybe with different ambient sound systems in the same place you could mix different timeline delays, vary them separately, and select the sound to set in each timeline so you know where it is in time…

I’m not able to test this yet, as I’m not at my workstation, so at the risk of looking silly… has anyone tried using Timeline objects?

Set up a Timeline object to run for 2 seconds and loop, and add an event track that fires off an execution pulse once every 0.5 seconds. That should give you quarter notes at 120 BPM, no…?

That at least decouples you from the tick pulse, so it’s no longer framerate-dependent, and using a timeline node over a simple delay should give you more stability in terms of slowdowns accumulating.
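For anyone working in C++ rather than Blueprint, a looping timer gives a similar decoupling from Tick (a sketch; AMetronome, OnBeat and BeatTimerHandle are placeholder names, and the 0.5 s interval assumes quarter notes at 120 BPM):

```cpp
void AMetronome::BeginPlay()
{
    Super::BeginPlay();
    // Fire OnBeat every 0.5 s (60 / 120 bpm), looping, independent of framerate.
    GetWorldTimerManager().SetTimer(BeatTimerHandle, this,
                                    &AMetronome::OnBeat, 0.5f, true);
}

void AMetronome::OnBeat()
{
    // Trigger the next musical event here.
}
```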

I’d think the biggest problem with audio sync wouldn’t even be the timing itself; it would be queueing. In audio software programming, when handling real-time audio, there’s always some sort of buffer (on a good card usually around 2-4 ms, on others substantially higher) because audio buffers fill up inconsistently owing to the other processes running at the same time. Even if you could “fire off” execution pulses at the right time, Unreal still has to queue all those audio playback commands and execute them, which won’t always have correct timing.

An audio sequencing system is usually looking ahead to the events that come in the future and prepping them - loading audio into memory, decoding, etc. - and then playing the audio at the predetermined appropriate time. I would suspect that most dynamic music systems in games tend to start all audio playing back in simultaneous loops and blend between them, as mentioned ITT, rather than trying to build the music in real time; if you need the game to do something in response to player input, you can’t look ahead to it.

Even with a good audio card and a dedicated spec like ASIO, it’s a challenge getting a machine to reliably fire off musical events in response to real-time user inputs. Trying to do so while ALSO doing everything else that the game needs to do in real time seems like it would be nightmare-tier, especially considering most gaming rigs are not built with audio performance in mind. If all the audio is already synchronized, then all that needs doing is some very quick multiplication for volume control, rather than all the other tasks associated with playing back an audio file.
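That look-ahead idea can be sketched as a scheduler that queues everything due within the next window, stamped with an exact start time, instead of firing events the moment they are due. Everything below - the clock, the queue function, the 100 ms window, the member names - is an assumption for illustration, not engine API:

```cpp
void AMusicScheduler::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    const double Now       = GetAudioClockSeconds(); // hypothetical sample-accurate clock
    const double Lookahead = 0.1;                    // schedule 100 ms ahead

    // Queue every beat that falls inside the look-ahead window, each with an
    // exact start time, so the audio layer (not the game tick) aligns playback.
    while (NextEventTime < Now + Lookahead)
    {
        QueueSoundAt(NextEventTime);   // hypothetical: starts playback at an exact time
        NextEventTime += SecondsPerBeat;
    }
}
```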

Just tried the timeline method and it seems to be working well so far! Haven’t had much chance to test it out properly but will let you know how I get on. Good shout, RhythmScript!

I think, when planning this type of subsystem, one needs to think about the grander overview of your game mechanics.
As RhythmScript described, it would be basically impossible without some external hardware just feeding you the stream concurrently with your game, since you have the same cycles to run but many things sharing your CPU time. But from a design point of view, how much delay you can tolerate, or how accurate you need to be, is tied to your game.
Say, for a Guitar Hero-style game, you run maybe a maximum of 5 minutes per session; with regard to possible input/buffer lag, as long as it doesn’t destroy the ‘feeling’ of being in sync, you should be safe.
I don’t know what threshold would be accurate enough for a musician, but for a 60 fps game you have a window of about 16 ms or less in which to adjust your event. At 300 bpm a beat comes around every 200 ms, and I don’t think many people, if any, can play at 300 bpm without help from electronics. You can then test whether you can tell the difference on a 60 fps game by intentionally creating delay - say, pad another 5 ms per beat and see when you actually notice that your beat is off. After all this is done you can say: on a 60 fps minimum-requirement system, I can run this long in a game with artificial lag and still manage to stay ‘in sync’.

Then you try to implement a system that can offset the lag (at least the hardware clock runs independently of your software) when you detect that it goes over a certain range. In the 300 bpm example it would go something like this (a rough code sketch follows the list):

  1. This beat is being sent to the buffer, and I know that if I do nothing, by the time the next beat event comes I’ll have around 2~3 frames of lag (i.e. 45 ms); assume that would be noticeable.
  2. The next timing event comes. I know it’s not time yet (since beats are 200 ms apart), but I start to work out the potential event timing that would correct the 45 ms lag.
  3. When you’re close to running into the correction range - say your timer has fewer than 5 frames until the next beat - try to predict using your average delta.
  4. Then you offset your beat by 2~3 frames’ worth of time, or by 1 frame and do the same for the next beat.
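In code the correction step might look like this (all names and numbers are illustrative; the point is just to spread the measured lag across beats rather than jumping in one go):

```cpp
#include <algorithm>

// Nudge the next beat earlier by at most one frame's worth per beat, so the
// accumulated lag (e.g. the ~45 ms above) is corrected gradually, not audibly.
void CorrectDrift(double& NextBeatTime, double ActualPlayTime,
                  double IdealPlayTime, double FrameTime)
{
    const double Lag        = ActualPlayTime - IdealPlayTime;
    const double Correction = std::min(std::max(Lag, 0.0), FrameTime);
    NextBeatTime -= Correction;
}
```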

It would not be an exact system, but it would be good enough for plenty of people.
Just so you know, some modern displays already have a wide range of input lag, and most people who play music games aren’t even complaining.
Note: I have never done this before; I’ve just analyzed the problem and suggested what I think would be a proper solution.


Hi Penguin,

The problem is running two audio events synced together. Input latency is not a worry, as input is controller driven and so latency will be incredibly low (no audio processing). When devs talk about controller latency, they are referring to input + visual output latency (including things like screen refresh rates, as you said). Frame rate is variable, so nothing should be based off it.
Audio software such as Pro Tools and Logic runs the graphics and audio engines separately, with the graphics engine lagging behind the audio and exclusively drawing to the screen. The audio engine handles all other functions (user input, plugin processing etc.) to keep sync.

From experience, 25ms+ is noticeable lag from input to audio out. And adding 5ms to a synced audio stream is very noticeable.

UE4 already bases its delta time off the framerate to avoid accumulating latency across the whole game.

The problem in this thread is just getting a clock, from within the engine, accurate enough to keep tempo with an internal music stream. Most likely this would require a separate, high-priority audio engine that receives commands from the main game engine. That, or just a real clock in UE4 (>.<)

P.S. Teed, I suggested 1 ms as a very fast transition that doesn’t cause pops due to low-frequency waveforms being cut in the middle of a cycle. Anything would be OK really.

[QUOTE=;116201]
Just tried the timeline method and it seems to be working well so far! Haven’t had much chance to test it out properly but will let you know how I get on. Good shout, RhythmScript!
[/QUOTE]
This post was written a long time ago, but how long was the latency?

Is anyone else still doing something like this? I’ve built a system that assembles music in real time from consecutive 2-second segments. The Timeline simply has a looping 2-second event track with a single event key. Latency seems to fluctuate from under 10 ms to over 70 ms at worst.

Has anyone managed to keep latency etc. under 20 ms?

Another active music developer here, looking to work with music inside UE4.

Have there been any developments on this front?

+1 for sample-accurate timing inside UE

A simple way to get sample-accurate timing in UE4 is to use FMOD… They have ‘OnTimelineBeat’ and ‘OnTimelineMarker’ Blueprint event nodes which fire precisely. I’ve tried and tested these and they work well. If you are comfortable with C++ there’s a whole bunch of useful stuff in their API… It might also be worth checking out Wwise.
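For the C++ route, the same beats are exposed through FMOD Studio’s event callbacks. A minimal, untested sketch based on the documented FMOD_STUDIO_EVENT_CALLBACK_TIMELINE_BEAT callback type (check the exact struct fields against your FMOD version):

```cpp
#include "fmod_studio.hpp"

// Called by FMOD on its own thread whenever the event's timeline hits a beat.
FMOD_RESULT F_CALLBACK OnTimelineBeat(FMOD_STUDIO_EVENT_CALLBACK_TYPE Type,
                                      FMOD_STUDIO_EVENTINSTANCE* /*Event*/,
                                      void* Parameters)
{
    if (Type == FMOD_STUDIO_EVENT_CALLBACK_TIMELINE_BEAT)
    {
        auto* Props =
            static_cast<FMOD_STUDIO_TIMELINE_BEAT_PROPERTIES*>(Parameters);
        // Props->bar, Props->beat and Props->tempo tell you exactly where you are.
    }
    return FMOD_OK;
}

// Registration on an already-created event instance:
//   instance->setCallback(OnTimelineBeat, FMOD_STUDIO_EVENT_CALLBACK_TIMELINE_BEAT);
```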