[UE5] Quartz, dealing with a variable BPM?

Overview of Quartz in Unreal Engine | Unreal Engine 5.2 Documentation
"Quartz is a system that works around the issues of variable latency and game-thread timing incompatibility by providing a way to accurately play any sound sample."

Basically, you create a Quartz clock, configure its BPM, start the clock, then play your audio component “quantized” so playback doesn’t start out of sync with whatever event you want the audio aligned to.

For synchronization purposes it does not seem to matter whether the BPM is set to 40 or 400. My current prototype requires me to retrieve the BPM automatically at runtime, as it can change at any time during the audio. There seems to be no tempo detection in Quartz. Do I need to implement tempo detection for Quartz myself, or does the BPM have to be defined as a constant up front?

If I write a tempo detector, the tempo will always be an estimate and will shift from time to time, possibly defeating the purpose of Quartz. I am just looking for the right way to detect the BPM at runtime and get Quartz working with it. Essentially, I need a way to execute a method that “taps along” with the BPM.

I already use the Audio Analysis plugin to detect a beat over a set of frequencies, but it does not return a “tap on the beat” event or the current BPM. That is what makes a beat detector different from a tempo detector. So far I have not found an Unreal-ready tempo detector.

Quartz largely deals with sample timing.

If the tempo of a musical performance is constant, then there is a constant relationship between the number of samples played and the song position in bars:beats. This is probably the “constant” you’re talking about.
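To make that relationship concrete, here is a minimal Python sketch (an illustration, not Unreal code; the sample rate, BPM, and time signature are assumed values) that maps a sample count to a bar:beat position:

```python
def samples_to_bar_beat(samples_played, sample_rate=48000, bpm=120.0, beats_per_bar=4):
    """Convert a sample count into a 1-based (bar, beat) position at constant tempo."""
    samples_per_beat = sample_rate * 60.0 / bpm   # 24000 samples per beat at 120 BPM / 48 kHz
    total_beats = samples_played / samples_per_beat
    bar = int(total_beats // beats_per_bar) + 1
    beat = total_beats % beats_per_bar + 1        # fractional beat within the bar
    return bar, beat
```

The moment the tempo changes, `samples_per_beat` changes too, which is exactly why a single up-front BPM stops being enough.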

Tempo timing is actually different, and is a pretty complicated feature in tools that do full musical scoring (Nuendo, etc.). Trying to follow a fixed sample rate, AND a fixed film timeline, AND fit a musical score to the right beats, all at once, is surprisingly non-trivial.

What are you trying to do, exactly? Where does the music come from? If the music comes from content you create, then it’s absolutely simplest to pre-process all the music to mark up the beat locations in separate metadata of some sort. This will then let you map “number of samples played from song” to “song bar:beat position.”
Because manual markup is tedious, you may also want to develop some offline tempo/beat detection tool, but typically those tools will give you a first rough sketch, and you will want some way to fine-tune the resulting output, removing false detections and adding missed ones.
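As a sketch of what that lookup might look like at playback time (the metadata format here is hypothetical: just a sorted list of beat positions in samples, produced offline), mapping “samples played” to a beat position is a binary search:

```python
import bisect

def song_position(beat_samples, samples_played):
    """Return (index of the last beat at or before samples_played,
    fraction of the way toward the next beat)."""
    i = bisect.bisect_right(beat_samples, samples_played) - 1
    if i < 0:
        return -1, 0.0  # before the first marked beat
    if i + 1 >= len(beat_samples):
        return i, 0.0   # at or past the last marked beat
    span = beat_samples[i + 1] - beat_samples[i]
    return i, (samples_played - beat_samples[i]) / span
```

Because the beat positions are explicit data rather than a formula, tempo changes in the material cost nothing extra: the intervals between entries simply vary.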

If the musical program comes from the user (user plays arbitrary sound) then the problem you’re trying to solve is much harder. There’s a variety of methods to do beat and tempo detection, which all vary along “quality” and “complexity” and “runtime performance” and “robustness to different kinds of material.” You’ll have to do an engineering evaluation: Try a bunch of them, and come up with one that works well enough for your particular application. (I don’t think any of them come integrated into Unreal, so you’ll also need to actually integrate the one you like once you’ve decided on it.)


Yep :confused: I have some research papers suggesting how to implement it, but I could easily waste a week on it as I am no mathematician. It’s a multi-step process that analyzes which points in a sample are most likely to be the “tap”.

I tried registering the first beat by energy over a frequency range and then conditionally waiting for the next. I did that with aggressive filtering, taking an average of the past 10 detected beats, which led to missed beats and a slow adaptation time. It wasn’t usable at all. The papers describe a more complex but more reliable method.
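One small change that might help over a plain average of the last beats: estimate the tempo from the median inter-onset interval, which is robust to the occasional missed or doubled beat. A minimal sketch (the window size is an arbitrary choice, and beat timestamps are assumed to be in seconds):

```python
from statistics import median

def estimate_bpm(beat_times, window=16):
    """Estimate BPM from recent beat timestamps (seconds), robust to outlier intervals."""
    recent = beat_times[-window:]
    if len(recent) < 2:
        return None
    intervals = [b - a for a, b in zip(recent, recent[1:])]
    return 60.0 / median(intervals)
```

With a mean, one missed beat doubles an interval and drags the estimate down; with a median, a single bad interval is simply ignored.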

I am surprised that some games, like Crypt of the NecroDancer, handle this absolutely perfectly. Just like that game, I am dealing with runtime audio (someone records or imports audio), and this audio is played and altered during gameplay (speed, etc.). For now that means MP3, WAV, etc., but I might extend it to support web streaming as well. It can be jazz, ambience, trance, anything really.

:frowning: I was hoping I had just missed it while scanning the code, but that leaves me no option other than to experiment with it.

If you configure the sound in question ahead of time, you can always run a pre-process, rather than doing it in real time. This allows you to do a better job, because you can look ahead!

Even when the player chooses a file “while the game is running,” as long as you can read/decode the entire file at once, you can pre-process it to generate the tempo track.

Simple beat detection can be done with a narrow high-Q filter that detects energy in a particular frequency band. The main challenge there is that the simplest filters (“bi-quads”) aren’t particularly stable at low frequencies. One thing that helps is down-sampling 16:1 through a simple averaging function, which raises the relative frequency of the bi-quad by that same factor. Also, use double-precision floating point!

Fancier methods use the fast Fourier transform, although the FFT is necessarily somewhat poor at time resolution: the more precise you are in frequency, the less resolution you get in time.
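A common FFT-based onset envelope is “spectral flux”: frame the signal, take magnitude spectra, and sum the positive frame-to-frame differences; peaks in the flux are onset candidates. A minimal sketch (window and hop sizes are arbitrary choices and set the frequency/time trade-off mentioned above; the naive O(n²) DFT is for clarity, a real implementation would use something like `numpy.fft`):

```python
import cmath
import math

def dft_mag(frame):
    """Magnitudes of the first n/2 DFT bins (naive DFT, for illustration)."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]

def spectral_flux(signal, frame=256, hop=128):
    """Onset envelope: summed positive magnitude change between frames."""
    flux, prev = [], None
    for start in range(0, len(signal) - frame + 1, hop):
        mag = dft_mag(signal[start:start + frame])
        if prev is not None:
            flux.append(sum(max(m - p, 0.0) for m, p in zip(mag, prev)))
        prev = mag
    return flux
```

Summing over many bins also makes this less sensitive to the exact frequency resolution than a single narrow filter would be.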

Even fancier methods use neural networks.

Once you generate potential downbeats, “hypothesizing” about where the beats go, and checking which detections fall “on” versus “off” the grid, can help you clean up the data. The real question is what level of complexity you’re prepared to deal with.
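The grid-hypothesis step above can be sketched as a brute-force search: score candidate tempo/phase grids by how many detected onsets land near a grid line, and keep the best. Everything here is illustrative (integer BPM steps, a fixed tolerance in seconds, and only a few phase hypotheses):

```python
def best_grid(onsets, bpm_range=(60, 180), tol=0.05):
    """Find the (bpm, phase anchor) grid that the most onsets fall onto."""
    best = (None, None, -1)
    for bpm in range(bpm_range[0], bpm_range[1] + 1):
        period = 60.0 / bpm
        for anchor in onsets[:4]:          # try a few phase hypotheses
            hits = sum(
                1 for t in onsets
                # signed distance from t to the nearest grid line, folded into [-p/2, p/2)
                if abs(((t - anchor) % period + period / 2) % period - period / 2) < tol
            )
            if hits > best[2]:
                best = (bpm, anchor, hits)
    return best  # (bpm, phase anchor, number of on-grid onsets)
```

Onsets that never land on any strong grid are likely false detections and can be discarded; gaps in an otherwise strong grid point at missed beats.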

I like this, this could be done during the import process before the gameplay starts.

Thanks for this info :slight_smile: . I do have beat detection and onset values, and I filter out beats detected right after another, e.g. a drone / hum sound. Combined, it still results in bad precision: a tap too early or too late, or too few or too many taps at times. I honestly had no idea it was this complex. I’ll also share the paper I mentioned earlier:
taslp2014-tempo-gtzan.pdf (1.7 MB)

What keeps me going is that Crypt of the NecroDancer is proof it can be done as an automated process. I am going to mark your response as the solution, and I will run some tests next week to see what I can do and what challenges lie ahead.

For a human it’s surprisingly easy to tap along with music and figure out the beats and tempo. I haven’t seen an AI solution yet, but as long as the hardware requirements are low (probably memory here), that could be a fine solution. AI can do what we can do.
