Experiments for understanding Quartz, timing and BP delegates

Hey everyone!

I have read this thread and, curious to get a better understanding of Quartz, I decided to try something similar.

In order to understand what is going on and to learn about the subsystem, I created a little experiment: a metronome inside a Metasounds patch, plus a Quartz Clock triggering a click in the same patch, effectively making a second metronome.

I then measure when the click-triggering Quartz delegate is called (by reading real time in seconds and predicting when each beat is supposed to happen based on the BPM).
I repeat this test while varying the application framerate and the audio buffer size in the project settings.
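For reference, the offset measurement boils down to the following, sketched here in plain C++ rather than BP; the function names are mine, not engine API:

```cpp
// Sketch of the measurement logic (standard C++, not UE-specific).
#include <cassert>
#include <cmath>

// Expected time (in seconds) of beat index `BeatIndex` at a given BPM,
// counting from the first beat at t = 0.
double ExpectedBeatTime(int BeatIndex, double Bpm)
{
    return BeatIndex * 60.0 / Bpm;
}

// Offset of a measured click vs. the ideal beat grid, in milliseconds.
// Negative values mean the click arrived early.
double OffsetMs(double MeasuredSeconds, int BeatIndex, double Bpm)
{
    return (MeasuredSeconds - ExpectedBeatTime(BeatIndex, Bpm)) * 1000.0;
}
```

For example, at 120 BPM beat 4 is expected at t = 2.0 s, so a click measured at 1.98333 s is roughly one 60 fps frame (16.67 ms) early.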

Screenshot of the blueprint (sorry for the messy code; I tried to make everything fit in one screenshot)

Screenshot of the Metasounds patch (top sine + AD is click from blueprint, bottom sine + AD is metronome through Trigger Repeat)

Metronome starts when the first click is triggered by the BP (from the first Quartz beat).

For this preliminary study, I record 120 beats and compare the Metasounds metronome with what Quartz produces, both in milliseconds (from the commented part of the BP) and in sound (via a recording made in the Metasounds patch).
Recordings for this test are a minute long, since 120 beats at 120 BPM is exactly one minute.

In this case, the Quartz metronome is not as precise as the Metasounds one. I think it makes sense since I am triggering in Metasounds from BP instead of using PlayQuantized. These tests are a measurement of local latencies before I move to a second application that will send OSC messages through UDP, either locally or on a LAN.

What I was not expecting is that the Quartz metronome often plays early, and by a good margin, apparently.
I am not sure if my method is lacking or if that’s how it is supposed to work.

I would imagine this is because it schedules the function call to the closest possible moment, which is actually the frame before the beat, but I don’t understand why we are sometimes an entire frame deltatime early.
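To illustrate my guess, here is a toy model (plain C++, my assumption about the behavior rather than engine source) of what scheduling a call onto game-thread frame boundaries would do:

```cpp
// Toy model: a BP delegate can only fire on a game-thread tick, so the
// trigger lands on a frame boundary near the musical beat time. Nearest-
// boundary scheduling can be up to half a frame off in either direction;
// "never late" scheduling (last boundary before the beat) can be up to a
// full frame deltatime early.
#include <cassert>
#include <cmath>

// Nearest frame boundary to `BeatTime`, with frames of length `FrameDt`.
double NearestFrameBoundary(double BeatTime, double FrameDt)
{
    return std::round(BeatTime / FrameDt) * FrameDt;
}

// Last frame boundary at or before `BeatTime` ("never late" scheduling).
double PreviousFrameBoundary(double BeatTime, double FrameDt)
{
    return std::floor(BeatTime / FrameDt) * FrameDt;
}
```

At 60 fps (FrameDt ≈ 16.67 ms), a beat at t = 0.51 s snaps to ≈ 0.5167 s with nearest-boundary scheduling (≈ 6.7 ms late) or to 0.5 s with the never-late variant (10 ms early); in the worst case the never-late variant fires a full frame early, which would match what I am seeing.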

Here is an (extreme) example of the difference in metronomes. The top click (higher pitch) is the Metasounds metronome, sample rate precise.
The bottom one (lower pitch) is the Quartz triggered Metasounds click, buffer precise(?).
This test is “extreme” because the callback buffer is set to 4096 samples, but the same effect appears, at different magnitudes, at all buffer sizes.
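A back-of-envelope check of why 4096 is extreme: an event rendered into the audio stream can only start on a buffer boundary, so (as I understand it) the worst-case quantization error is one buffer’s duration:

```cpp
// Duration of one audio callback buffer, in milliseconds.
#include <cassert>
#include <cmath>

double BufferMs(int BufferSamples, double SampleRate)
{
    return 1000.0 * BufferSamples / SampleRate;
}
```

At 48 kHz, a 4096-sample buffer lasts ≈ 85.3 ms, already a musically audible amount, while a 64-sample buffer lasts only ≈ 1.33 ms.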

Is this because of how scheduling works between Quartz and BP delegates?

When locking the framerate, I notice that whenever the latency differs from the usual <1 ms it is almost always the frame deltatime (or negative deltatime) of the FPS I set. At 60 FPS, for example, it is almost always 16.67 ms, -16.67 ms, or very close to that.

Example of some of the measurements:

I have repeated this test in a variety of different settings:

Framerate

  • locked at 60 fps (with nothing but the BP in the level)
  • unlocked (running from 250 to 300fps)

Buffer sizes

  • 64
  • 100
  • 400
  • 1024
  • 4096

Other parameters I kept stable: the sample rate (48,000 Hz) and the rendering quality (Low preset). Of course, I also made sure that the power state of the testing computer stayed the same for the entire duration of the tests (no throttling measured).
I made sure that UE always ran at the same resolution when playing and that the window always had focus.

I had some fun getting these measurements and tracing them on Excel. Here are some graphs, some patterns are very consistent.

Results with 64 samples per buffer at 60 FPS. If I kept recording, this pattern would eventually repeat, I would say consistently, but I have no proof yet.

Same but with 1024 buffer size

Same but with 4096 buffer size (note that the vertical scale here is much wider)

I will post more measurement images if needed. I did not post any unlocked-framerate graphs because an unlocked framerate is not feasible for the use case I have in mind; the same goes for varying the sample rate. Of course, results were more volatile (and, at low buffer sizes, latencies were lower compared to being locked at 60 FPS), but why that happens has been explained perfectly by Max in the thread I quoted at the beginning of this post.

Here are some of the measurements together:

What do you think? Am I doing something wrong?
Would I get any benefit moving to C++?
Anything else I could do to tweak even more? Would you like to see more tests? Longer tests?

Notes:

  • I tried buffer sizes 100 and 400 because they are divisors of 48,000; I thought it could be interesting to see whether there is a difference when you get a perfect division over a 60 fps span. Of course 60 fps is not precisely 60 frames every second, which likely invalidates any possible benefit.

  • I am not planning to use this + OSC for time-critical musical functions; I am mostly doing this to optimize what I do and to learn for the future. Science is fun, even though I feel like I am often not good at it.

  • All tests were executed on a Lenovo Legion 5 with an AMD 5600H and an Nvidia 3060, on Windows 10.0.19044, using Unreal Engine 5.0.3 on a fresh project. Audio interface used is a Focusrite Scarlett 8i6.
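On the divisor note above, here is the quick sanity check (plain C++, assuming an idealized exactly-60-fps frame) of how the buffer sizes line up with one frame’s worth of audio at 48 kHz:

```cpp
// Check which buffer sizes divide evenly into one ideal frame's samples.
#include <cassert>

// Audio samples elapsed per video frame at a given sample rate and fps.
int SamplesPerFrame(int SampleRate, int Fps)
{
    return SampleRate / Fps; // 48000 / 60 = 800
}

// True if a whole number of buffers fits in one frame's worth of samples.
bool BufferAlignsWithFrame(int BufferSamples, int SampleRate, int Fps)
{
    return SamplesPerFrame(SampleRate, Fps) % BufferSamples == 0;
}
```

At exactly 60 fps, a frame spans 800 samples, which divides evenly into 100- and 400-sample buffers but not 64 (or 4096); in practice the frame time jitters, which is why the note above doubts any real benefit.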

P.s. I would like to say a big thank you to all UE devs for all the great work you are doing with Quartz, Metasounds and Audio in general. I think I have watched all of the YouTube content there is featuring Dan and Aaron. Some presentations I have even watched more than once!
