Yes, it is (possible)!
The way I understand it, the only way to get a solid rhythm is to detach its source from the game frame rate.
Time Synth is that frame-independent click generator: it runs its own sample-accurate clock, on its own thread,
where it's busy keeping DSP time and doesn't care about game-time issues.
Making things happen in sync with that click, however, takes place on the game thread (Blueprints in my case), and that's where timing may shift.
So problems may occur, if I got Dan's point right, when scheduling something to the next 16th-note event, for example:
we might find ourselves with very little time to act, and a hiccup in the game could delay our call just enough to make it arrive late
and miss the Time Synth train, which leaves on time no matter what.
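
To picture what I mean, here's a minimal standalone sketch (not the actual TimeSynth code or API; the sample rate, BPM, and block size are just placeholder numbers): a clock counts samples on its own thread, and a request coming from the "game thread" that arrives even slightly after a 16th-note boundary slips to the following one.

```cpp
// Hypothetical sketch, NOT the UE TimeSynth implementation: a sample-accurate
// 16th-note clock on its own thread, with play requests quantized to the next
// boundary. A game-thread hiccup delays the request past boundary #1, so it
// fires on boundary #2 instead.
#include <atomic>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <thread>

int main() {
    constexpr double kSampleRate = 48000.0;
    constexpr double kBPM = 120.0;
    // One 16th note = (60 / BPM) / 4 seconds, expressed in samples (6000 here).
    constexpr int64_t kSamplesPerSixteenth =
        static_cast<int64_t>(kSampleRate * (60.0 / kBPM) / 4.0);

    std::atomic<int64_t> pendingRequest{-1};  // sample time of a play request
    std::atomic<bool> running{true};

    // "DSP thread": advances sample time block by block, like an audio
    // callback would, and services requests only on 16th-note boundaries.
    std::thread dsp([&] {
        constexpr int64_t kBlockSize = 256;
        int64_t sampleTime = 0;
        int64_t nextBoundary = kSamplesPerSixteenth;
        while (running.load()) {
            sampleTime += kBlockSize;
            if (sampleTime >= nextBoundary) {
                const int64_t req = pendingRequest.exchange(-1);
                if (req >= 0) {
                    std::printf("request made at sample %lld fires on boundary %lld\n",
                                (long long)req, (long long)nextBoundary);
                }
                nextBoundary += kSamplesPerSixteenth;
            }
            // Pace the loop in real time, one audio block per iteration.
            std::this_thread::sleep_for(std::chrono::microseconds(
                static_cast<int64_t>(1e6 * kBlockSize / kSampleRate)));
        }
    });

    // "Game thread": a 180 ms hitch means the request (aimed at the first
    // boundary, ~125 ms) arrives too late and slips to the second (~250 ms).
    std::this_thread::sleep_for(std::chrono::milliseconds(180));
    pendingRequest.store(180 * 48);  // ~sample 8640 at 48 kHz

    std::this_thread::sleep_for(std::chrono::milliseconds(400));
    running.store(false);
    dsp.join();
}
```

The point being: the clock itself never wavers, so the only defence on the game side is to submit requests early enough before the boundary you're aiming at.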
Ironically, I came to the forum today to ask a question about that (very useful indeed!) livestream:
in his introduction, Dan expresses his longing for a future where multiple Time Synths can be linked and synced under a master clock,
and I was wondering if there has been any news in that area since.
Because, you see (please correct me if I'm wrong), back then Dan referred to getting audio onset markers and data as "pie in the sky",
and we now have that with Audio Synesthesia, so why not dream about a multi-actor time machine?
A project I'm involved in could greatly benefit from something like that, so any thoughts would be appreciated.
Being able to spatialize several sound sources and have them triggered in quantization would be a small dream come true.
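
For what it's worth, here's the kind of thing I'm dreaming about (again a hypothetical sketch; MasterClock and Source are names I made up, nothing of the sort exists in the engine as far as I know): several sources quantize against one shared sample counter, so no matter which actor requested a clip, all armed triggers land on the same grid.

```cpp
// Hypothetical "master clock" sketch: one sample-accurate counter drives
// every source, so triggers from different actors stay quantized together.
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

struct MasterClock {
    double sampleRate = 48000.0;
    double bpm = 120.0;
    int64_t sampleTime = 0;
    int64_t SamplesPerSixteenth() const {
        return static_cast<int64_t>(sampleRate * (60.0 / bpm) / 4.0);
    }
    // Advance by one audio block; report the boundary if one was crossed.
    bool Advance(int64_t blockSize, int64_t* boundaryOut) {
        const int64_t grid = SamplesPerSixteenth();
        const int64_t before = sampleTime / grid;
        sampleTime += blockSize;
        const int64_t after = sampleTime / grid;
        if (after > before) { *boundaryOut = after * grid; return true; }
        return false;
    }
};

struct Source {
    std::string name;    // stands in for a spatialized actor in the level
    bool armed = false;  // a clip was requested; fire on the next boundary
};

int main() {
    MasterClock clock;
    std::vector<Source> sources = {{"DrumActor"}, {"BassActor"}, {"PadActor"}};

    // Requests can arrive at arbitrary times; each source just arms itself.
    sources[0].armed = true;
    sources[2].armed = true;

    // Simulated audio loop: everyone quantizes against the SAME clock,
    // so both armed sources trigger on the same shared boundary.
    for (int block = 0; block < 64; ++block) {
        int64_t boundary = 0;
        if (clock.Advance(256, &boundary)) {
            for (Source& s : sources) {
                if (s.armed) {
                    std::printf("%s triggers at sample %lld\n",
                                s.name.c_str(), (long long)boundary);
                    s.armed = false;
                }
            }
        }
    }
}
```

Here DrumActor and PadActor fire on the exact same sample even though they were armed independently, which is exactly what I'd want from spatialized sources triggered in quantization.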
Either way, big thanks to the dedicated audio team, keep up the great work!