Hi Everyone,
I’m an audio programming student currently working on a DJ game in Unreal. I used Csound for the beat-matching part (changing playback speed without changing pitch or formant), but I ran into tons of problems with Csound. Here’s the thing: if you’re also an audio programmer, you’ll know that beat matching is all about time scaling (using either a phase vocoder or a granular process). I have some C++/DLL libraries with these functions, and I know how to code this kind of DSP outside of Unreal, but what should I do within Unreal? How can I get access to the audio signals/buffers, do the FFT, phase-vocode and stretch, resynthesize, and then output back to the engine? Are there variables or functions in the Unreal sound API that can do that, or do I have to get into the actual audio engine code of Unreal?
Feel free to correct me since I’m a newbie; it would be awesome if someone on the Unreal audio team who built the engine could answer.
Thanks!