Default Windows audio is just not good enough for many realtime audio tasks.
Processing sound takes time. From the microphone to the DSP code in Unreal there is already some delay (on default PC hardware it can be quite noticeable). Processing inside Unreal adds another delay because of the FFT window we use, and then the soundcard adds yet another delay when it outputs the analog signal. That's not ideal for realtime voice processing. Instead, look into discrete hardware specifically designed for this kind of task; it can guarantee low-latency I/O. One possible optimization is to go with a small window and implement both ASIO and kernel streaming support in Unreal, so users get low-latency audio. ASIO does mean extra hardware on the customer's side, but that shouldn't be a huge problem.
The rest of the users can simply turn the volume down at home so they can't hear the delayed sound. Nobody likes that echo; it causes actual stuttering in speech for many people (like me).
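To get a feel for how those delays stack up, here's a rough back-of-envelope sketch. The buffer and window sizes below are just assumptions for illustration, not engine defaults:

```cpp
// Rough back-of-envelope latency estimate for the mic -> DSP -> output chain.
// All buffer sizes here are assumptions for illustration, not engine defaults.
#include <cstdio>

int main()
{
    const float SampleRate = 48000.0f;      // assumed device sample rate
    const int CaptureBufferFrames = 1024;   // assumed mic/driver capture buffer
    const int FFTWindowFrames = 2048;       // assumed analysis window size
    const int OutputBufferFrames = 1024;    // assumed render/output buffer

    auto ToMs = [&](int Frames) { return 1000.0f * Frames / SampleRate; };

    const float CaptureMs = ToMs(CaptureBufferFrames);
    const float FFTMs = ToMs(FFTWindowFrames);
    const float OutputMs = ToMs(OutputBufferFrames);
    const float TotalMs = CaptureMs + FFTMs + OutputMs;

    std::printf("capture %.1f ms + fft window %.1f ms + output %.1f ms = ~%.1f ms round trip\n",
                CaptureMs, FFTMs, OutputMs, TotalMs);

    // With these numbers the round trip is already ~85 ms before any driver or
    // OS mixing overhead, which is why a smaller window plus ASIO or kernel
    // streaming matters for realtime voice.
    return 0;
}
```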
The pitch shift DSP is not updated to 4.26 just yet. Will keep it in mind, thanks for asking!
@dan.reynolds 's granular synth setup was a really great demo!
You could try injecting the audio capture component's output straight into the granular synth's buffer. It'll probably click/pop, but you can apply some anti-aliasing/smoothing to deal with that. By default these options are available to C++ coders only.
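If you want to experiment with that in C++, here's a minimal hypothetical sketch of copying a captured block into a grain buffer with a short fade at the edges to tame the clicks. The GrainBuffer type and the callback wiring are placeholders, not actual engine API:

```cpp
// Hypothetical sketch: push captured mic samples into a grain buffer with a
// short fade-in/out so the grain edges don't click. GrainBuffer and the
// capture callback are made-up placeholders, not engine API.
#include <vector>
#include <algorithm>
#include <cstddef>

struct GrainBuffer
{
    std::vector<float> Samples;
};

// Called with a block of mono float samples from the (hypothetical) capture
// callback; copies them into the grain buffer with a linear fade at both ends
// to remove the discontinuity that causes pops.
void AppendCapturedBlock(GrainBuffer& Buffer, const float* InAudio, size_t NumFrames, size_t FadeFrames = 64)
{
    FadeFrames = std::min(FadeFrames, NumFrames / 2);

    const size_t Start = Buffer.Samples.size();
    Buffer.Samples.resize(Start + NumFrames);

    for (size_t i = 0; i < NumFrames; ++i)
    {
        float Gain = 1.0f;
        if (i < FadeFrames)
        {
            Gain = static_cast<float>(i) / FadeFrames;                  // fade in
        }
        else if (i >= NumFrames - FadeFrames)
        {
            Gain = static_cast<float>(NumFrames - 1 - i) / FadeFrames;  // fade out
        }
        Buffer.Samples[Start + i] = InAudio[i] * Gain;
    }
}
```

A short linear fade of a few dozen frames is usually enough to kill the pop without audibly dulling the grain.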