Best method for real-time buffering and editing of Live Link input?

I am trying to do real-time modification of Live Link Face (iOS app) blend shapes, synced with externally provided live audio input. This means I need to run the data through a buffer that delays broadcasting and allows changes to be made before the finished version reaches the model. The modifications would be playback speed and repetition, combined with some bookkeeping to keep the overall delay constant. The effect will be similar to a stuck tape feed glitch; if you are familiar with Max Headroom, that is exactly what I am trying to do.
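To keep the latency fixed, my rough plan is to timestamp frames on arrival and steer a trailing read cursor: repeating or slowing a span grows the delay, so the cursor has to run faster afterwards until the debt is repaid. A minimal sketch of that bookkeeping (all names and the delay value are placeholders of mine, nothing from Live Link itself):

```python
# Sketch of constant-delay playback bookkeeping (hypothetical names).
# Frames are timestamped on arrival; a read cursor trails the newest
# frame by TARGET_DELAY seconds. Slowing playback (speed < 1) or
# repeating a span grows the actual delay, so the cursor must later
# run at speed > 1 until the delay returns to the target.

TARGET_DELAY = 0.5  # seconds of headroom for edits (assumed value)

def advance_cursor(read_time, dt, speed):
    """Advance the read cursor by dt of wall time at the given speed."""
    return read_time + dt * speed

def catch_up_speed(newest_time, read_time):
    """Pick a playback speed that steers the delay back toward TARGET_DELAY."""
    delay = newest_time - read_time
    error = delay - TARGET_DELAY
    # Proportional correction: play faster when behind, slower when ahead.
    return max(0.0, 1.0 + error * 2.0)
```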

I’m seeking guidance on where this process should take place for the best results, and on any resources I haven’t considered that might help me achieve my goal. Please correct any incorrect assumptions; I will greatly appreciate you saving me from another dead-end rabbit hole!

For the morph targets, I am not sure how viable these options are, but I am currently considering one of the following:

  • Inside Blueprints and/or a custom plugin, a circular buffer of frames that can be collected, edited, and then fed into the mesh (see the buffer sketch after this list).

  • A Python script that handles the buffering and modification before passing the data onward to the engine/model.

  • Creating an external ‘man-in-the-middle’ application that receives the iOS app’s input, buffers the data, and modifies it before passing it onward to Unreal.
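Whichever route I take, the buffer itself seems simple. Here is a minimal Python sketch of the circular frame buffer I have in mind (class and method names are hypothetical); a deque with maxlen gives the circular behaviour for free, and the linear scan in sample() could be swapped for a binary search if the buffer grows large:

```python
from collections import deque

class FrameBuffer:
    """Fixed-capacity circular buffer of (timestamp, blendshape weights) frames."""

    def __init__(self, capacity):
        # Oldest frames drop off automatically once capacity is reached.
        self.frames = deque(maxlen=capacity)

    def push(self, timestamp, weights):
        """Store an incoming frame, evicting the oldest if the buffer is full."""
        self.frames.append((timestamp, weights))

    def sample(self, t):
        """Return the most recent frame at or before time t, or None if unavailable."""
        candidate = None
        for ts, weights in self.frames:  # frames are in arrival order
            if ts <= t:
                candidate = (ts, weights)
            else:
                break
        return candidate
```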

Blueprint method concerns: creating an accessible circular buffer within Unreal, and the potential for slowdown. There is a chance that baking time codes into the audio handling could improve the results here.

Python script concerns: I’m not sure I’m thinking about this correctly, or whether it is even possible with the Python implementation available in the engine.

External application concerns: Can I ‘spoof’ the rebroadcast? Is it all handled over UDP? Full control might make this easier and would let me use more external resources to achieve the same goal. It might also allow me to dedicate a secondary machine to the task.
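For reference on the third option: if the stream really is plain UDP (which I believe is Live Link Face’s default transport), a pass-through relay seems small enough to prototype. This is only a sketch with assumed addresses and port; forwarding the raw datagrams untouched should look like the original broadcast to Unreal:

```python
import socket

# Minimal UDP man-in-the-middle relay (a sketch; addresses/port are assumptions).
# Point the Live Link Face app at this machine instead of Unreal, and forward
# each datagram to the Unreal machine after any buffering/edits.

LISTEN_ADDR = ("0.0.0.0", 11111)       # where the iOS app sends (assumed port)
UNREAL_ADDR = ("192.168.1.50", 11111)  # machine running Unreal (example address)

def relay():
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(LISTEN_ADDR)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        packet, _ = rx.recvfrom(2048)
        # Buffer/modify `packet` here before forwarding; passing the bytes
        # through untouched means Unreal sees the stream as usual.
        tx.sendto(packet, UNREAL_ADDR)

if __name__ == "__main__":
    relay()
```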

I think your best option here is to route the data through a Live Link virtual subject and operate on the buffer of frames there. You can do this in Blueprints or C++. A warning on Blueprints: operating on large arrays of frames in BP will be heavy on performance.

The data would flow: External Subject > Live Link Virtual Subject > Apply in AnimGraph