I think an alternative solution would be to still use ARKit as the camera and simply record the camera movement (if possible) with Sequencer, along with all the other features (zoom, focus), and then use that data separately, so that I can still work with “realtime camera data” that was recorded earlier.
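To make the idea concrete, here is a minimal, purely hypothetical sketch (plain Python, not the Unreal or ARKit API) of what “record now, replay later” could look like: camera samples (position, rotation, zoom, focus) are stored with timestamps during the take, then interpolated at playback time. All names here (`CameraSample`, `sample_track`) are made up for illustration.

```python
import bisect
from dataclasses import dataclass

@dataclass
class CameraSample:
    time: float      # seconds since recording started
    position: tuple  # (x, y, z) in world units
    rotation: tuple  # (pitch, yaw, roll) in degrees
    zoom: float      # e.g. focal length in mm
    focus: float     # e.g. focus distance in cm

def lerp(a, b, t):
    return a + (b - a) * t

def sample_track(track, t):
    """Linearly interpolate the recorded samples at playback time t."""
    times = [s.time for s in track]
    i = bisect.bisect_right(times, t)
    if i == 0:
        return track[0]          # before the first sample: clamp
    if i >= len(track):
        return track[-1]         # after the last sample: clamp
    a, b = track[i - 1], track[i]
    f = (t - a.time) / (b.time - a.time)
    return CameraSample(
        time=t,
        position=tuple(lerp(pa, pb, f) for pa, pb in zip(a.position, b.position)),
        rotation=tuple(lerp(ra, rb, f) for ra, rb in zip(a.rotation, b.rotation)),
        zoom=lerp(a.zoom, b.zoom, f),
        focus=lerp(a.focus, b.focus, f),
    )

# Example: a two-sample "take", resampled halfway through.
take = [
    CameraSample(0.0, (0.0, 0.0, 0.0), (0.0, 0.0, 0.0), 24.0, 100.0),
    CameraSample(1.0, (2.0, 0.0, 0.0), (0.0, 90.0, 0.0), 50.0, 200.0),
]
mid = sample_track(take, 0.5)
```

In Unreal itself the recording side would presumably be handled by Take Recorder / Sequencer rather than hand-rolled code; the sketch only shows the data shape involved.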
I briefly looked at your project, but I still have to test it fully. By the way, I was curious about your green-screen setup material, but I haven’t found it. Maybe I’m missing something?