Quick thought: live video feeds as interactive control input in Unreal

Hi,

This is a slightly unconventional thought, so I’ll keep it short.

I’ve been talking with engineering students and practitioners who already use Unreal for simulation, visualization, and virtual production. One thing that keeps surfacing is how naturally people engage when interaction is grounded in real-time visual feedback they can directly manipulate.

So the simple question I’m poking at is this:

What happens if a stable live video feed is treated as a first-class interactive input inside Unreal — not primarily for media or visualization, but as part of an explicit control loop?

No autonomy, no robotics claims, no big theory.
Just: camera → GPU → Unreal → human interaction → response.

Unreal already ingests live video extremely well; broadcast and virtual-production workflows have largely solved latency, jitter, and sync. What feels underexplored is deliberately closing the loop and asking how far Unreal can be pushed as a perception-to-action substrate rather than just a renderer.
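To make that loop concrete, here is roughly the starting point I have in mind: a single Actor that routes a live feed into the material the user is looking at, with the response side left open in Tick. This is only a minimal sketch under assumptions I've made up for illustration (the FeedUrl property, a base material exposing a "LiveFeed" texture parameter, and a project Build.cs that lists the MediaAssets module); it's the shape of the experiment, not a proposed implementation.

// LiveFeedControlActor.h
// Sketch only: asset names, the "LiveFeed" material parameter, and the response
// logic are illustrative assumptions, not an established workflow.

#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "MediaPlayer.h"
#include "MediaTexture.h"
#include "Components/StaticMeshComponent.h"
#include "Materials/MaterialInstanceDynamic.h"
#include "LiveFeedControlActor.generated.h"

UCLASS()
class ALiveFeedControlActor : public AActor
{
    GENERATED_BODY()

public:
    ALiveFeedControlActor()
    {
        PrimaryActorTick.bCanEverTick = true;
        Screen = CreateDefaultSubobject<UStaticMeshComponent>(TEXT("Screen"));
        RootComponent = Screen;
    }

    // Assigned in the editor: source URL, player/texture assets, base material.
    UPROPERTY(EditAnywhere, Category = "Live Feed")
    FString FeedUrl;

    UPROPERTY(EditAnywhere, Category = "Live Feed")
    UMediaPlayer* MediaPlayer = nullptr;

    UPROPERTY(EditAnywhere, Category = "Live Feed")
    UMediaTexture* MediaTexture = nullptr;

    UPROPERTY(EditAnywhere, Category = "Live Feed")
    UMaterialInterface* BaseMaterial = nullptr;

    UPROPERTY(VisibleAnywhere, Category = "Live Feed")
    UStaticMeshComponent* Screen = nullptr;

protected:
    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        if (!MediaPlayer || !MediaTexture || !BaseMaterial)
        {
            return;
        }

        // camera -> GPU -> Unreal: the live frames land in the texture the user
        // is actually looking at, via the stock Media Framework path.
        MediaTexture->SetMediaPlayer(MediaPlayer);
        MediaTexture->UpdateResource();

        UMaterialInstanceDynamic* Mid = UMaterialInstanceDynamic::Create(BaseMaterial, this);
        Mid->SetTextureParameterValue(TEXT("LiveFeed"), MediaTexture);
        Screen->SetMaterial(0, Mid);

        MediaPlayer->OpenUrl(FeedUrl);
    }

    virtual void Tick(float DeltaSeconds) override
    {
        Super::Tick(DeltaSeconds);

        // human interaction -> response: the "closed loop" would live here.
        // Even before any response logic, logging DeltaSeconds gives a first feel
        // for the render-side share of the camera-to-response latency budget.
    }
};

The point is not this particular Actor; it's that everything the loop needs is already stock Unreal, so the experiment is mostly about measuring and tuning responsiveness rather than building new machinery.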

At a minimum, it looks like a compelling sandbox for students and developers to explore timing, responsiveness, and control intuition on the GPU.
At best, it might explain why some interactive systems feel “alive” while others never quite cross that threshold.

This may be trivial, or it may be surprisingly interesting — which is exactly why I’m curious.

If this is already common practice inside Epic Games / Unreal teams, I’d genuinely love to learn from it. If not, it feels like a low-risk experiment that fits Unreal’s real-time DNA perfectly.

All the best,
Giuliano