I’m working on a real-time immersive room (~16 m × 5 m) with nDisplay driving a floor and a wall projector. User tracking is via LiDAR, and the interaction logic moves in-scene objects based on user position.
The scene is large, and the nDisplay viewpoints necessarily sit far back from the screens to cover the whole surface. This creates significant perspective distortion, especially on the floor projection: interaction objects had to be moved considerably downward and forward relative to the camera frustum before they appeared at the correct visual position on the projected surface. That makes the interaction system hard to tune, because the world-space positions I set in the editor don't intuitively correspond to what the audience actually sees on the projected surfaces.
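For context, the "moved downward and forward" effect is just ray geometry: from the nDisplay viewpoint, an object appears on the floor wherever the line from the viewpoint through the object intersects the floor plane. A minimal sketch (made-up coordinates, z-up, floor at z = 0, nothing Unreal-specific):

```python
def apparent_floor_position(eye, obj):
    """Intersect the ray eye -> obj with the floor plane z = 0.

    eye, obj: (x, y, z) tuples in world space, z up.
    Returns the (x, y) point where the object appears to sit on the
    floor from that viewpoint, or None if the ray never reaches it.
    """
    ex, ey, ez = eye
    ox, oy, oz = obj
    dz = oz - ez
    if dz == 0:
        return None  # ray parallel to the floor
    t = -ez / dz  # parameter where z(t) = ez + t*dz == 0
    if t <= 0:
        return None  # floor plane is behind the viewpoint
    return (ex + t * (ox - ex), ey + t * (oy - ey))

# Viewpoint pulled far back and up to cover a long floor (hypothetical numbers):
eye = (-8.0, 0.0, 3.0)
# An object floating 1 m above the floor, 4 m in front of the eye:
obj = (-4.0, 0.0, 1.0)
print(apparent_floor_position(eye, obj))  # → (-2.0, 0.0)
```

The object placed at x = -4 reads to the audience as sitting at x = -2, i.e. pushed forward, which matches the offsetting you describe having to do by hand.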
The real compounding issue is that I cannot test on the actual hardware (the installation is in another city), so I need to be able to validate interaction behaviour entirely from my development machine.
Is there a way to preview the nDisplay output (with the correct per-viewport frustum and projection) directly inside the editor during Play-in-Editor (PIE), without having to deploy to the full cluster?
Specifically, I’d like to be able to see what the floor viewport camera actually renders (distortion included), so I can place and tune interaction triggers with confidence that what I’m seeing in the editor matches what will be projected.
I tried placing a camera at the viewport position and rendering to a Render Target, but that doesn't replicate the exact off-axis projection nDisplay applies. The in-viewport nDisplay Preview actor helps for layout but isn't usable during PIE.
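For a simple planar screen, the off-axis projection can be derived from the screen's corner positions and the eye position alone, following Kooima's "Generalized Perspective Projection" formulation; this is a standalone sketch of that math (not nDisplay's actual code), producing an OpenGL-style clip-space matrix. If I recall correctly, SceneCaptureComponent2D exposes a custom projection matrix option (`bUseCustomProjectionMatrix` / `CustomProjectionMatrix`), so an adapted version of this matrix might let a capture replicate the per-viewport frustum, though Unreal's left-handed, reversed-Z conventions would need converting. All coordinates below are made up:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def off_axis_projection(pa, pb, pc, pe, near, far):
    """Off-axis projection for a planar screen (Kooima-style).

    pa, pb, pc: screen corners (lower-left, lower-right, upper-left).
    pe: eye/viewpoint position. Returns a 4x4 world -> OpenGL clip matrix.
    """
    pa, pb, pc, pe = map(np.asarray, (pa, pb, pc, pe))
    vr = normalize(pb - pa)           # screen-space right axis
    vu = normalize(pc - pa)           # screen-space up axis
    vn = normalize(np.cross(vr, vu))  # screen normal, toward the eye
    va, vb, vc = pa - pe, pb - pe, pc - pe
    d = -np.dot(va, vn)               # eye-to-screen-plane distance
    # Asymmetric frustum extents on the near plane.
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d
    # Standard glFrustum-style matrix for those extents.
    P = np.array([
        [2*near/(r-l), 0.0,          (r+l)/(r-l),           0.0],
        [0.0,          2*near/(t-b), (t+b)/(t-b),           0.0],
        [0.0,          0.0,          -(far+near)/(far-near), -2*far*near/(far-near)],
        [0.0,          0.0,          -1.0,                   0.0],
    ])
    # Rotate the world so the screen is axis-aligned, then move the eye to the origin.
    M = np.eye(4)
    M[0, :3], M[1, :3], M[2, :3] = vr, vu, vn
    T = np.eye(4)
    T[:3, 3] = -pe
    return P @ M @ T

# A 4 m x 2.5 m wall 2 m in front of an off-center eye (hypothetical numbers):
proj = off_axis_projection(
    pa=(-2.0, -1.25, -2.0), pb=(2.0, -1.25, -2.0), pc=(-2.0, 1.25, -2.0),
    pe=(1.0, 0.5, 0.0), near=0.1, far=100.0)
```

A sanity check on a matrix like this: projecting a screen corner through it should land exactly on the corresponding NDC corner (±1, ±1) regardless of where the eye sits, which is precisely the property the Render Target experiment was missing.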
Is there an officially supported workflow for this? Or any community approach (Blueprint, render target tricks, custom preview actors) that gets close to a true per-viewport preview during development?
Any help appreciated!