I am not a graphics programmer, so feel free to treat this as the ravings of a madman, but I think it might be possible to achieve at least the same visual effect without fundamentally changing everything about the engine.
All you need is the ability to render multiple views (in linear projection) and then stitch them together into a single view that can be mapped to any projection you like.
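For the curious, the stitching math is roughly this. A minimal C++ sketch, assuming an equidistant fisheye and six 90° views arranged as a cube; every name in it is made up for illustration, not taken from any engine:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
struct Sample { int face; float u, v; }; // which linear view, and where in it

// Map an output pixel (u, v in [-1, 1]; r <= 1 is inside the image circle)
// to a view ray for an equidistant fisheye with the given FOV (radians).
Vec3 fisheyeRay(float u, float v, float fov) {
    float r = std::sqrt(u * u + v * v); // distance from image center
    float theta = r * (fov * 0.5f);     // equidistant: angle grows linearly with r
    float phi = std::atan2(v, u);       // azimuth around the view axis (+Z)
    return { std::sin(theta) * std::cos(phi),
             std::sin(theta) * std::sin(phi),
             std::cos(theta) };
}

// Pick which of the six views the ray lands in, and the UV within it.
// This is the same dominant-axis test a cubemap lookup performs; the
// u/v signs must match however the six capture cameras are oriented.
Sample pickView(Vec3 d) {
    float ax = std::fabs(d.x), ay = std::fabs(d.y), az = std::fabs(d.z);
    Sample s;
    if (ax >= ay && ax >= az)      { s.face = d.x > 0 ? 0 : 1; s.u = d.z / ax; s.v = d.y / ax; }
    else if (ay >= ax && ay >= az) { s.face = d.y > 0 ? 2 : 3; s.u = d.x / ay; s.v = d.z / ay; }
    else                           { s.face = d.z > 0 ? 4 : 5; s.u = d.x / az; s.v = d.y / az; }
    s.u = 0.5f * (s.u + 1.0f);     // remap [-1, 1] to [0, 1] texture space
    s.v = 0.5f * (s.v + 1.0f);
    return s;
}

int main() {
    // Center pixel of a 180-degree fisheye looks straight down +Z.
    Sample s = pickView(fisheyeRay(0.0f, 0.0f, 3.14159265f));
    std::printf("face %d, uv (%.2f, %.2f)\n", s.face, s.u, s.v);
}
```

Run that per output pixel (in practice in a shader) and you can map the six renders to any projection just by swapping out fisheyeRay.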
I made a “fisheye” “camera” a while back that did this, using render targets for the views and a spherized cube mesh to merge them. I got the idea from Fisheye Quake, which does something similar (except they render a full panorama, and they don’t use a mesh because they’re smarter than I am).
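The spherized cube part is less exotic than it sounds: build each cube face as a grid, push the vertices onto the unit sphere, and keep the face’s own UVs so it samples its render target. Rough C++ sketch of one face (helper names are mine, not from any engine):

```cpp
#include <cmath>
#include <vector>

struct Vertex { float px, py, pz; float u, v; };

// Grid for the +Z cube face, spherized. The other five faces are the
// same grid rotated to point along their own axis.
std::vector<Vertex> spherizedFace(int n) {
    std::vector<Vertex> verts;
    verts.reserve((n + 1) * (n + 1));
    for (int j = 0; j <= n; ++j) {
        for (int i = 0; i <= n; ++i) {
            float u = float(i) / n, v = float(j) / n; // face-local UV
            float x = 2.0f * u - 1.0f;                // cube face spans [-1, 1]
            float y = 2.0f * v - 1.0f;
            float len = std::sqrt(x * x + y * y + 1.0f);
            // Normalizing onto the unit sphere puts every vertex at the exact
            // direction its texel was rendered from, so adjacent faces line
            // up at the seams instead of kinking at the cube edges.
            verts.push_back({ x / len, y / len, 1.0f / len, u, v });
        }
    }
    return verts;
}
```

Center a camera inside the mesh and texture each face with its render target, and the seams disappear (for color, anyway).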
The main issue with my approach is that I’m only able to seamlessly merge the final color, not the individual gbuffers, so any screenspace effect that relies on information from other parts of the screen will not work correctly. However… if the engine had an innate ability to merge multiple render views, then I think this could be done.
Conveniently… I think the engine can already do this. It looks like that is how the CaptureCube actor works (rendering six views with linear projection and stitching them together). The screenspace effects from CaptureCube behave as if it were a single image, which suggests to me that the gbuffers are being merged.
Edit: I was wrong; CaptureCube does show seams between the individual views. Could have sworn it didn’t, but I was wrong.