I’m trying to create a system for angled split-screen, and I believe I can leverage the engine’s existing systems to my advantage. I’m hoping it might even be able to benefit from Instanced Stereo Rendering. The game is top-down, with two players in the world. As the distance between the two pawns grows, the single view-camera rises to keep both players in frame. Past a certain distance, the camera splits into two views that follow the players independently, until they come close enough to be framed by one camera again.
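For context, the split/merge decision itself is simple - though in my experiments a hysteresis band (split and merge at slightly different distances) helps avoid flickering between one and two cameras when the pawns hover near the threshold. A minimal sketch, with placeholder threshold values that aren’t from the engine:

```cpp
#include <cassert>

// Split/merge decision with hysteresis: split when the pawns move apart
// beyond SplitDistance, but only merge again once they come back within
// the smaller MergeDistance. Thresholds are assumed, not engine values.
struct FSplitState
{
    bool  bSplit         = false;
    float SplitDistance  = 2000.f; // split beyond this (hypothetical units)
    float MergeDistance  = 1800.f; // merge again only below this

    void Update(float PawnDistance)
    {
        if (!bSplit && PawnDistance > SplitDistance)
            bSplit = true;
        else if (bSplit && PawnDistance < MergeDistance)
            bSplit = false;
    }
};
```

The gap between the two thresholds is the band in which the current state is kept, whichever it is.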
I have this working with a Render Target. However, RTs have some significant issues. First, I have to render the entire scene twice. Second, Anti-Aliasing doesn’t work for Render Targets at all. This is a huge problem for my game, since shipping without AA simply isn’t an option. To better explain, here’s a video of it in action. It doesn’t show the AA issues very well - but in a more complex scene they are extremely visible, so this system will not be shippable. Bloom is broken, the view is heavily aliased, and the flickering is too intense. It’s also far too slow for consoles, even in a simple scene.
I now want to try to change this system so that instead of using a render target, I actually fill the GBuffer from the two different viewpoints. I’m certain there will be some artefacts from screen-space processing around the split area, but some AA with small artefacts is much better than none at all. I can also always try switching to the forward renderer and MSAA in the future. It should also be much, much faster - the current solution is way too slow.
First, however, I need to get to the stage of filling the GBuffer. What I want to do is generate a render target that is essentially the “Mask” between the two zones in screen-space - a stencil, in effect. This should be easy enough; if nothing else, I can do it with DrawMaterialToRenderTarget and a material.
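For anyone wondering what that mask material would compute: the natural angled split is the perpendicular bisector of the two players’ screen-space positions (the “Voronoi split-screen” idea), which gives the split its angle automatically as the players move around each other. Here’s a CPU-side sketch of the per-pixel test the material would perform - all names are hypothetical, and positions are in normalised screen space:

```cpp
#include <cassert>

struct FVec2 { float X, Y; };

// Returns true if the pixel falls on player A's side of the perpendicular
// bisector between the two players' screen-space positions.
bool PixelBelongsToA(FVec2 Pixel, FVec2 ScreenPosA, FVec2 ScreenPosB)
{
    // The midpoint of the two screen positions lies on the dividing line.
    FVec2 Mid { 0.5f * (ScreenPosA.X + ScreenPosB.X),
                0.5f * (ScreenPosA.Y + ScreenPosB.Y) };

    // The line's normal points from B towards A, so the split angle
    // follows the players' relative positions on screen.
    FVec2 Normal { ScreenPosA.X - ScreenPosB.X,
                   ScreenPosA.Y - ScreenPosB.Y };

    // Signed-distance test: positive means the pixel is on A's side.
    float Side = (Pixel.X - Mid.X) * Normal.X + (Pixel.Y - Mid.Y) * Normal.Y;
    return Side > 0.f;
}
```

In the material this is just a dot product against the screen UV, outputting 0 or 1 (or a soft edge, if you want to blend across the split line).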
What I then want to do is send that texture to the Renderer, so that the GBuffer encodes each pixel from a different viewpoint depending on the colour of the render target. As far as I can tell, this needs to be done in two passes, since everything in the scene needs to be transformed from one viewpoint to the other. HMDs basically do exactly what I want, except they draw two rectangular chunks instead of two arbitrarily shaped quads.
So my question is - has anybody done this at all, or does anybody know how I could leverage the HMD system, so that I can create an angled split-screen display instead of two side-by-side viewports? The angled split is crucial to the game’s design - I really don’t want to have to revert to regular split-screen.
I’ve looked at ISceneViewExtension but have trouble understanding what it’s actually doing. I’ve also studied DeferredShadingCommon.usf, and in particular the EncodeGBuffer function - but I can’t work out how to make it read from a texture and choose whether to skip or write a pixel based on a texture lookup. I’m hoping that by adding that to the Renderer, I’ll skip a huge portion of render time by not calculating the final colour of unused pixels.
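To make the skip logic concrete, here is a CPU-side sketch of the branch I imagine wrapping around the GBuffer write: each of the two passes keeps a pixel only when the mask agrees with the pass it belongs to. The names, the enum, and the 0/1 mask encoding are all my assumptions, not engine code - in the shader this would be a clip/discard or an early-out before encoding:

```cpp
#include <cassert>
#include <cstdint>

// Which of the two per-player passes is currently rendering.
enum class EViewPass : uint8_t { PlayerA = 0, PlayerB = 1 };

// MaskValue is the mask render target's red channel: 0 = player A's
// region, 1 = player B's region. A pass writes the pixel only when the
// mask assigns that pixel to it; otherwise the pixel is skipped.
bool ShouldWriteGBufferPixel(float MaskValue, EViewPass Pass)
{
    const bool bPixelIsB = MaskValue > 0.5f;
    return bPixelIsB == (Pass == EViewPass::PlayerB);
}
```

The hope is that discarding this early keeps each pass from shading pixels the other view will own.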
Anyone able to provide any help or pointers?