I’m wondering if anybody has experience implementing a Voronoi split-screen system in Unreal? I have found some approaches for this (such as the one here), which involve rendering the entire scene multiple times and masking between the two. The issue I have with this is that you can end up computing twice the number of pixels you actually need, and it won’t scale well.
My plan is to implement this in such a way that when the players are within a certain distance, they share the same screen space. As they move apart, the Voronoi split seamlessly blends in and I switch to separate cameras (or use scene captures, and don’t render from a camera perspective at all), drawing only the required pixels directly into the GBuffer. The game may ultimately have more than two local players.
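To make the distance-based switch concrete, here’s a minimal sketch of how the shared-screen/split decision could be driven, with a blend window rather than a hard cutoff. Everything here (`Vec2`, `SplitBlendFactor`, the distance thresholds) is my own illustrative naming, not engine API:

```cpp
#include <algorithm>
#include <cmath>

struct Vec2 { float x, y; };

static float Distance(const Vec2& a, const Vec2& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y));
}

// Returns 0 while both players can share one camera, and ramps to 1 as they
// move apart. The ramp is the "seamless blend" window before fully switching
// to per-player cameras; the threshold values are placeholders.
float SplitBlendFactor(const Vec2& p0, const Vec2& p1,
                       float SplitStartDist = 1000.f,
                       float SplitEndDist   = 1500.f) {
    float d = Distance(p0, p1);
    float t = (d - SplitStartDist) / (SplitEndDist - SplitStartDist);
    return std::clamp(t, 0.f, 1.f);
}
```

The returned factor could then feed the Voronoi boundary’s edge softness, so the split line fades in rather than popping.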
I realize this is more than likely going to require engine-level changes, but I think it’s the most efficient way to do it and the only real option for scaling a particle-heavy game to console. This is uncharted personal territory for me, so I’m wondering if anybody has any clues on where to start, or whether this is a good approach at all?
So I found an example shader on Shadertoy that does pretty much what I want (although its blending/smoothing isn’t great) and dynamically scales between players. The approach there seems to be to create a fake camera covering all players, then separate ones for groups of players. Ideally, I want to combine this with this
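The “cameras for groups of players” part implies deciding which players are close enough to share a view. A naive way to sketch that grouping (greedy flood-fill: a player joins a group if they’re within range of any member) might look like this — `GroupPlayers`, `P2`, and `GroupDist` are all made-up names for illustration:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct P2 { float x, y; };

// Assigns each player a group index; players within GroupDist of any group
// member share that group (and would share one camera). O(n^3) worst case,
// fine for a handful of local players.
std::vector<int> GroupPlayers(const std::vector<P2>& pos, float GroupDist) {
    std::vector<int> group(pos.size(), -1);
    int next = 0;
    for (std::size_t i = 0; i < pos.size(); ++i) {
        if (group[i] != -1) continue;
        group[i] = next;
        bool changed = true;
        while (changed) {            // keep absorbing until the group is stable
            changed = false;
            for (std::size_t j = 0; j < pos.size(); ++j) {
                if (group[j] != -1) continue;
                for (std::size_t k = 0; k < pos.size(); ++k) {
                    if (group[k] != next) continue;
                    float dx = pos[j].x - pos[k].x;
                    float dy = pos[j].y - pos[k].y;
                    if (std::sqrt(dx * dx + dy * dy) <= GroupDist) {
                        group[j] = next;
                        changed = true;
                        break;
                    }
                }
            }
        }
        ++next;
    }
    return group;
}
```

Each resulting group would then get one “fake camera” framing its members, with the Voronoi split only drawn between groups.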
The final function is really the key part: the shader iterates over every pixel in screen space, works out which “camera” it belongs to, and renders that pixel from that camera. To avoid using 4–5 full-colour render targets in UE4, I guess I want to inject something similar into the UE4 renderer?
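The per-pixel ownership test in that final function is just a Voronoi cell lookup: each pixel belongs to the player whose screen-space anchor is nearest. In a real integration this would run in a shader and the chosen index would select which view writes that pixel; here it’s plain C++ so the logic is checkable. Names (`OwningCamera`, `ScreenPos`, “anchor”) are illustrative:

```cpp
#include <cstddef>
#include <limits>
#include <vector>

struct ScreenPos { float x, y; };

// Returns the index of the camera/player whose screen-space anchor is
// closest to the given pixel, i.e. which Voronoi cell the pixel falls in.
// Squared distance avoids a sqrt per pixel.
std::size_t OwningCamera(const ScreenPos& pixel,
                         const std::vector<ScreenPos>& anchors) {
    std::size_t best = 0;
    float bestDistSq = std::numeric_limits<float>::max();
    for (std::size_t i = 0; i < anchors.size(); ++i) {
        float dx = pixel.x - anchors[i].x;
        float dy = pixel.y - anchors[i].y;
        float distSq = dx * dx + dy * dy;
        if (distSq < bestDistSq) {
            bestDistSq = distSq;
            best = i;
        }
    }
    return best;
}
```

For the soft boundary, the shader version would compare the two smallest distances and blend where they’re nearly equal, rather than taking a hard argmin like this sketch does.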