Efficient Voronoi Split-Screen

I’m wondering if anybody has experience implementing a Voronoi split-screen system in Unreal? I have found some approaches for this (such as the one here), which involve rendering the entire scene multiple times and just masking between the two. The issue I have with this is that you can ultimately end up computing twice as many pixels as you actually need, and it won’t scale well.

My plan is to implement this in such a way that when the players are within a certain distance of each other, they share the same screen space. As they move apart, the Voronoi split seamlessly blends in and I would switch to two different cameras (or use scene captures, and not render from the camera perspective at all), drawing only the required pixels directly to the GBuffer. The game may ultimately have more than two local players.
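Roughly, the distance check I have in mind looks like this (class and property names here are placeholders for my own manager, not anything from the engine):

```cpp
// Sketch of the distance check that decides when to start splitting the screen.
// AVoronoiSplitManager, SplitStartDistance and SplitFullDistance are placeholder names.
void AVoronoiSplitManager::UpdateSplitState(const TArray<APawn*>& Players)
{
    float MaxPairDistance = 0.f;
    for (int32 i = 0; i < Players.Num(); ++i)
    {
        for (int32 j = i + 1; j < Players.Num(); ++j)
        {
            const float Dist = FVector::Dist(Players[i]->GetActorLocation(),
                                             Players[j]->GetActorLocation());
            MaxPairDistance = FMath::Max(MaxPairDistance, Dist);
        }
    }

    // 0 = everyone shares one view, 1 = fully split; used to blend the Voronoi edge in.
    SplitAlpha = FMath::Clamp((MaxPairDistance - SplitStartDistance) /
                              (SplitFullDistance - SplitStartDistance), 0.f, 1.f);
    bSplitActive = SplitAlpha > 0.f;
}
```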

I realize this is more than likely going to require engine-level changes but I think this is the most efficient way to do it and the only real option for scaling a particle-heavy game for console. I’m kind of going into uncharted personal territory for this, so wondering if anybody has any clues on where to start, or whether this is a good approach?

Edit #1:
So I found an example shader on Shadertoy that does pretty much what I want (although its blending / smoothing isn’t great) and dynamically scales between players. The approach here seems to be to create a fake camera for all players, then separate ones for groups of players. Ideally, I want to combine this with this

The final function is the key part really - the shader iterates every pixel in screen-space, works out which “camera” it belongs to and renders that pixel from that camera. To prevent using 4-5 full colour render targets in UE4, I guess I want to inject something similar into the UE4 renderer?

Any clues on where to start? Paging @DanielW and @

The Voronoi part is really easy. You just need to get the position of each character as a parameter. Then for each screen pixel you get the distance to each and find out which is closest. Based on which is closest, you write down a solid colour, and then use that value as the index into your two render targets.
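In C++ terms it’s just a nearest-point test per pixel; something like this (the projected player positions are assumed to come from ProjectWorldLocationToScreen or your material parameters):

```cpp
// Sketch of the per-pixel Voronoi decision, written CPU-side for clarity
// (in practice you'd do the same thing per pixel in a material/shader).
// PlayerScreenPositions are the player locations projected into screen space.
int32 ClosestPlayerIndex(const FVector2D& Pixel, const TArray<FVector2D>& PlayerScreenPositions)
{
    int32 BestIndex = 0;
    float BestDistSq = TNumericLimits<float>::Max();
    for (int32 i = 0; i < PlayerScreenPositions.Num(); ++i)
    {
        const float DistSq = FVector2D::DistSquared(Pixel, PlayerScreenPositions[i]);
        if (DistSq < BestDistSq)
        {
            BestDistSq = DistSq;
            BestIndex = i;
        }
    }
    // This index is the "solid colour" you write into the mask, and later the
    // index of which render target to sample for this pixel.
    return BestIndex;
}
```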

Yes, this does seem like a hugely wasteful approach. Your render targets will have to be rectangular. You could optimize them to fit the exact extents of the Voronoi shapes (they’d still be rectangular), but since you have to support the case where the split is at 45 degrees, you need it to run fast enough in that mode anyway - so you might as well wait until the very end before trying tricky optimizations like sizing the render targets to the Voronoi extents.

@

Yeah, and I can imagine that resizing the render targets per-frame would have some inherent cost of its own too. It might end up being faster to just have four full-size render targets and capture components, one per player, and allocate them at startup.

I wonder if I could do this by still creating the full-size render targets, but only part-filling them and then masking between each using a post-process shader or similar. The expensive part is computing the final colour of each pixel in the RT (I guess), so if I could somehow pass a stencil to the scene capture which says “don’t bother with these pixels”, I could probably save a big chunk of rendering time.

In an ideal world, I want to support up to four players for local split-screen. I’m not sure if a 2-bit texture format exists, but that would be enough data to store a stencil mask for four players (00, 01, 10, 11) - see the packing sketch after this list. My approach would be:

  • Render the stencil mask on the CPU before the main render thread runs.
  • Clear the four render targets to black.
  • Pass the stencil mask to each scene capture, and have them only capture the required area.
  • Blend between the four render targets using the existing stencil mask, then draw to screen.
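To illustrate what I mean by the 2-bit mask (just a sketch; in practice I’d probably settle for an 8-bit single-channel texture such as PF_G8, since I don’t think a true 2-bit format exists):

```cpp
// Sketch: pack four 2-bit player indices (values 0-3) into one byte,
// i.e. four mask pixels per byte. An 8-bit R channel per pixel is probably
// simpler in practice and still tiny.
TArray<uint8> PackMask2Bit(const TArray<uint8>& PerPixelPlayerIndex)
{
    TArray<uint8> Packed;
    Packed.SetNumZeroed((PerPixelPlayerIndex.Num() + 3) / 4);
    for (int32 i = 0; i < PerPixelPlayerIndex.Num(); ++i)
    {
        const uint8 Index = PerPixelPlayerIndex[i] & 0x3; // 00, 01, 10 or 11
        Packed[i / 4] |= Index << ((i % 4) * 2);          // 4 entries per byte
    }
    return Packed;
}
```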

Is this something that the engine would support natively? I guess the only problem with this approach is that post-processing would be run on the entire screen, not on each individual render target, so there could potentially be leakage from bloom etc. If post-processing is applied before the render targets are drawn to screen, it might be easy to get rid of that by multiplying the processed render target with the stencil mask…

Mostly thinking aloud here. Is this something that seems viable? Not sure where to even start looking :wink:

Time for a bump. Started working on this and sort of have the basis of it working. My player clustering / Voronoi method isn’t the best though…

Currently the manager creates a render target and scene capture for each Voronoi chunk, and updates it only when that chunk is visible, to save performance. Unfortunately, rendering even a simple scene four times (at full resolution, no less) is way too expensive, so this method isn’t really going to cut it - especially if I want it to run on PS4. FPS drops below 60 on a GTX 980 on my machine. Even if I trim the render target size to fit within the chunk bounds, I’m probably not going to save a lot of performance, and there will be edge cases where it still drags behind.

Basically, even at this stage it’s easy to tell that creating separate render targets is gonna be too much. RTs also don’t support a few rendering features like motion blur, so this is a complex problem. The ONLY workaround I can think of here is to continue generating the screen-space Voronoi mask, and then, when rendering the GBuffer, have the renderer figure out which camera to render each pixel from based on the Voronoi mask. There must be a way to do this, right?

Unfortunately, that last part is way outside my area of expertise - so seeking help here!

Making tighter frustum planes for cameras would help with geometry pressure.

Looks interesting as a technical implementation, but it’s weird in-game.
This was used in Renegade Ops for two-player co-op, and it really wasn’t useful.
Every second you’re thinking about where your character is now. A dynamic screen like that is confusing.
I’d rather use a static split-screen.

Static split-screen doesn’t really work for this game, there’s a design reason for implementing it as Voronoi instead. The location of players relative to other players is one of the most important aspects of the game. At the moment though, the render target method seems like the only potential approach - but that limits the game to 2 players at the very most.

This presentation shows how it can be made to work very well, and keeps the boundary points reasonable.

Alright, so the original idea wasn’t working too well, and Voronoi for more than 2 players is tricky… so I just got it working for 2 players first. I’m using a single render target this time, and splitting the view via a post-process blendable that blends between it and the default player camera.

The render target is still costly, but this is probably totally usable for two players. I’ve not had a chance to run it on PS4 / Xbone yet; it will be interesting to see how it copes (effectively, I’m rendering 2K :/).

Still keen to get away from render targets though, and save processing every pixel for both the GBuffer and the RT.

@
This is really cool!

If you don’t mind using a custom engine build, I have a couple of ideas on how you can leverage the scene views to achieve that.
Basically, VR rendering is a simple case of what you want, including masking out dead areas.
I’ve been looking for a good rendering pipeline challenge and I’ll be glad to help.

I think I have another hybrid approach, using the split-screen system and something based on VR rendering.

The key component is the ISceneViewExtension class.

So this is how this will work (general outline).

  1. Create a new ISceneViewExtension-derived class that will act as the manager and will be in charge of the split-screen viewport allocations.
  2. Create a new ULocalPlayer-derived class and override the CalcSceneView function to check with the view extension whether it should create a new view or not.

The key locations in the code you should look at are

FSceneView* ULocalPlayer::CalcSceneView(class FSceneViewFamily* ViewFamily, FVector& OutViewLocation, FRotator& OutViewRotation, FViewport* Viewport, class FViewElementDrawer* ViewDrawer, EStereoscopicPass StereoPass) in LocalPlayer.cpp

void UGameViewportClient::Draw(FViewport* InViewport, FCanvas* SceneCanvas) in GameViewportClient.cpp

and SceneViewExtension.h
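A bare-bones sketch of point 2, using the CalcSceneView signature above (UVoronoiLocalPlayer, FVoronoiSplitManager and ShouldCreateView are placeholder names, not engine API, and whether returning nullptr is acceptable can vary by engine version):

```cpp
// Sketch only: a ULocalPlayer subclass that asks a hypothetical split-screen
// manager whether this player needs its own FSceneView this frame.
#include "Engine/LocalPlayer.h"
#include "VoronoiLocalPlayer.generated.h" // UHT-generated header for this placeholder class

UCLASS()
class UVoronoiLocalPlayer : public ULocalPlayer
{
    GENERATED_BODY()

public:
    virtual FSceneView* CalcSceneView(FSceneViewFamily* ViewFamily,
        FVector& OutViewLocation, FRotator& OutViewRotation,
        FViewport* Viewport, FViewElementDrawer* ViewDrawer,
        EStereoscopicPass StereoPass) override
    {
        // Ask the manager (the ISceneViewExtension-derived class from step 1)
        // whether this player currently shares the merged view or gets its own.
        if (SplitManager.IsValid() && !SplitManager->ShouldCreateView(this))
        {
            return nullptr; // no separate view for this player this frame
        }
        return Super::CalcSceneView(ViewFamily, OutViewLocation, OutViewRotation,
                                    Viewport, ViewDrawer, StereoPass);
    }

    // Placeholder manager, created and assigned elsewhere; ownership details omitted.
    TSharedPtr<class FVoronoiSplitManager, ESPMode::ThreadSafe> SplitManager;
};
```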

I would certainly rather use cameras, because the problem with render targets is that they don’t have motion blur and anti-aliasing. When the screen splits, you can clearly see the bloom affecting one RT more than the other, for example.

@lion032 - I don’t mind using a custom engine (gonna have to eventually for console builds anyway, and I have a build machine on the way). Thanks for the pointers, I’ll investigate that when I come back to optimizing this a bit more and trying to fix the disconnect between the two render targets. I’d essentially have to trick the engine into generating two viewports that occupy the same space - and I don’t know if that’s possible (yet). Is the GBuffer sized and rendered for the whole screen, or duplicated, made smaller, and rendered for each eye independently?

Then it’s a question of working out which one to draw where. Viewports are always rectangular, so blending between them at an angle will be tricky. I’d have to create some material functions in code that allow access to both GBuffers, not just the one.

The best way to save performance here would be to pass the mask to the renderer directly. You can’t do any culling as such for each view (as far as I can tell), because the projection matrix is still the same for either viewport (and rectangular), but you could perhaps save on the final gather for scene colour, the depth pre-pass, etc. That’s what I figure anyway…

@

The manager will be responsible for creating a mask for each viewport. This mask can be fed into the pre-pass rendering, and even if the viewport is a rectangle, only the unmasked pixels will pass.
Doing this for each viewport will fill the final render target with the correct viewports without them overwriting each other.

The GBuffers are sized for the whole screen AFAIK, and during rendering each view sets its correct viewport.

No, each view has its own projection matrix.

Have you had any luck passing a mask to the pre-pass stage of the renderer, then? That’s where I’m stuck, since I don’t think it’s possible. Creating the mask is easy.

It shouldn’t be a problem - I did something similar at work.

You can add more frustum planes so frustum culling will be more effective.
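Just to show the math side of that (where exactly you inject the extra plane into the culling pass is the engine-specific part; the boundary inputs below are assumed to come from the Voronoi edge between the two players):

```cpp
// Sketch: one extra culling plane along the Voronoi boundary for a view,
// rejecting objects that are entirely on the other player's side.
bool IsVisibleForView(const FBoxSphereBounds& Bounds,
                      const FVector& BoundaryPoint,
                      const FVector& BoundaryNormal)
{
    // Plane through the boundary point, facing into this view's half-space.
    const FPlane SplitPlane(BoundaryPoint, BoundaryNormal);

    // Signed distance of the bounds origin to the plane; if the whole bounding
    // sphere is behind it, the object can't contribute to this view's pixels.
    const float Distance = SplitPlane.PlaneDot(Bounds.Origin);
    return Distance > -Bounds.SphereRadius;
}
```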

This is all a little out of my domain :stuck_out_tongue:

Alright so coming back to this after a few weeks, I have a bit more time to work on it and want to get something implemented. I’m sticking to two-player only now to make life easy, but there is still one key problem currently. I’m starting to get the general idea of how to approach this but need more help with the specifics, following from @lion032’s approach.

So as far as I can tell, I can create a class that inherits from ISceneViewExtension, which will replace my current manager. I was hoping that creating the mask would be quite easy; my initial idea was to use DrawMaterialToRenderTarget with the same material logic I have now, to fill the render target with a material rather than doing so in C++. Unfortunately, drawing to a render target needs the render thread to actually run, and the render target needs to be filled before I pass it into the renderer, so I imagine this won’t work. Unless I’m wrong?

So the first barrier is how to create the mask texture to pass into the pre-pass rendering stage. Any ideas on that?

Next, I notice that GameViewportClient::Draw calls GatherViewExtensions(), which means (I guess) that each extension works on a per-viewport basis. This is fine, but my viewports need to overlap each other in order to get the diagonal line effect (they’re both full-screen viewports). This is where I start to get confused - how can I have two viewports occupying the same space and both be visible? Surely that’s impossible? I’m guessing only the viewport size is what matters, so maybe it doesn’t matter that I tell the engine to create two viewports directly on top of each other.

BTW if anyone can provide some code here, that would help me out a tonne - but right now I don’t see how this can logically work.


After thinking about this some more - I wonder if I can just create my own “HMD device” and manipulate the eye positions? I guess the problem then is all the distortion and extra culling that gets applied :confused:


Thinking more about drawing to the render target: all it does is enqueue a render command, so actually it shouldn’t matter, since the renderer will do that first anyway.
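So the setup could be as simple as this (all the names here are my own placeholders; DrawMaterialToRenderTarget is from UKismetRenderingLibrary, assuming the engine version has it):

```cpp
#include "Engine/TextureRenderTarget2D.h"
#include "Kismet/KismetRenderingLibrary.h"

// Sketch: create the mask render target once, then redraw the Voronoi mask
// material into it every frame before the views render. MaskTarget and
// MaskMaterial are members of my (placeholder) AVoronoiSplitManager.
void AVoronoiSplitManager::InitMask(int32 Width, int32 Height)
{
    MaskTarget = NewObject<UTextureRenderTarget2D>(this);
    MaskTarget->InitAutoFormat(Width, Height);
}

void AVoronoiSplitManager::UpdateMask()
{
    // This only enqueues a render command, so as long as it's called before the
    // scene views are rendered, the mask should be ready in time.
    UKismetRenderingLibrary::DrawMaterialToRenderTarget(this, MaskTarget, MaskMaterial);
}
```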

Did you end up making any progress on this? I’d love to know which areas worked out or not.

@ Sorry to ping you 3 years later (don’t know if you ever made any progress on this), but I dug around and found that inside the game viewport client, when a player is added, it has to notify the [IGameLayerManager], which is located in [SGameLayerManager.h/.cpp] and is subclassed down to the slate widget [SGameLayerManager]. As a heads up, this gets created in [PlayLevel.cpp] with the line [TSharedRef<SGameLayerManager> GameLayerManagerRef = SNew(SGameLayerManager)] inside the function [UEditorEngine::GeneratePIEViewportWindow], so you’ll have to make an engine modification since you can’t set it from the project settings or anything (hopefully Epic or somebody will add that to make everybody’s lives easier). The slate widget is told to [SGameLayerManager::UpdateLayout()], which handles removing viewports for each [ULocalPlayer] that no longer exists, and adding/updating the current player layers using [SGameLayerManager::AddOrUpdatePlayerLayers(const FGeometry& AllottedGeometry, UGameViewportClient* ViewportClient, const TArray<ULocalPlayer*>& GamePlayers)]. So if any viewports were removed, it then updates the remaining ones to fix their positioning and sizing - that’s the function you would want to override (honestly, Epic should make the positioning part its own function so people can override just that and not the other parts, but since you can’t subclass the layer manager without an engine modification it’s kind of moot, right? I do still suggest inheriting rather than modifying it in place).

Organized version (requires engine modification):

-OPTIONAL-


UEditorEngine::GeneratePIEViewportWindow(const FRequestPlaySessionParams& InSessionParams, int32 InViewportIndex, const FWorldContext& InWorldContext, EPlayNetMode InNetMode, UGameViewportClient* InViewportClient, FSlatePlayInEditorInfo& InSlateInfo)

^PlayLevel.cpp


TSharedRef<SGameLayerManager> GameLayerManagerRef = SNew(SGameLayerManager)

^PlayLevel.cpp, recommend changing SGameLayerManager to a child class of it


I consider this the runtime path for when you add a player and it adds a viewport


UGameViewportClient::GameLayerManagerPtr

^GameViewportClient.cpp/.h, it’s a variable


SGameLayerManager::UpdateLayout()

^SGameLayerManager.cpp


SGameLayerManager::AddOrUpdatePlayerLayers(const FGeometry& AllottedGeometry, UGameViewportClient* ViewportClient, const TArray<ULocalPlayer*>& GamePlayers)

^SGameLayerManager.cpp - inside the for loop it’s updating the widget’s position and size, but you could also override the widget’s Z order and adjust the transition using this information as well :smiley:

Hopefully I was able to help with figuring out the calculations, because at this point it’s just modifying the raw viewport widget’s positioning, sizing, and maybe opacity or Z order. And if you wanted to go deeper, you could make your own custom slate widget of type overlay if you want to modify the shape into something that isn’t square/rectangular, because they just hold the [IGameLayer] widget that is, well… the game, but per viewport…


Neat! Did you end up trying this out? I’d love to get an efficient version of this working, and then see if, in the same project, I can swap this on/off via launch commands so VR players and local co-op players can be in the same world across the web.