    Efficient Voronoi Split-Screen

    I'm wondering if anybody has experience implementing a Voronoi split-screen system in Unreal? I have found some approaches for this (such as the one here), which involve rendering the entire scene multiple times and just masking between the results. The issue I have with this is that you can ultimately end up computing twice the number of pixels you actually need, and it won't scale well.

    My plan is to implement this in such a way that when the players are within a certain distance of each other, they share the same screen space. As they move apart, the Voronoi split blends in seamlessly and I switch to two different cameras (or use scene captures, and don't render from the camera perspective at all), drawing only the required pixels directly to the GBuffer. The game may ultimately have more than two local players.

    I realize this is more than likely going to require engine-level changes, but I think it's the most efficient way to do this and the only real option for scaling a particle-heavy game to console. I'm going into uncharted personal territory here, so I'm wondering if anybody has any clues on where to start, or whether this is even a good approach?

    Edit #1:
    So I found an example shader on Shadertoy that does pretty much what I want (although its blending/smoothing isn't great) and dynamically scales between players. The approach there seems to be to create a fake camera for all players, then separate ones for groups of players. Ideally, I want to combine this with this

    https://www.shadertoy.com/view/4sVXR1

    The final function is the key part really - the shader iterates over every pixel in screen space, works out which "camera" it belongs to, and renders that pixel from that camera. To avoid using 4-5 full-colour render targets in UE4, I guess I want to inject something similar into the UE4 renderer?

    Any clues on where to start? Paging [MENTION=404]DanielW[/MENTION] and [MENTION=3692]RyanB[/MENTION]
    Last edited by TheJamsh; 02-10-2017, 09:46 AM.

    #2
    The Voronoi part is really easy. You just need to pass the position of each character as a parameter. Then, for each screen pixel, you compute the distance to each character and find which is closest. Based on that, you write a solid colour down, and use that value as the index into your two render targets.
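
    A minimal sketch of that per-pixel test (standalone C++ for illustration, not engine code):

    Code:
    // Classify one screen-space position by its nearest player.
    #include <vector>

    struct Float2 { float X, Y; };

    int ClosestPlayerIndex(Float2 Pixel, const std::vector<Float2>& Players)
    {
        int Best = 0;
        float BestDistSq = 1e30f;
        for (int i = 0; i < (int)Players.size(); ++i)
        {
            const float dx = Pixel.X - Players[i].X;
            const float dy = Pixel.Y - Players[i].Y;
            if (dx * dx + dy * dy < BestDistSq)
            {
                BestDistSq = dx * dx + dy * dy;
                Best = i;
            }
        }
        return Best; // index into the per-player render targets
    }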

    Yes, this does seem like a hugely wasteful approach. Your render targets will have to be rectangular. You could optimize them to fit the exact extents of the Voronoi shapes (they'd still be rectangular), but since you have to support the case where the split is at 45 degrees, you need it to run fast enough in that mode anyway - so you might as well wait until the very end before trying tricky optimizations like sizing the render targets to the Voronoi extents.
    Ryan Brucks
    Principal Technical Artist, Epic Games



      #3
      [MENTION=3692]RyanB[/MENTION]

      Yeah, and I can imagine that resizing the render targets per frame would have some inherent cost of its own too; it might end up being faster to just create four full-size render targets and capture components, one per player, and allocate them at startup.

      I wonder if I could do this by still creating the full-size render targets, but only part-filling them and then masking between them using a post-process shader or similar. The expensive part is computing the final colour of each pixel in the RT (I guess), so if I could somehow pass a stencil to the scene capture that says "don't bother with these pixels", I could probably save a big chunk of rendering time.

      In an ideal world, I want to support up to four players in local split-screen. I'm not sure if a 2-bit texture format exists, but two bits would be enough to store a stencil mask for four players (00, 01, 10, 11). My approach would be (a packing sketch follows the list):
      • Render the stencil mask on the CPU before the main render thread runs.
      • Clear the four render targets to black.
      • Pass the stencil mask to each scene capture, and have each one capture only its required area.
      • Blend between the four render targets using the same stencil mask, then draw to the screen.
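
      A rough sketch of the 2-bit packing (standalone C++ for illustration, names are mine):

      Code:
      // Pack four 2-bit player indices per byte for the stencil mask.
      #include <cstdint>
      #include <vector>

      // PlayerIndex is 0..3 (the 00/01/10/11 values above).
      void WriteMaskPixel(std::vector<uint8_t>& Mask, int PixelIndex, uint8_t PlayerIndex)
      {
          const int Byte  = PixelIndex / 4;       // four pixels per byte
          const int Shift = (PixelIndex % 4) * 2; // two bits per pixel
          Mask[Byte] = (Mask[Byte] & ~(0x3 << Shift)) | ((PlayerIndex & 0x3) << Shift);
      }

      uint8_t ReadMaskPixel(const std::vector<uint8_t>& Mask, int PixelIndex)
      {
          return (Mask[PixelIndex / 4] >> ((PixelIndex % 4) * 2)) & 0x3;
      }

      That said, the hardware stencil buffer is 8 bits per pixel anyway, so a plain uint8 index per pixel may end up simpler; the packing only matters if the mask has to live in a minimal texture format.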


      Is this something the engine would support natively? I guess the only problem with this approach is that post-processing would run on the entire screen rather than on each individual render target, so there could be leakage from bloom, etc. If post-processing is applied before the render targets are drawn to the screen, it might be easy to get rid of that by multiplying each processed render target by the stencil mask...

      Mostly thinking aloud here. Does this seem viable? I'm not sure where to even start looking.



        #4
        Time for a bump. I've started working on this and *sort of* have the basics working. My player clustering / Voronoi method isn't the best though...

        Currently the manager creates a render target and scene capture for each Voronoi chunk, and updates each one only when that chunk is visible, to save performance. Unfortunately, rendering even a simple scene four times (at full resolution, no less) is far too expensive, so this method isn't really going to cut it - especially if I want it to run on PS4. FPS drops below 60 on a GTX 980 on my machine. Even if I trim the render target sizes to fit the chunk bounds, I'm probably not going to save much, and there will be edge cases where it still drags behind.



        Basically, even at this stage it's easy to tell that creating separate render targets is going to be too much. RTs also don't support a few rendering features like motion blur, so this is a complex problem. The ONLY workaround I can think of here is to continue generating the screen-space Voronoi mask, then, when rendering the GBuffer, have the renderer figure out which camera to render each pixel from based on the Voronoi mask. There must be a way to do this, right?

        Unfortunately, that last part is way outside my area of expertise - so I'm seeking help here!



          #5
          Making tighter frustum planes for the cameras would help with geometry pressure.
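
          For instance (a sketch only, not engine code): given the NDC bounding rect of a player's Voronoi cell, a crop matrix post-multiplied onto the projection shrinks the frustum - and the culling planes derived from it - to just that region:

          Code:
          // Build a clip-space crop matrix mapping the NDC rect
          // [X0,X1] x [Y0,Y1] onto the full [-1,1] range.
          // Row-major, column-vector convention: NewClip = M * OldClip.
          #include <cstring>

          void MakeCropMatrix(float OutM[4][4], float X0, float X1, float Y0, float Y1)
          {
              std::memset(OutM, 0, sizeof(float) * 16);
              OutM[0][0] = 2.0f / (X1 - X0);       // scale X
              OutM[0][3] = -(X0 + X1) / (X1 - X0); // translate X (times w)
              OutM[1][1] = 2.0f / (Y1 - Y0);       // scale Y
              OutM[1][3] = -(Y0 + Y1) / (Y1 - Y0); // translate Y (times w)
              OutM[2][2] = 1.0f;                   // depth unchanged
              OutM[3][3] = 1.0f;                   // w unchanged
          }
          // Frustum planes extracted from (Crop * Projection) only enclose
          // the cell's bounding rect, so geometry outside it culls away.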



            #6
            Looks interesting as a technical implementation, but it's weird in-game.
            This was used in Renegade Ops for 2-player co-op, and it's really useless.
            Every second you have to think about where your character is now; a dynamic screen like that is confusing.
            I'd rather use some kind of static split-screen.
            Rocketeer




              #7
              Static split-screen doesn't really work for this game; there's a design reason for implementing it as Voronoi instead. The location of players relative to other players is one of the most important aspects of the game. At the moment, though, the render-target method seems like the only feasible approach - but that limits the game to two players at the very most.

              This presentation shows how it can be made to work very well and keeps the boundary points reasonable.



                #8
                Alright, so the original idea wasn't working too well, and Voronoi for more than 2 players is tricky... so I've just got it working for 2 players first. I'm using a single render target this time, and splitting the view via a post-process blendable that blends between it and the default player camera.
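
                For reference, the per-pixel blend is roughly the sketch below (standalone C++ for illustration; the real thing lives in the blendable material, and all the names are mine). It uses the signed distance to the perpendicular bisector of the two players' screen positions, softened over an edge width:

                Code:
                // Blend the main scene colour with the second player's RT
                // along the Voronoi split line, with a soft seam.
                // Assumes the split is active, i.e. the players are apart.
                #include <algorithm>
                #include <cmath>

                struct Float3 { float R, G, B; };

                Float3 Lerp(Float3 A, Float3 B, float T)
                {
                    return { A.R + (B.R - A.R) * T,
                             A.G + (B.G - A.G) * T,
                             A.B + (B.B - A.B) * T };
                }

                Float3 BlendSplit(Float3 Scene, Float3 RT,
                                  float PX, float PY, // pixel
                                  float AX, float AY, // player A (scene view)
                                  float BX, float BY, // player B (render target)
                                  float EdgeWidth)
                {
                    const float MidX = (AX + BX) * 0.5f, MidY = (AY + BY) * 0.5f;
                    const float DirX = BX - AX, DirY = BY - AY;
                    const float Len  = std::sqrt(DirX * DirX + DirY * DirY);
                    // Signed distance from the bisector, positive on B's side.
                    const float Dist = ((PX - MidX) * DirX + (PY - MidY) * DirY) / Len;
                    const float T = std::clamp(Dist / EdgeWidth + 0.5f, 0.0f, 1.0f);
                    return Lerp(Scene, RT, T);
                }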

                The render target is still costly, but this is probably totally usable for two players. I haven't had a chance to run it on PS4 / Xbone yet; it will be interesting to see how it copes (effectively, I'm rendering 2K :/)



                Still keen to get away from render targets though, to avoid processing every pixel for both the GBuffer and the RT.



                  #9
                  Some of the stuff you work on is beyond cool, James!



                    #10
                    [MENTION=155]TheJamsh[/MENTION]
                    This is really cool!

                    If you don't mind using a custom engine build, I have a couple of ideas on how you can leverage the scene views to achieve that.
                    Basically, VR rendering is a simple case of what you want, including masking out dead areas.
                    I've been looking for a good rendering pipeline challenge, and I'd be glad to help.
                    Last edited by lion032; 03-06-2017, 12:50 PM.



                      #11
                      I think I have another hybrid approach, using the split-screen system and something based on VR rendering.

                      The key component is the ISceneViewExtension class.

                      So this is how this will work (general outline):

                      1) Create a new ISceneViewExtension-derived class that will act as the manager and be in charge of the split-screen viewport allocations.
                      2) Create a new ULocalPlayer-derived class and override the CalcSceneView function to check with the view extension whether it should create a new view or not.

                      The key locations in the code to look at are:

                      FSceneView* ULocalPlayer::CalcSceneView(class FSceneViewFamily* ViewFamily, FVector& OutViewLocation, FRotator& OutViewRotation, FViewport* Viewport, class FViewElementDrawer* ViewDrawer, EStereoscopicPass StereoPass) in LocalPlayer.cpp

                      void UGameViewportClient::Draw(FViewport* InViewport, FCanvas* SceneCanvas) in GameViewportClient.cpp

                      and SceneViewExtension.h
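
                      A bare-bones sketch of step 2 might look something like this, assuming a 4.15-era engine (FVoronoiViewManager and ShouldCreateViewFor are hypothetical names, not engine API):

                      Code:
                      // Sketch only: a ULocalPlayer subclass that asks a hypothetical
                      // Voronoi view manager whether this player needs its own view.
                      #include "Engine/LocalPlayer.h"
                      #include "VoronoiLocalPlayer.generated.h"

                      UCLASS()
                      class UVoronoiLocalPlayer : public ULocalPlayer
                      {
                          GENERATED_BODY()

                      public:
                          virtual FSceneView* CalcSceneView(FSceneViewFamily* ViewFamily,
                              FVector& OutViewLocation, FRotator& OutViewRotation,
                              FViewport* Viewport, FViewElementDrawer* ViewDrawer,
                              EStereoscopicPass StereoPass) override
                          {
                              // Hypothetical manager call: does this player share a
                              // merged view with nearby players, or need its own?
                              if (VoronoiManager.IsValid() &&
                                  !VoronoiManager->ShouldCreateViewFor(this))
                              {
                                  return nullptr; // covered by a shared/merged view
                              }
                              return Super::CalcSceneView(ViewFamily, OutViewLocation,
                                  OutViewRotation, Viewport, ViewDrawer, StereoPass);
                          }

                          // Hypothetical ISceneViewExtension-derived manager (step 1).
                          TSharedPtr<class FVoronoiViewManager> VoronoiManager;
                      };
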
                      Last edited by lion032; 03-06-2017, 12:50 PM.



                        #12
                        I would certainly rather use cameras, because the problem with render targets is that they don't get motion blur or anti-aliasing. When the screen splits, you can clearly see bloom affecting one RT more than the other, for example.

                        [MENTION=317]lion032[/MENTION] - I don't mind using a custom engine (I'm going to have to eventually for console builds anyway, and I have a build machine on the way). Thanks for the pointers; I'll investigate that when I come back to optimizing this a bit more and trying to fix the disconnect between the two render targets. I'd essentially have to trick the engine into generating two viewports that occupy the same space - and I don't know if that's possible (yet). Is the GBuffer sized and rendered for the whole screen, or duplicated, made smaller, and rendered for each eye independently?

                        Then it's a question of working out which one to draw where. Viewports are always rectangular, so blending between them at an angle will be tricky. I'd have to create some material functions in code that allow access to both GBuffers, not just one.

                        The best way to save performance here would be to pass the mask to the renderer directly. You can't do any culling as such for each view (as far as I can tell), because the projection matrix is still the same for either viewport (and rectangular), but you could perhaps save on the final gather for scene colour, the depth pre-pass, etc. That's what I figure, anyway...
                        Last edited by TheJamsh; 03-06-2017, 01:22 PM.



                          #13
                          Originally posted by TheJamsh View Post
                          Is the GBuffer sized and rendered for the whole screen, or duplicated, made smaller, and rendered for each eye independently? [...] You can't do any culling as such for each view (as far as I can tell), because the projection matrix is still the same for either viewport...
                          [MENTION=155]TheJamsh[/MENTION]

                          The manager will be responsible for creating a mask for each viewport. This mask can be fed into the pre-pass rendering, and even though each viewport is a rectangle, only the unmasked pixels will pass.
                          Doing this for each viewport fills the final render target with the correct viewports without them overwriting each other.

                          The GBuffers are sized for the whole screen AFAIK, and during rendering each view sets its correct viewport.

                          No, each view has its own projection matrix.
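
                          In outline, the per-view loop could then look something like this (all the helper names here are hypothetical, not engine calls - it's just the shape of the idea):

                          Code:
                          // Conceptual outline only -- helpers are hypothetical, not UE4 API.
                          // One full-screen Voronoi mask; each view shades only its own pixels.
                          void RenderFrame(const TArray<FSceneView*>& Views)
                          {
                              // 1) Rasterize the Voronoi mask once into the stencil
                              //    buffer: stencil = index of the view owning each pixel.
                              WriteVoronoiMaskToStencil(Views); // hypothetical

                              for (int32 i = 0; i < Views.Num(); ++i)
                              {
                                  // 2) Stencil test "equal to i": the pre-pass and base
                                  //    pass skip every pixel this view doesn't own, even
                                  //    though its viewport rect is the whole screen.
                                  SetStencilTestEqual(i); // hypothetical

                                  // 3) Each view renders with its own projection matrix
                                  //    into the shared GBuffer; masked pixels are never
                                  //    shaded, so views can't overwrite each other.
                                  RenderDepthPrePass(*Views[i]); // hypothetical
                                  RenderBasePass(*Views[i]);     // hypothetical
                              }
                          }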



                            #14
                            Have you had any luck passing a mask to the pre-pass stage of the renderer, then? That's where I'm stuck, since I don't think it's possible. Creating the mask is easy.



                              #15
                              It shouldn't be a problem; I did something similar at work.

