Filling the GBuffer from two different view locations, with a stencil mask?

I’m trying to create a system for angled split-screen, and I believe I can leverage the engine’s existing systems to my advantage. I’m hoping I might even be able to benefit from Instanced Stereo Rendering. The game is top-down, and I have two players in the world. As the distance between the two pawns grows, the single view-camera moves up higher in order to capture both players. At a certain point, the camera splits off into two views that follow the players around, until they come close enough that they can be seen from one camera again.

I have this working with a Render Target. However, RTs have some significant issues. First of all, I have to render the entire scene twice. The second issue is that Anti-Aliasing doesn’t work for Render Targets at all. This poses a huge problem for my game, since running without AA is simply not possible. To better explain it, here’s a video of it in action. This DOES NOT show the missing-AA issues very well - but in a more complex scene they are extremely visible, so this system will not be shippable. Bloom is screwed, the view is incredibly aliased, and the flickering is too intense. It’s also far too slow for consoles, even in a simple scene.

I now want to try and change this system so that, instead of using a render target, I actually fill the GBuffer from the two different viewpoints. Although I am certain there will be some artefacts from screen-space processing around the split area, some AA with small artefacts is much better than none at all. I can also always try switching to the forward renderer and using MSAA instead in the future. It should also be much, much faster - because the current solution is way too slow.


First however, I need to get to the stage of filling the GBuffer. What I want to do is generate a render target that will essentially be the “Mask” between the two zones in screen-space - essentially a stencil. This should be easy enough; if nothing else, I can do it with DrawMaterialToRenderTarget and a material.
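For the mask generation itself, something like this would probably do (a rough sketch - the function, material and render target here are just placeholders I’d wire up myself, the only real API call is DrawMaterialToRenderTarget):

#include "Kismet/KismetRenderingLibrary.h"
#include "Engine/TextureRenderTarget2D.h"
#include "Materials/MaterialInterface.h"

// Rough sketch: redraw the "split mask" into a render target whenever the split changes.
// The material (placeholder) would output, say, black on player 1's side of the split
// and white on player 2's side, driven by material parameters for the split position/angle.
void UpdateSplitMask(UObject* WorldContext,
                     UTextureRenderTarget2D* MaskRenderTarget,
                     UMaterialInterface* MaskMaterial)
{
    if (MaskRenderTarget && MaskMaterial)
    {
        UKismetRenderingLibrary::DrawMaterialToRenderTarget(WorldContext, MaskRenderTarget, MaskMaterial);
    }
}
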

What I then want to do is send that texture to the Renderer, and the GBuffer will encode each pixel from a different viewpoint depending on the colour of the render target. As far as I can tell, this needs to be done in two passes, since everything in the scene needs to be transformed from one viewpoint to the other. From what I can tell, HMDs do basically exactly what I want, except that they draw two rectangular chunks instead of two arbitrarily shaped quads.

So my question is - has anybody done this at all, or does anybody know how I could leverage the HMD system so that I can create an angled split-screen display instead of two side-by-side viewports? The angled split is crucial to the game’s design - I really don’t want to have to revert to regular split-screen.

I’ve looked at ISceneViewExtension but have trouble understanding what it’s actually doing. Additionally, I’ve studied DeferredRenderingCommon.usf and in particular the ‘EncodeGBuffer’ function - but I can’t work out how to make it read from a texture and choose whether to skip a pixel or render it based on a texture lookup. I’m hoping that by adding that into the Renderer, I’ll skip a huge portion of render time by not calculating the final colour of unused pixels.
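For reference, from my reading of SceneViewExtension.h the interface exposes hooks roughly like the skeleton below (4.1x-era signatures, so double-check against your engine version - there may be a few extra hooks to override). The HMD plugins use these to inject their per-eye view setup, which is why I thought it might be the right place to hang a second viewpoint off:

#include "SceneViewExtension.h"

// Minimal skeleton of a view extension (the class name is made up).
class FAngledSplitViewExtension : public ISceneViewExtension
{
public:
    // Game thread: called while the view family / each view is being set up.
    virtual void SetupViewFamily(FSceneViewFamily& InViewFamily) override {}
    virtual void SetupView(FSceneViewFamily& InViewFamily, FSceneView& InView) override {}
    virtual void BeginRenderViewFamily(FSceneViewFamily& InViewFamily) override {}

    // Render thread: last chance to touch the views or enqueue RHI work before the frame renders.
    virtual void PreRenderViewFamily_RenderThread(FRHICommandListImmediate& RHICmdList, FSceneViewFamily& InViewFamily) override {}
    virtual void PreRenderView_RenderThread(FRHICommandListImmediate& RHICmdList, FSceneView& InView) override {}
};

// Registration in this engine era is (I believe) via the engine's extension list:
// GEngine->ViewExtensions.Add(MakeShareable(new FAngledSplitViewExtension()));
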

Anyone able to provide any help or pointers?

On AnswerHub I found only one answer, by Wright, that explains usage of stencil.

It relates to recursive mirror/portal rendering, but maybe it will be helpful for you.

Thanks for the find, doesn’t seem to be much info in there unfortunately though.

That’s about all the info there is about stencil in the community ))

I’ve messaged some of the key graphics guys. I know GalaxyMan played around with Stencil Buffer rendering a while ago but lost his source code.

I can pay for someone to give me a hand with it, but I’d like to understand how it’s done. There are plenty of use-cases for something like this, the most obvious one that springs to mind is portals etc. (but I’m not interested in that right now).

Hi Jamsh.

Did you figure out how to work with the stencil buffer?

I did some small investigation into it, and for now I think it requires creating a custom renderer and passing a custom RHI command list to it.

I haven’t had any luck so far unfortunately; every path I follow leads to a dead end or hits the limit of my knowledge of the renderer :stuck_out_tongue:

I have however put out a request for a Graphics Programmer as a bit of paid work. Naturally if I can get someone to do this, I will share with the community too.

Hi,
While investigating this problem my first guess was also to try stencil. It’s exactly what the stencil buffer is made for, after all.
But as usual, the graphics pipeline in Unreal is quite limiting. After my tests it seems that stencil in Unreal cannot be used as an actual stencil (I mean like this: Learn OpenGL, extensive tutorial resource for learning Modern OpenGL ) but only as a sort of area buffer (which is quite strange IMO…)
The best I could get was an object silhouette through transparent objects.
So to get the effect I will need to go into Unreal’s geometry pass code and add a way to cut the image.
I can see two strategies:

  1. write an actual stencil cut feature
  2. add an “if () { discard; }” somewhere in the shader code
    I tend to prefer the second approach… (less code, more contained, easier to test, less Unreal stuff to understand, etc.)

Today I dived into the Unreal code to make it stop rendering part of the screen, and that was pretty easy:
I opened “BasePassPixelShader.usf”, spent some time understanding what Epic did, and ended up adding this line:


if ((MaterialParameters.ScreenPosition.xy).x > 0.5f) { discard; }

at line 579,
and as expected, half of the screen no longer renders meshes.
I was a bit worried about how well the keyword “discard” would pass through the cross compiler, but I tried on Linux and the USF → GLSL cross compilation worked without any problems, warnings or anything.
I was also worried that the lack of information would break a lot of things, but it appears the Epic guys built the rendering hierarchy in a smart way (or I’m just lucky).
The next step will be to call “discard” according to some user input (a simple float, keycode, texture, etc.), and this looks a bit harder than expected, because it seems this shader wasn’t really designed to take parameters from anywhere…
PS: I don’t really understand everything (yet) about how the C++ side brings parameters to shaders in Unreal… I hope the OpenGL RHI calls the “glUniform*” functions; that would be a great place to start.
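From what I can tell so far, the pattern Epic’s own standalone shaders use (ScreenRendering, StereoLayerRendering, etc.) looks roughly like the sketch below: bind a named parameter against the parameter map, then push values/textures with the SetShaderValue / SetTextureParameter helpers. All the names here (FDiscardMaskPS, DiscardMaskTexture) are made up for illustration, and I still don’t know whether the base pass material shaders can take parameters this way or whether it has to go through the View uniform buffer instead:

#include "GlobalShader.h"
#include "ShaderParameterUtils.h"

// Illustrative global pixel shader exposing a mask texture parameter.
// "DiscardMaskTexture"/"DiscardMaskSampler" must match names declared in the .usf,
// and the shader still needs an IMPLEMENT_SHADER_TYPE() line pointing at that file.
class FDiscardMaskPS : public FGlobalShader
{
    DECLARE_SHADER_TYPE(FDiscardMaskPS, Global);
public:
    FDiscardMaskPS() {}
    FDiscardMaskPS(const ShaderMetaType::CompiledShaderInitializerType& Initializer)
        : FGlobalShader(Initializer)
    {
        DiscardMaskTexture.Bind(Initializer.ParameterMap, TEXT("DiscardMaskTexture"));
        DiscardMaskSampler.Bind(Initializer.ParameterMap, TEXT("DiscardMaskSampler"));
    }

    void SetMask(FRHICommandList& RHICmdList, FTextureRHIParamRef InMask)
    {
        // Pushes the texture plus a bilinear sampler to the bound slots for this draw.
        SetTextureParameter(RHICmdList, GetPixelShader(), DiscardMaskTexture, DiscardMaskSampler,
            TStaticSamplerState<SF_Bilinear>::GetRHI(), InMask);
    }

    virtual bool Serialize(FArchive& Ar) override
    {
        bool bShaderHasOutdatedParameters = FGlobalShader::Serialize(Ar);
        Ar << DiscardMaskTexture << DiscardMaskSampler;
        return bShaderHasOutdatedParameters;
    }

private:
    FShaderResourceParameter DiscardMaskTexture;
    FShaderResourceParameter DiscardMaskSampler;
};
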

We need @DanielW and @RyanB in here! Writing a Stencil Cut Feature would definitely be the most flexible way to go, but I guess the other half of the problem is using that mask to work out which camera to fill the GBuffer from… It’s a shame we can’t use the actual stencil buffer, since in theory if we used that in the “masked” mode we could get up to 255 different cameras on screen!

@NeWincpp It might be worth looking at the HMD Devices (VR Rendering) - since they essentially support writing to the GBuffer from two different locations, based on a mask (I believe). Perhaps going down that route could also allow us to benefit from things like instanced stereo rendering etc.

I don’t know if this is useful to you btw:

Stencil is used for per-pixel culling in various engine passes so it’s not available for general purpose masking.

The lack of Temporal AA in scene captures can be fixed and in fact I believe it has already been fixed in main (future 4.17).

Doing two separate scene captures for the views is always going to be slower than splitscreen though. If I were working on this, I’d be tempted to try this approach:

  1. Fix Temporal AA in scene captures
  2. Set up a scene capture with regular splitscreen, where each screen encapsulates the visible area of your angled splits. The scene capture output size will be larger than the final screen size.
  3. In the main scene, skip all rendering and just composite the regular splitscreen texture into the angled splitscreen in a post process material (rough hookup sketch below).
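For step 3, the game-side hookup could be as simple as something like this (untested sketch; the material and parameter names are placeholders, and the actual angled-split math would live inside the post process material itself):

#include "Camera/CameraComponent.h"
#include "Materials/MaterialInterface.h"
#include "Materials/MaterialInstanceDynamic.h"
#include "Engine/TextureRenderTarget2D.h"

// Bind the splitscreen capture to a Post Process domain material and blend it on the
// player camera. Assumes the material has a TextureSampleParameter2D named "SplitCapture".
void SetupAngledComposite(UCameraComponent* Camera,
                          UMaterialInterface* CompositeMaterial,
                          UTextureRenderTarget2D* SplitCapture)
{
    UMaterialInstanceDynamic* MID = UMaterialInstanceDynamic::Create(CompositeMaterial, Camera);
    MID->SetTextureParameterValue(TEXT("SplitCapture"), SplitCapture);

    // Weight 1.0 so the material fully takes over the output for the pixels it touches.
    Camera->PostProcessSettings.AddBlendable(MID, 1.0f);
}
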

I haven’t actually tried this, so I’m not sure where it would fall down. It would end up shading unseen pixels in the scene capture, but at least you get to use splitscreen rendering instead of two separate scene captures. Of course this requires some code changes; any solution will, as angled split screen is not supported out of the box. It looks cool though, hope you get it working!

After thinking a lot about this I think I have a good strategy:
“discard” cancels the write to the pixel where it is called, so my idea is to loop over the viewpoints with a different “discard mask” on the very same GBuffer. In pseudo-code:


void Tick(float deltaTime) {
  for (auto p : players) {
    discardMask.set(p.mask);
    VirtualCamera.SetPos(p.CameraPos);
    FillGBufferOnlyWithDiscard(discardMask);
  }
}

pro:

  • This is cheap in memory (even 0 if I find an unused bit somewhere)
  • Because it doesn’t use an RT, we shouldn’t have problems with motion blur and TAA.
  • Players are limited to one per pixel in theory (in practice it will be limited to the number of rasterisations/drawcalls possible in 16 or 30 ms)

con:

  • Not cheap on the CPU (drawcalls are multiplied by the number of players, and instanced rendering won’t work without losing most of the pros)
  • The GPU rasteriser can become a bottleneck (if you are on PC/PS4 I think you will run out of controllers before it becomes a problem)

My biggest unknown is whether it’s possible to ask for multiple draws into the same GBuffer from the game side.

VR is basically splitscreen, which uses viewports, and viewports can only be rectangles (if you are interested there is a good introduction about that here )

Ahh interesting. I mean if it only scales to 2 players right now, that works for my implementation! The interesting part though would be scaling it to other things. I’ve seen so many requests for things like stenciled portals and rendering objects with different clip planes etc…

Yeah I wasn’t sure about VR, I did look at the source but it seems very hardcoded to work with two vertical viewports. DanielW’s solution seems interesting. I believe what he’s suggesting is to avoid using RTs altogether, hijack the split-screen viewport camera / scene capture, and composite the image in a post-process material (not far off what I’m doing right now). I imagine that if Render Targets can in fact support TAA (apparently there’s a fix for that in the Main branch, I’m gonna take a look), then Motion Blur etc. should also work. I believe they both rely on having those per-pixel motion vectors…

@DanielW thanks for the response! I’ll see if I can find the commit for motion vectors in render targets too. Any idea what to search for or where it may be?

I’m currently blending the render target and final scene output in a Post-Process Material like you said - so maybe I can hijack the split-screen system and resize the render targets to the split size each frame. Mind you at that point allocation / RAM speed might become the bottleneck with all that resizing…
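Something like this is what I had in mind for the resize, though as noted it reallocates the target, so it probably wants to run only when the split size actually changes (names are just illustrative):

#include "Engine/TextureRenderTarget2D.h"

// Resize the capture target to the current split size, but only when it changed,
// to avoid reallocating the render target resource every frame.
void ResizeCaptureTarget(UTextureRenderTarget2D* Target, int32 SplitWidth, int32 SplitHeight)
{
    if (Target && (Target->SizeX != SplitWidth || Target->SizeY != SplitHeight))
    {
        Target->ResizeTarget(SplitWidth, SplitHeight);
    }
}
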

Btw, DarkMaus is awesome!

I’m back on this work…
This is my first patch to Unreal, so I’ve spent a few days just reading the code to understand what interacts with what and how… And I need some help.
I don’t understand how UCameraComponent interacts with the BasePass.
I’ve read a lot of files like ScreenRendering.h, SceneCore.h/cpp and ActorComponent.h, but I can’t find any way to add a Texture (or a RenderTexture) that a user can see and modify on a UCameraComponent and that gets read in BasePassPixelShader.usf

It’s like the scene logic (the Actor/Component stuff) is 100% separated from the renderer (the RHI/Shader stuff).

I just wanna call glBindTexture ;__; please help.

PS: One texture per player will be a tiny bit heavier on the CPU on OpenGL and DX11 because of the state changes, and less flexible, but a RenderTexture will burn the bandwidth that you want to save.

Split screen is per-view rendering into the same scene render targets.
You can allocate scene render targets per view instead of per scene.
Although you render to different targets, you still do it in one deferred renderer pass.
The major disadvantage is that you will need to switch render targets a lot between drawcalls.

Look for FViewUniformShaderParameters; I don’t remember the exact file it is declared in, but it’s somewhere around SceneRendering.h.

That helped, but I still don’t see how I can reach the FSceneView stuff from UCameraComponent.
I found one access path:
UCameraComponent -> UDrawFrustumComponent -> FPrimitiveSceneProxy (via ::CreateSceneProxy()) -> GetDynamicMeshElements uses FSceneView.
But this “DrawFrustum” looks like a debug primitive or something like that, so it doesn’t look very safe.
In addition, I expect Unreal does something like getting the active camera’s properties and setting up the scene view with them.

I wonder if I should make this link myself, using stuff like
“FVertexShaderRHIParamRef VS = GetVertexShader();”
then
“SetShaderValue(RHICmdList, VS, InViewProjection, ViewProjection);”
like is done in StereoLayerRendering.
That would be a start for a POC of the “discard mask”.

Or whether I should find how Unreal gets the active camera component and sets up the scene view with it… which would obviously be better, if I could find where it does that.

Even “grep SceneView.h * -R” or “grep UCameraComponent * -R” didn’t give me any good results =/

Any luck @NeWincpp ? Sorry I wasn’t notified of replies to the thread for some reason.

@TheJamsh Well, I found and understood a lot about Unreal’s renderer, but the engine is too big for me. I have less than two years of professional experience and I’m completely on my own (i.e. I can’t ask anyone questions when I don’t understand something), so this task is very hard for me.
My status now is:
I understand how the base pass shader is called.
I understand how objects are rendered and controlled from the game logic (minus some weird functions that almost look like duplicates).
I understand that the modification should go somewhere in one of the viewport code paths (but I don’t know which one).
I understand that the camera logic doesn’t really exist in the render hierarchy (as I expected); the viewport does everything on its own.

I do not understand where the viewport gets the UCameraComponent data, or a proxy for it.

And I had some important work for my own company, so I had to put this on pause for a few days… So I could either write up a full detailed report and let you finish this, or finish it myself with some more “direct” help (like a private channel on IRC or Discord).
I seriously didn’t expect that understanding PBR with raymarching for my own 8k demoscene project would be easier than making this patch. That is VERY frustrating =/

Alright so, an update on this - though it’s not the one I was hoping to give. I’m about ready to give up on this tbh… but it would suck if I had to.

In 4.17, support has been added to Scene Capture Actors to allow them to add motion blur and temporal AA into the render target. Unfortunately, this only seems to work with Final Color (LDR) mode, and it still doesn’t seem to give the same result as the Regular Viewport.

Now I have been tweaking settings for what feels like an eternity, trying to get the colours to match using LDR mode. I was wondering if @DanielW had any insight on this, because in LDR mode nothing I try seems to be able to get the same result.

When using the Scene Color HDR capture mode, the colour space is perfect (but note the lack of any anti-aliasing and motion blur in the second viewport).

When using the Final Color LDR mode, I get anti-aliasing and motion blur - but the image is way too dark and, for some reason, VERY blurry.

I feel like I’ve tried every setting imaginable, including changing the Post-Process blend mode of the Material to Before Tonemapping and After Tonemapping.



			// Create the Post-Process blend-able for output
			VoronoiBlendable = UMaterialInstanceDynamic::Create(VoronoiBlendableAsset, this);
			ASSERTV(VoronoiBlendable != nullptr, TEXT("Invalid Voronoi Blendable DMI"));

			// Create the Capture Target for the Second View Target.
			// We have to use the world as the outer, otherwise it won't render
			VoronoiCapture = NewObject<USceneCaptureComponent2D>(MyWorld, USceneCaptureComponent2D::StaticClass());
			ASSERTV(VoronoiCapture != nullptr, TEXT("Invalid Voronoi Capture Component"));
			VoronoiRenderTarget = NewObject<UTextureRenderTarget2D>(this, UTextureRenderTarget2D::StaticClass());
			ASSERTV(VoronoiRenderTarget != nullptr, TEXT("Invalid Voronoi Target"));

			VoronoiRenderTarget->bHDR = false;
			VoronoiRenderTarget->ClearColor = FLinearColor::Black;
			VoronoiRenderTarget->InitCustomFormat(ResX, ResY, EPixelFormat::PF_FloatRGB, false);
			//VoronoiRenderTarget->bForceLinearGamma = true;
			VoronoiRenderTarget->TargetGamma = 1.3f;                          // try to match scene gamma? IDK anymore...

			VoronoiCapture->TextureTarget = VoronoiRenderTarget;
			VoronoiCapture->bCaptureEveryFrame = false;
			VoronoiCapture->bCaptureOnMovement = false;
			VoronoiCapture->CaptureSource = ESceneCaptureSource::SCS_FinalColorLDR;

			// Well this is stupid, FEngineShowFlagsSetting doesn't have a suitable constructor -.-
			FEngineShowFlagsSetting Setting1 = FEngineShowFlagsSetting();
			Setting1.ShowFlagName = TEXT("TemporalAA");
			Setting1.Enabled = true;

			FEngineShowFlagsSetting Setting2 = FEngineShowFlagsSetting();
			Setting2.ShowFlagName = TEXT("MotionBlur");
			Setting2.Enabled = true;

			VoronoiCapture->ShowFlagSettings.Add(Setting1);
			VoronoiCapture->ShowFlagSettings.Add(Setting2);

			VoronoiCapture->SetWorldLocationAndRotation(FVector::ZeroVector, OrbCameraRotation, false, nullptr, ETeleportType::None);
			VoronoiCapture->FOVAngle = DefaultFOV;


Is there something I’m missing here or is this still effectively impossible?

What I’m saying is probably dumb, but have you tried using the forward renderer instead of the deferred one? I don’t know how it works internally in Unreal, but in deferred rendering most effects are applied from the GBuffer textures (and it seems that RTT is your problem), so maybe the forward pipeline will help. In addition, with forward rendering you should be able to enable hardware MSAA, so even if you don’t get the motion blur, at least you will get your AA.
(and maybe use this cool effect to replace the motion blur: GitHub - LuggLD/SmearFrame: Unreal Engine 4 smear frame material effect ¯_(ツ)_/¯ )
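If you want to try it, I think the switch is just a couple of project settings / console variables (going from memory, so double-check the exact names in Project Settings → Rendering):

; DefaultEngine.ini (assumption: 4.1x-era settings for forward shading + MSAA)
[/Script/Engine.RendererSettings]
r.ForwardShading=1
; 3 = MSAA, which is only supported with the forward renderer
r.DefaultFeature.AntiAliasing=3
r.MSAACount=4
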

PS: I don’t know what I’m talking about here; I’m just proposing an idea that might be worth a try.