[Odessey] Creating my own G-Buffer in UE4

There is a new feature in 4.9: Custom Depth with Custom Stencil. It exposes an additional buffer, called Custom Stencil, into which each static mesh can write a single byte (0-255). That value can then be read in a shader to mask props, for example to highlight different objects in different colours.
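That masking idea could conceptually work like this; the plain C++ below stands in for the post-process material logic (the palette, the stencil-to-category assignments, and the function name are my own illustration, not engine API):

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// Hypothetical illustration: map the per-mesh Custom Stencil byte (0-255)
// to an outline colour, the way a post-process material might.
struct Color { float r, g, b; };

Color HighlightFromStencil(uint8_t stencil) {
    // A small made-up palette; any stencil value outside it falls back to white.
    static const std::array<Color, 3> palette = {{
        {1.0f, 0.0f, 0.0f},  // stencil 1: enemies, red
        {0.0f, 1.0f, 0.0f},  // stencil 2: pickups, green
        {0.0f, 0.0f, 1.0f},  // stencil 3: quest props, blue
    }};
    if (stencil >= 1 && stencil <= palette.size())
        return palette[stencil - 1];
    return {1.0f, 1.0f, 1.0f};
}
```

In the engine itself the same lookup would live in a post-process material reading the SceneTexture's Custom Stencil value per pixel.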

Are there any benefits to the CustomTagBuffer method? I understand that it lets materials define their own value at different places across a mesh, as opposed to one value for the entire mesh, but I can’t quite think if it’d have enough advantages over the Custom Stencil method.

Hello Dec1234!

Yes, I was also very happy to see that they added the custom stencil buffer!

It seems to be enough at first glance, but it is unfortunately quite limited in what you can do with it. As far as I can see, it only supports one byte of resolution, as you said, which makes some applications impossible.
Off the top of my head, some other things it cannot do include:

  • Pick buffers (using a custom buffer for mouse selection for example)
  • Complex techniques that require per-vertex or other regional output
  • Scenarios where you need more than 256 unique values
  • Smooth gradients

“but I can’t quite think if it’d have enough advantages over the Custom Stencil method.”
Well, it’s less about one vs. the other and more about what is enough for a particular method or algorithm. The project I’m working on right now, for example, will not work with only the stencil buffer.

Most of the techniques in the bullet points above could admittedly be achieved simply by extending the pixel format of the stencil buffer, but I’m also treating this side project as an intro to adding other buffers in general and to how the renderer works.
The problem with this is that the stencil is actually part of the depth buffer! They basically changed the depth buffer from what I assume was a 32-bit format to a combined 24-bit depth / 8-bit stencil buffer. So if you just changed that implementation, you would get a lower-precision/range depth buffer. I hope you see my dilemma :slight_smile:
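To make the tradeoff concrete, here is a rough sketch of that combined packing in plain C++ (one possible bit convention; the actual order and encoding vary by API and hardware):

```cpp
#include <cassert>
#include <cstdint>

// Sketch of a D24S8-style layout: 24 bits of depth packed together with
// 8 bits of stencil into one 32-bit value.
uint32_t PackD24S8(float depth01, uint8_t stencil) {
    // Quantise [0,1] depth to 24 bits; this quantisation is where the
    // precision loss relative to a full 32-bit depth buffer comes from.
    uint32_t d = static_cast<uint32_t>(depth01 * 0xFFFFFF);
    return (d << 8) | stencil;
}

float UnpackDepth(uint32_t packed) {
    return static_cast<float>(packed >> 8) / 0xFFFFFF;
}

uint8_t UnpackStencil(uint32_t packed) {
    return static_cast<uint8_t>(packed & 0xFF);
}
```

The round trip through `PackD24S8`/`UnpackDepth` loses the bits below the 24-bit quantisation step, which is exactly the "lower precision/range" problem described above.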

If you just want differently colored outlines though, the stencil buffer will be enough for most scenarios :slight_smile:


Just for the fun of it, I’m fiddling a bit right now with getting the depth buffer written to a 64-bit pixel format (32-bit depth, 32-bit stencil). That might partially solve my problem, but I’m guessing I’d have to change quite a lot of shader code for that. But then again, maybe not. I’ll probably fiddle with it this evening. If I get it working I’ll be sure to post :smiley:

Best regards,

Ah yes, some very good points and ideas for me to think about. I was mostly asking for personal reasons, to see if there was anything specific that would make the Custom Stencil method not work for me. I think I can use some fancy math to get a few different outline/glow/highlight elements into a single shader; I’m not looking for anything fancier than that for the time being.

Also, I did notice the lower quality when looking at the buffer visualisation for the Custom Stencil, but did not see any issues when I tested the Custom Depth. I’m sure there are some precision issues if they have split the depth up. I have yet to take a look at the source in 4.9 and see how it’s done.

Thanks for the reply!

I’ve been reading through the DX11 spec, fiddled, and even googled a bit to see if anyone has tried this, but I don’t think it’s possible.
I haven’t played around with stencil buffers in years, but the formats are basically still the same now as the ones from 2002 xD.

I was most hopeful about the 64bit format DXGI_FORMAT_R32G8X24_TYPELESS, and it might still be possible with that format somehow, but there is no trivial solution as far as I can see.
It seems it is intended just for trading performance for a 32-bit depth buffer and, yet again, an 8-bit stencil; the last 24 bits are unused. I suppose the reason we don’t have 16-bit or 32-bit stencil buffers is either driver-level or silicon-level :slight_smile:
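For reference, that layout could be pictured like this (a plain C++ illustration of my reading of the format on a little-endian machine, not how the driver actually stores texels):

```cpp
#include <cassert>
#include <cstdint>

// Layout sketch of DXGI_FORMAT_R32G8X24_TYPELESS as described above:
// 32 bits of depth (R32), 8 bits of stencil (G8), and 24 unused padding
// bits (X24), for a 64-bit-per-pixel footprint.
struct D32S8X24Texel {
    float    depth;        // R32: full 32-bit float depth
    uint32_t stencil_pad;  // G8 + X24: stencil in the low 8 bits, rest unused
};

static_assert(sizeof(D32S8X24Texel) == 8, "64 bits per pixel");

// Extract the 8-bit stencil; the upper 24 bits are dead weight, which is
// the wasted space lamented above.
uint8_t StencilOf(const D32S8X24Texel& t) {
    return static_cast<uint8_t>(t.stencil_pad & 0xFF);
}
```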

Anyways, it seems like I will have to use my tag buffer after all.

If anyone knows how one might coax that format, or another format, into supporting 32-bit stencils, I would love to know. Until then, best regards,

Hi,

I was attempting to use your patch with 4.9 Preview 4 and it did not apply. Can you verify that it works? I’d like to try it!

Hello iFire!

I just tried fetching 4.9 and reapplied my source patch to it, and it works fine for me.
What problems are you having?

I pushed a new branch for preview 4 if you want to try that one:
https://.com//UnrealEngine/tree/4.9CustomTagBufferPreview4

Otherwise you can just get the latest 4.9 and cherry-pick my commit e00a3c35d176a6d882b422a43f0c84cee89fddc8 (easiest with format-patch / am so you don’t get origin errors).
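For anyone unfamiliar with that workflow, here is a generic, self-contained demonstration of format-patch / am using two throwaway scratch repos (every path, file, and commit below is made up purely for illustration; substitute my fork and the commit hash above in practice):

```shell
set -e
tmp=$(mktemp -d)

# Stand-in for the fork: a repo with a custom commit on top.
git init -q "$tmp/fork" && cd "$tmp/fork"
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m base
echo "custom tag buffer" > TagBuffer.txt
git add TagBuffer.txt
git -c user.email=a@b -c user.name=a commit -q -m "Add custom tag buffer"

# Export the top commit as a standalone patch file...
git format-patch -1 HEAD -o "$tmp"

# ...and apply it onto a fresh checkout with git am, so no remote/origin
# is ever involved.
git init -q "$tmp/engine" && cd "$tmp/engine"
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m base
git -c user.email=a@b -c user.name=a am "$tmp"/0001-*.patch
cat TagBuffer.txt
```

Because `git am` consumes a plain patch file, the receiving checkout never needs the fork configured as a remote, which is what avoids the origin errors mentioned above.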

Best regards,

Sorry, this may be off-topic, but how would you do the masking in the material for the Custom Stencil? Thanks!

Great work and research in this post.

Any ideas on how to create a true (RG/GB) backwards/forwards motion-vector (velocity pass) buffer with what you learned?

Hi everyone,
Thanks a lot for sharing your work. I currently have a problem, and I feel like the solution would be to create a new GBuffer and use it not in post process but in the light pass in Unreal. I would need to modify the light shaders so that they use the new GBuffer. Is that possible? Or is a new GBuffer only accessible in post process?
I would like to do that because I’m working on a project where 2 different worlds coexist in the same spot, but you only see one world around you. You can open sphere-shaped portals in some areas of the world, and inside a sphere you see the other world. To display one world outside the sphere and the other inside it, every object has a material that knows which world it belongs to and uses an opacity mask (0 or 1) to hide or show it.
But now I’m facing a problem. I would like to use Unreal’s amazing lights, but I want each light to belong to one world. That’s why I want to create a GBuffer recording which pixel of the screen belongs to which world, and then apply lights from the first world only where you see the first world, and lights from the second world only where you see the second world.
Do you think your GBuffer could help in my situation? From what you understood of the engine while making the custom color tag buffer, do you think what I am saying makes sense?
Thank you for your help.

For portal-type stuff it may be more effective to create cameras and render targets for each portal target. This is far simpler than using a new custom gbuffer.

If you’re after the lighting solution, you can probably take postprocess0 and subtract the base colour buffer from it, although I just tried it and it looks a little rough.

Thank you for your answer. The portals I am making are not flat and don’t teleport you somewhere else. They are spherical, and the 2 worlds are physically placed in the same spot in the scene (if you displayed all the objects at once, you would see broken stairs (world 1) in the same place as new stairs (world 2)). I don’t really see how render targets can help me here. I can also enter a portal by walking into the sphere (meaning I interact with the other world), but I never teleport; I don’t need to, because the 2 worlds are in the same exact place.
I’m not sure I understand the post-process solution you are suggesting. So I use a post-process material, take the output color from all the passes, and subtract the base color from it. Do I end up with the light color only? Then I add the light color back only where I want? If that is what you are suggesting, I’ll run into another problem: I need specific lights to affect world 1 only and other lights to affect world 2 only, and I don’t really know how to deal with that.

Have you seen the Heroes game? It’s done in UE4 and has two worlds occupying the same space, sort of. You can see between them and switch between them as necessary. It’s a bit hacky though.

What you’re talking about aren’t really portals in the sense that everyone understands them to be now. I don’t think that any of the suggestions to date will help you.

Can you send me a link to a video/website to that game please?

http://lmgtfy.com/?q=heroes+video+game

Thank you, I found a video of it on YouTube, and at 0:34 we can see the worlds swapping. I think it can be done with a trick: keep the last rendered frame from the old world and start rendering the new world in gradually, depending on object depth.
What I need is to have the 2 worlds rendered at the same time; through the spheres, which you see in their entirety, you see the other world, and you can step inside them.
Thank you for sharing that game though, it’s an interesting “portal”/“world swap power” example.

Yeah, I wasn’t sure if it was exactly what you’re after. I think the concept of what you’re trying to do is tricky: can you sketch something up demonstrating what you mean?

I’m not sure what you mean by a sphere, but I imagine you’re saying you want a broken level and a fixed level overlapping. Do they NEED to physically overlap?

Do both contribute to collisions, or can just one do? How will you solve polygons in the same place overlapping and z-fighting?

What is the mechanism by which each world appears? Are they switchable? If not, isn’t that just a single static level?

If it’s a switchable visual effect and you can store all of the collisions you need in a single world, you could do two separate levels with a matching render target in the secondary world whose movements match the player camera.

Then in a post-process layer you could draw the second world’s render target and filter by depth so that occlusion is maintained properly. Then you can handle effects on each render target individually. That’s exactly what heroes does btw, except it also has a mechanism to switch the main camera between worlds and it doesn’t try to z-sort pixels based on depth.
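The depth-filter step described above could be sketched like this (plain C++ with made-up buffer types, just to show the per-pixel test; not engine code):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of the depth-filtered composite: for each pixel, show the
// secondary world's render target only where its surface is nearer to
// the camera than the primary world's, so occlusion between the two
// worlds is maintained.
struct Pixel { float r, g, b; };

void CompositeWorlds(std::vector<Pixel>& primaryColor,
                     const std::vector<float>& primaryDepth,
                     const std::vector<Pixel>& secondColor,
                     const std::vector<float>& secondDepth) {
    for (size_t i = 0; i < primaryColor.size(); ++i) {
        // Keep whichever world's surface is closest to the camera.
        if (secondDepth[i] < primaryDepth[i])
            primaryColor[i] = secondColor[i];
    }
}
```

In the engine this comparison would live in a post-process material, comparing scene depth against a depth channel captured alongside the second world's render target.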

The downside here is you’re rendering things in two passes, so you’ll have to work that into your frame budget. I might try to do a quick mockup for you since it sounds like fun.

Here is what I drew for you.
In world 1 there is a boat and the sea, in world 2 let’s say the sea dried out and it’s a desert.
I didn’t draw the character.
The character can move freely through the portal, get in the boat, swim in the sea, and climb on the rock that’s in the boat.
You collide with what you see. You can switch the worlds.
Here are 2 versions, to show the switch.


This picture is the world 1 outside the portal and the world 2 inside the portal. You see some kind of desert rock in the portal.

This picture is the world 2 outside the portal and the world 1 inside the portal. You see the middle part of the boat inside the portal.

“I’m not sure what you mean by a sphere, but I imagine you’re saying you want a broken level and a fixed level overlapping.”
I want 2 full levels, because I can switch between them.

“Do they NEED to physically overlap?”
I think it is easier for level editing to make sure things are connected between the 2 worlds when a portal is open (it’s badly shown here, but I want the boat and the rock to be at the same spot, so that the player doesn’t drop when they switch worlds and can walk from the boat to the rock without falling in between when crossing the portal’s border).

“Do both contribute to collisions, or can just one do? How will you solve polygons in the same place overlapping and z-fighting?”
Both contribute to collisions; anything displayed collides. But I only display one world at any given position: the middle of the boat won’t collide while it is hidden, and the rocks will collide instead because they are displayed. So no polygons overlap.

“What is the mechanism by which each world appears? Are they switchable? If not, isn’t that just a single static level?”
When you open a portal, it grows from a tiny dot to a sphere the size of the portal (the size will depend on the needs; the portals in the drawings are fully open). For now, portals are shrines that open when the player presses the action button; pressing it again switches the worlds.

What Heroes does sounds interesting. I’m going to read what you wrote a few more times to try to understand better what you meant.
Another thing: my game is third-person view, so I guess I need a dummy player in the other world as well if I choose that solution, and it would need to match exactly what my actual player is doing, because the post-process pass you are talking about draws over the first world, right?

Thanks a lot for helping :slight_smile: I feel super motivated!

You want a solution exactly like how heroes does it. It has two separate worlds set in different time periods of the same environment. You can press a key to throw up a rendertarget-based portal and another key to switch location between worlds. I’d go buy a copy of it as it’s quite cheap and you’ll get a chance to play with it. They do collision tests at either end and show a black portal if you’re inside a model, etc.

The Heroes portal is linked to your view, but it doesn’t have to be.

So ideally you’d have a 2d capture component that matched your player camera movements. Then you could project that texture on to portals, etc. I’ll try that next since I’ve been meaning to for a while.

The first thing I decided to try though was a cube capture component, giving you a portal a bit like the artefact in Sphere:

So in this setup the portal is like a metal ball reflecting the other location.

If I switch to a 2d capture component it’s going to look like a hole in space instead, e.g. like Portal the game. Trying that now.

Here’s the Portal style portals:

Both look good to be honest. I’ll zip up the level and upload it in a moment.