[Odyssey] Creating my own G-Buffer in UE4


You can find my fork at:
on the CustomTagBuffer branch (4.3)
and 4.4CustomTagBuffer (4.4)

My next adventure in UE4 will be in the rendering part of the code landscape. I’ve been fiddling around with post processing effects, in particular outlines. As such, I set my next goal: create a post process material that can draw outlines for arbitrary actors, with different colors for different actors in the same render. There are already some resources on how to achieve a simple post process outline material (one is in the content sample project, and I found another here:
Tutorial – Creating outline effect around objects | Unreal Engine 4 blog), but both only let you outline everything in the same color. Boooring!

I’ve seen suggestions that this is possible by attaching different post processing volumes, each with their own materials, to individual actors, although I can’t seem to find the forum thread right now. I have not been able to get this to work though, and as far as I understand the rendering design in UE4, it is not possible by design: although it might be possible to draw several passes, you cannot draw select parts of the screen with an arbitrary post process material. I have also not found any technique that lets me draw select meshes/actors with a given material to a render target.

So my current working idea is to solve the problem by creating a new G-buffer, much like the custom depth buffer, that gives developers the ability to render actors with a solid tag color (instead of the depth information that the custom depth buffer outputs). Sampling this tag color buffer in my post process material will then let me give each outline its own unique color.

The ideal solution would of course be a subsystem that lets you define new G-buffers from the editor layer, driven by components that you add to actors to draw into the buffer, with access from a new post process shader input node. I suspect that this would be far too ambitious for a first step though, so at the moment I’m in the process of duplicating (shiver) the complete pipeline for the custom depth shader and modifying it to support the new needs.
Ideally, at the end I would like to make a tutorial (or a fork) to let others leverage the code.

I imagine this is going to be a lot more difficult than it sounds though, so I welcome anyone who wants to help me unravel what needs to be done to make it work.

It is becoming clear that I won’t be able to make a tutorial out of this, as the code needed is very similar to the original code, and posting that code in a public forum / wiki would probably get me into legal trouble. The problem with making a fork instead is that I don’t think this feature is ubiquitous enough to be in the engine by default (or maybe it is, not sure yet). The better fork would of course be a generalized system. Ah well, I guess I’ll cross that bridge when I come to it.

Best regards,


If anyone is interested in helping out, I would recommend looking at these files:


And I needed to create duplicate files for the files that are specific for depth rendering, so I ended up with these new ones:

A new preview material needs to be created in:

These ini files also need to be updated to reference the new materials:

If you want to look at the original ones, just replace the “Tag” part with “Depth”.
Right now I’m trying to figure out the details of TagRendering.cpp (DepthRendering.cpp), as this seems to be the main meat of what needs to be customized.

I’ll update this list as I figure more stuff out.

Updated the file list. I don’t think it looks as complicated anymore. I’m surprised at how badly encapsulated this code is though; I’m having to update registrations etc. in all kinds of places. But careful use of grep and find & replace has actually worked fairly well so far. I’m now at the stage of providing visualization of the buffer in the scene view mode. Apparently a few ini files also need to be changed, as well as a preview material created. Finding out how the **** the inis are created in the first place (so they are correct if regenerated) is something I have yet to do. My next step will be to make the tag buffer selectable from the SceneTexture node in the material editor…

I found even more files that needed to be changed after doing a third sweep for CustomDepth / Depth related code, and added the files to the list…


I think I’m almost there now. Most of my code seems to be executing, albeit with small differences from the CustomDepthRenderer. In my CustomTagRendering.cpp class, the view relevance seems to be static, while in the depth rendering it is always dynamic. I hope that if I can figure out why my actors are static here, I will almost be there :slight_smile:

– Update –

The view relevance is a dead end. Both my drawing policy and the depth one keep switching between dynamic and static. I’m not sure of the significance of this though… The only thing the docs have to say about it is:
FPrimitiveViewRelevance is the information on what effects (and therefore passes) are relevant to the primitive. A primitive may have multiple elements with different relevance, so FPrimitiveViewRelevance is effectively a logical OR of all the element’s relevances. This means that a primitive can have both opaque and translucent relevance, or dynamic and static relevance; they are not mutually exclusive.

I’m still not sure why I’m not getting any output… Everything seems to run more or less identically to the custom depth pass. I think it might have to do either with the RenderPrePass function in DeferredShadingRenderer.cpp, or with the fact that the custom depth renderer is not the only code that does depth rendering… maybe the actual render for the custom depth renderer is done with a different policy (or in combination with a different one)?

I think I have to give up for today, but I still have some stuff to try I guess…

I’ve pushed my current work to my fork if you guys want to check it out :slight_smile:
You can find it on:
on the CustomTagBuffer branch

These files are in gitignore so you can’t get them from the repo, and unfortunately I don’t think I’m allowed to zip and post them for legal reasons:

But the shaders are basically copy-paste from the depth versions (with an extra parameter for tag color in the pixel shader), and CustomTags.uasset is copy-paste too, except you need to change the SceneTexture node in the uasset to point to the CustomTagBuffer instead (you can do this by copying it into a project, editing it like normal in the editor, then copying it back to the correct folder). In the ini files, you need to find the
[Engine.BufferVisualizationMaterials] section and paste this row:
CustomTags=(Material="/Engine/BufferVisualization/CustomTags.CustomTags", Name=LOCTEXT("BaseCustomTagsMat", "Custom Tags"))

under all of them, so the editor knows it can visualize the custom buffer with that material. This also makes it show up under the “Custom visualization” subcategory in the view mode combobox in the top left of the editor viewport.
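For reference, after the edit the section in each of those ini files might look something like this (the rows other than CustomTags are illustrative examples of existing entries; exact neighbours vary per file):

```ini
[Engine.BufferVisualizationMaterials]
BaseColor=(Material="/Engine/BufferVisualization/BaseColor.BaseColor", Name=LOCTEXT("BaseColorMat", "Base Color"))
CustomDepth=(Material="/Engine/BufferVisualization/CustomDepth.CustomDepth", Name=LOCTEXT("CustomDepthMat", "Custom Depth"))
CustomTags=(Material="/Engine/BufferVisualization/CustomTags.CustomTags", Name=LOCTEXT("BaseCustomTagsMat", "Custom Tags"))
```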

I’ll get back to debugging now, but hopefully this will be useful to someone :slight_smile:


After looking at my rendering code one more time, I realized that the wiring was more specific to depth rendering than I initially thought. Using the BasePassRendering policy as an additional reference, as well as doing more debugging, it seems that the pixel shader in the custom depth rendering is almost never run. This explains a lot of course, but it raises the question of how the rendering policy actually should be set up. I think I have a pretty good grasp of the general structure now, so rewriting the policy to always use my VS and PS will probably start getting me some output.

To clarify, I thought I would be able to get it to render to a non-depth buffer render target just by overriding the already existing PS, but that is not enough, the policy itself also needs to be tailored to the new use case.

It’s probably not as easy as just randomly rewriting the policy as I would like it, but I don’t think I’m that far off the mark either. It will be an exciting week whatever happens :slight_smile:
I think the biggest challenge will be to understand the inputs… for example, a material that keeps showing up in my debugging is something called “WorldGridMaterial”. I’m guessing this is some global material, but I’m not sure.

— Update —

I’ve made additional strides in understanding the code. After fiddling around a bit with the shaders, I am now under the impression that the WorldGridMaterial is an umbrella material that contains all shaders under the Engine/Shaders subfolder. Cross-referencing with the base pass rendering policies, I also think I’m coming closer to what might be wrong. For example, I was not registering the pixel shader for all materials, and I think the additional render lists that the custom depth policy uses are probably not necessary for my buffer. These discoveries, and some other smaller ones, have allowed me to simplify the code a bit more, which is always a good thing :slight_smile:
Unfortunately, I still don’t have any interesting output… but hopefully I’ll get there soon!


Hello again everyone!

Another weekend has arrived and I thought I’d spend some more time trying to figure this out. However I must admit I feel I am running out of ways to debug this. I will spend the day trying to fiddle around with already working render targets to try and gain more insights into how they work so I might be able to then make my new one come alive.

I thought I would summarize my current problem though in the hopes that someone might be able to provide insight, and in the worst case as documentation :slight_smile: My current progress is publicly visible on my fork listed in a previous post if anyone would like more context.

As you might know, my custom render target is set up by copying and modifying the code for the custom depth buffer. So my render process starts in DeferredShadingRenderer.cpp, goes to SceneRendering.cpp, on into my new CustomTagRendering.cpp, and finally resolves in my drawing policy in TagRendering.cpp.
I’ve currently set up my render target in SceneRenderTargets.cpp (it shows up in the Vis/VisRT console commands) and I’ve also exposed it through the buffer visualization combo box in the editor.
No matter what I do though, my output is always a black texture. I’ve tried flipping through most texture formats, tried turning depth stencils off with

RHICmdList.SetDepthStencilState(TStaticDepthStencilState<false, CF_Always>::GetRHI());

(I’ve tried CF_Never too; I’m not sure whether the stencil test expects a positive or a negative result to skip the fragment)

Clearing the buffer with:

RHICmdList.Clear(true, FLinearColor(1.0F, 0, 0, 1.0F), true, 0.0f, false, 0, FIntRect());

I have tried this on other buffers too (like base color), and it doesn’t work there either, so I can only assume that the clear is overwritten at a later stage by the engine. Anyways, I thought I’d mention it for completeness.

My drawing policy declares a simple pixel shader and vertex shader, and no-op hull and domain shaders:

IMPLEMENT_MATERIAL_SHADER_TYPE(, TTagOnlyVS, TEXT("TagBufferShader"), TEXT("MainVertexShader"), SF_Vertex);
IMPLEMENT_MATERIAL_SHADER_TYPE(, FTagOnlyPS, TEXT("TagBufferShader"), TEXT("MainPixelShader"), SF_Pixel);

My usf files are basically only pass-through, forcing the output to be a certain color:


struct FLightMapDensityVSToPS
{
	FVertexFactoryInterpolantsVSToPS FactoryInterpolants;
	float4 WorldPosition	: TEXCOORD6;
	float4 Position			: SV_POSITION;
};

void MainVertexShader(
	FVertexFactoryInput Input,
	out FLightMapDensityVSToPS Output)
{
	FVertexFactoryIntermediates VFIntermediates = GetVertexFactoryIntermediates(Input);
	float4 WorldPosition = VertexFactoryGetWorldPosition(Input, VFIntermediates);
	float3x3 TangentToLocal = VertexFactoryGetTangentToLocal(Input, VFIntermediates);

	FMaterialVertexParameters VertexParameters = GetMaterialVertexParameters(Input, VFIntermediates, WorldPosition.xyz, TangentToLocal);
	WorldPosition.xyz += GetMaterialWorldPositionOffset(VertexParameters);

	Output.WorldPosition = WorldPosition;
	float4 RasterizedWorldPosition = VertexFactoryGetRasterizedWorldPosition(Input, VFIntermediates, Output.WorldPosition);
	Output.Position = mul(RasterizedWorldPosition, View.TranslatedWorldToClip);
	Output.FactoryInterpolants = VertexFactoryGetInterpolantsVSToPS(Input, VFIntermediates, VertexParameters);

	OutputVertexID( Output );
}

void MainPixelShader(
	FVertexFactoryInterpolantsVSToPS FactoryInterpolants,
	float4 WorldPosition	: TEXCOORD6,
	out float4 OutColor		: SV_Target0)
{
	OutColor = float4(1.0, 0, 0, 1.0);
}

Debugging the code, I can see my drawing policy being run on the correct actor/mesh with the WorldGridMaterial (since my usf files are in the Engine/Shaders/ folder, I assume this is the correct material, as my current hypothesis is that all such shaders are compiled and loaded into this “master material”; debugging for example the custom depth buffer functions reveals usage of the same material).

The draw operations all run without errors, and I can see that I get all the way down to the RHICmdList.DrawMesh hardware function calls.

My current hypotheses are therefore these:

  • My shaders are not doing what I think they are doing
  • My drawing policy contains other errors

Update — the following have been disproven:

  • The render target is actually drawn to, but the reference Vis RT refers to is not the correct one
  • My draw operations are being cleared / drawn over after they are completed
  • My render target is being recreated without my knowledge

The problem is that debugging all of these requires a graphical debugger. I have so far tried using RenderDoc (Crytek), nSight, the VS graphics debugging tools and PIX, all of which either crash (all except RenderDoc) or do nothing (RenderDoc) when I try to capture a log. I’m assuming this is because of how the editor is designed (I’m guessing all these tools look for a primary render target to analyze or similar, and the editor, built to make it easy to iterate on your project, does not expose this in a conventional way).

The recommended way to debug this, according to several posts on the forum, is to output intermediate results into other render targets, but as you can see, I cannot do this either, since I do not yet have the ability to draw to any target.

I had another idea of just trying to draw a line using DrawPrimitive, but I abandoned that idea when I realized there is no simple way of binding shaders using RHICmdList without using the vertexfactory system. And as the point of the test was to make a minimal sample, it kind of defeated the purpose. I might revisit this idea though if I don’t get anywhere when playing with the other targets.

It is very unfortunate that UE4 does not support any graphical debugger, as this would have made all of this a lot easier; even a command that logs all render operations would help me. I have tried using -d3ddebug from Graphics Programming for Unreal Engine | Unreal Engine 5.1 Documentation but this only provides extremely bare-bones init messages and does nothing for getting debug output on what is happening with my render target. It also makes the editor crash when running Vis x in the console.
I have also tried setting r.DumpShaderDebugInfo from Shader Debugging Workflows Unreal Engine | Unreal Engine 5.1 Documentation
but this only dumps shader compilation logs and still gives me no runtime information.

I think this summarizes my last 15~ hours in the engine code fairly well, but I might update the post if I remember more details :slight_smile:

----- UPDATE ----

I managed to get a different system to render to my render target without problems, so this leaves only the hypothesis that my shaders (or some other part of my drawing policy) contain errors. That definitely cuts down on the work, but it also means that the remaining hypothesis is basically a black box, since I’m still unable to get graphical debugging to work. I might return to trying to draw some simple geometry from my policy to start with. The other option is to debug yet more working engine code to try to figure stuff out. It will probably end up being a combination of both :slight_smile:

Best regards,

Blogs should be added just for you!!! :slight_smile: Really impressive documentation of your work; I’m sure many will find this very helpful. Keep up the work!!!

Thank you, but unfortunately I haven’t really accomplished anything yet. Hopefully, I can make progress during the weekend and remedy this though :slight_smile:
If I can give back even a fraction of the amount of information that users such as you (and many others) have provided, it will be worth it.

On the other hand, maybe a forum is the wrong format for something like this. I first considered doing it on a blog, but since it is in the context of the UE4 open source project, I thought it might be more interesting to do it here on the forum as it might spur ad-hoc cooperation or other interactions, whereas that would most likely never happen if I did it on a different domain. It doesn’t really seem to have had the effect I hoped though, so it might be a good idea to do any further odysseys somewhere else :slight_smile:

I guess it all depends on the feedback though. I can certainly see people arguing that threads like this bloat the forum, and if that is the case I will definitely not repeat it.


Hello everyone!

I thought I’d summarize my discoveries the last weekend. Unfortunately I was unable to make much progress, although I have some interesting curiosities to share. I first went through the entire console command list for anything that might have been able to help me using the dumpconsolecommands command, unfortunately nothing pertinent turned up. I then went on to making additional tests with nSight, renderdoc and PIX. I was actually able to get a detailed call count statistics report with nSight, but in the end I wasn’t really able to use the data for anything as most information was geared towards profiling more than debugging. (you can do this with the nSight->Start performance analysis toolbar command, and then launch the process from there).

After that I went back to fiddling with the shaders. I thought I’d try to remove the LocalVertexFactory.usf dependency that I currently had (to my understanding this vertex factory is just one of many, so explicitly referencing methods in that usf file is probably a bad idea). While doing this, I actually managed to generate a meaningful error message when trying to output the render target (with Vis) in the editor. It goes like this:

D3D11 ERROR: ID3D11DeviceContext::DrawIndexed: The resource return type for component 0 declared in the shader code (UINT) is not compatible with the Shader Resource View format bound to slot 1 of the Pixel Shader unit (UNORM).  
This mismatch is invalid if the shader actually uses the view (e.g. it is not skipped due to shader code branching).  EXECUTION ERROR #361: DEVICE_DRAW_RESOURCE_RETURN_TYPE_MISMATCH]

D3D11: **BREAK** enabled for the previous message, which was:  ERROR EXECUTION #361: DEVICE_DRAW_RESOURCE_RETURN_TYPE_MISMATCH ]
First-chance exception at 0x000007FEFCE6940D (KernelBase.dll) in UE4Editor-Win64-Debug.exe: 0x0000087A (parameters: 0x0000000000000001, 0x000000002C737B30, 0x000000002C7399D0).

Decoding this error message tells me that:

The resource return type for component 0 declared in the shader code (UINT)
Something in my shader code, most likely in the pixel shader given the next part, is trying to write a UINT to one of the SRVs bound to the shader.

is not compatible with the Shader Resource View format bound to slot 1 of the Pixel Shader unit (UNORM).
But the resource (texture) in that SRV is actually of UNORM type.

The second part makes a lot of sense. In my SceneRenderTargets.cpp I am indeed creating a render target of type PF_FloatRGBA, which is a UNORM format (floats in the 0–1 range as output). The first part is what I do not understand though; scaling back to my original pixel shader, I get the same error:

void MainPixelShader(FTagVSToPS FromVS, OPTIONAL_IsFrontFace, out float4 OutColor : SV_Target0)
{
	// I commented this test code out just to see if I could get the same error with only a constant color out
	//FMaterialPixelParameters MaterialParameters = GetMaterialPixelParameters(FromVS.FactoryInterpolants, FromVS.PixelPosition);
	//CalcMaterialParameters(MaterialParameters, bIsFrontFace, FromVS.PixelPosition);
	//const bool bEditorWeightedZBuffering = false;

	//Clip if the blend mode requires it.

	//half3 DiffuseColor = GetMaterialDiffuseColor( MaterialParameters );
	//DiffuseColor = saturate(DiffuseColor);

	// This constant color set is the only thing that is supposedly being run... yet I get the same error
	OutColor = float4(1.0F, 1.0F, 1.0F, 1.0F);
}
As you can see, I have even specified the OutColor : SV_Target0 to be a non-UINT type. Googling around, I get the impression that this type of error is commonly seen when one declares a resource of one type and sets another; the most common example I could find is having a UINT surface format but not declaring the Texture2D resource in the shader with template args indicating the type (like so: Texture2D&lt;unsigned int&gt;). However, in my case the feedback is telling me that I am trying to set the value using UINT. My best guess here is that something is happening behind the scenes. Maybe additional code is added to my usf file on compilation? Some boilerplate code or similar? Maybe one of the macros/metadata wraps my code with something I’m not aware of.

The most ridiculous part is that if I change the surface format of my render target to a UINT type instead (for example PF_R16G16B16A16_UINT), the error message gets inverted! In that case it tells me that my shader is trying to write FLOAT when the resource in the SRV is of UINT type (which in this case is actually correct, but since I don’t seem to be able to coerce the shader output to UINT, I’m equally stuck). Since in this case the surface format is actually 64-bit, I might be missing some additional flags that need to be set to make that work, but I’m not sure.

Whenever I get the time to continue looking at this I’m going to try:

  • reading the compiled shader assembly to see if the compilation step adds additional code.
  • examine all the shader-related macros to see if they can provide any additional hints of what is going wrong.

If anyone has any insight to provide, all input is welcome :slight_smile:

The boss has been defeated!

It took me a little over 30 hours to manage this: about 20 hours to poke around and learn the system, and after that maybe 10 more hours of tweaking and bug fixing, which feels alright for something of this magnitude in an engine I am not familiar with. I’ll try to summarize the last phase in enough detail that it could be useful to other people.

First off is the solution to my previous problem. After trying to solve it by looking at the shader without success, I went at the problem from the other end by examining the render target setup. After looking it over a bit, it became clear that the problem lay in that part of the code (I was not paying enough attention when I first cloned the code here, and missed that the parameters were not in the order I expected them to be). The biggest problem to overcome was the difficulty of debugging without any dedicated tools and with semi-good error messages. Something I want to stress here is that it is extremely important to pay attention to Visual Studio’s output window. I was paying more attention to the editor output, but since that log stops updating as soon as you break in the IDE, it is not as reliable. I am also not convinced that all error messages are even pushed to the editor.
Another thing that was extremely confusing was that when I was trying to get additional debug output by using the -d3ddebug flag in my build command line settings, I didn’t really get anything from it. It actually made things worse, since it throws on a larger set of errors, and sometimes it throws for no apparent reason at all (I’m sure there is a reason I just don’t understand, but I still think it is worth mentioning). One notable example: if you have the flag active and try to visualize render targets using the Vis command (Vis N), it will throw for some render targets. I never investigated the pattern, although I got the feeling that it threw more consistently for targets marked as RT in the Vis list, which included my render target. At one point I was sure I had gotten my code to work (after I fixed the RT setup bug and some other small stuff), but it was still throwing when I tried to visualize it. When I started going through all permutations of flags, I finally discovered that it worked fine without the flag! At first I thought I must have made some error (like not respecting a warning or something), but testing other render targets in the list (like some of the other FloatRGBA targets, such as the translucency ones) yielded the same type of IDE break. So I’m left to assume that it is simply a work in progress or a bug.

After I had gotten it to work in Vis and started working on getting it to work from the SceneTexture: semantic, I realized that I had missed editing the common shaders that the editor uses, mainly MaterialTemplate.usf and DeferredShadingCommon.usf. I’ve mentioned this before, but the combobox visualizer in the editor is actually just using normal UE4 materials stored in the Engine/Content/BufferVisualization/ folder. Moving these to your project’s content folder lets you edit them like normal. This of course also means that they are compiled on the fly by UE4 (the editor generates HLSL code from the node graphs every time you change something). Leveraging this knowledge gave me a hint on how to change these other shaders (most notably finding the link between the enum in MaterialExpressionSceneTexture.h and MaterialTemplate.usf). For users curious about this system, I can warmly recommend HLSLMaterialTranslator.h as an excellent starting point.

Something else worth noting about the shader system in general is how it makes its SRV bindings. It seems to only make explicit bindings the first time a shader is compiled; if you have any custom parameters in your shader declaration, you must override your shader’s serialize function and serialize those initialized parameters for them to still be bound on subsequent editor launches. I discovered this when I wired up the PrimitiveComponent system with my custom render pass, and it took me by surprise, so that’s why I thought I’d mention it.
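The pattern described above looks roughly like this (a sketch only, against the UE4 4.x shader API; FTagOnlyPS and TagColorParameter are names from my code, and this will of course only compile inside the engine):

```cpp
// Sketch - depends on engine headers; shown only for the Serialize point.
class FTagOnlyPS : public FMeshMaterialShader
{
public:
	// A custom parameter, bound in the constructor via something like
	// TagColorParameter.Bind(Initializer.ParameterMap, TEXT("TagColor"));
	FShaderParameter TagColorParameter;

	// Without this override the binding only survives the run where the
	// shader was freshly compiled; on later editor launches the shader is
	// loaded from the cache and the parameter comes back unbound.
	virtual bool Serialize(FArchive& Ar) override
	{
		bool bShaderHasOutdatedParameters = FMeshMaterialShader::Serialize(Ar);
		Ar << TagColorParameter;
		return bShaderHasOutdatedParameters;
	}
};
```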

So, what is this good for you might ask? Lots of things, I could answer!
One usage example is what I talked about at the start of the thread: as input into a post process outline shader. With the stock outline shaders in the engine, you can practically only render one outline color for all objects you want to outline. But using this additional render target, I am able to use the “tag input” as the color for the emissive output, like so:


As you can see, I am using three tag colors; red, green and blue. The custom tag render target output looks like this:


Right now I’m using the depth buffer to do the edge detection, as per the UE4 engine sample (you can find one in the large content demo project), but I could instead run a Sobel operator on my tag buffer and refrain from using the custom depth buffer altogether. I’m not sure which would be more efficient yet, as I could imagine the depth buffer edge detection algorithm being a lot faster than a Sobel operator, but meh, I won’t know until I’ve measured it.

Another use could be for image processing where you need to figure out which object intersects with some region of the screen. You could do the math in a compute shader with this buffer as input, and then just do a single raycast at the end to find the actual object for any pixel in the computation output.

I’m sure you guys could find some other uses as well :slight_smile:

Some ideas for improvement could be:
I am currently using a mesh material to accomplish this; the problem is that it is compiled for basically all vertex factory / material combinations, which turns out to be quite a lot. I thought about making it a normal material, like the light shaders for example, but it seems unlikely that this would work, as I still want the possibility of rendering any type of mesh to the buffer, and I think the only real way to do that is with mesh materials. Looking at the light shaders, I get the impression that building your shaders on that base class only lets you apply them to basic shapes such as cones, spheres, etc.

Implementing the Sobel operator and testing the relative performance might be interesting.

I would really like to try using a lower fidelity target, as I’m using FloatRGBA right now, which I think is even 64-bit. I could probably get away with 16 bits, or even less, depending on what you want to use it for. I’m unsure if it is a good idea to lower the target’s resolution (if that is even possible for a main pass target like this), as it might screw up the post process sampling. If I remember correctly, pixel size in the post processing stage is not the same as a “normal” pixel’s size, but as long as the resolution is the same across targets, I assume the conversion is the same regardless of target. My fear is that this would no longer be true if you change the resolution of only my tag buffer.

Another thing that might be cool is to lend some permanence to this: either as a fork in some public place, or as a pull request if it is deemed useful by the community, or maybe even if I can get it to work as a (I think this is unlikely, as I needed to change so much code in so many places). Maybe it might be worth trying to formalize this, i.e. making it easier to add additional custom targets via UI, although this also seems unreasonably hard.

Anyways, I would love any feedback anyone has on this.
Do you feel like this is useful to you guys as well?
What did you think of the format of this thread?
Would anyone be interested in more odysseys like this?
What could I do differently to make this more useful?
Do threads like this fit into this forum, or should I just blog this instead, or do it on the wiki?
My hope was that this could be an organic process with feedback loops and collaboration, which is why I put it on the forum. Does this sound appealing to you guys?

Finally I would like to provide some feedback to Epic:
I thoroughly enjoyed reading your code. I think this is by far the most amazing open source project I have ever been involved in, and I want to thank you for providing me with an opportunity to dive into the code base of a triple A game engine.
I have been programming for roughly 20 years, games and other things. I started out with Basic but soon moved on to C++; about 10 years ago I got really invested in C# and the .NET Framework, and my native coding has been relegated to C++/CLI for all those years. Your project is what motivated me to pick up C++ again, and I can only thank you for this too. I had almost forgotten how much fun native languages can be, and I am extremely impressed by what you have done with the language.
Slate is probably the most impressive instance of your language wizardry, but I am convinced everyone who has touched your base class library is in as much awe as I am. You have managed to make using templated types almost as easy as they are in .NET, far from the messes that STL and boost are, while still retaining most of the power present in those frameworks. Another awesome point that everyone seems to make, but I feel I have to mention as well, is the magic that is the UObject metadata and macro system, coupled with the Unreal Header Tool (and Blueprint and everything that hooks into that). Wow.

The only complaints I have are regarding documentation and code encapsulation. The documentation problem is self-resolving, however, given your existing user base and community, and making UE4's source available was the cherry on that cake that will make sure documentation and samples will always be in abundance.
The code encapsulation problem is a bit bigger, though. This engine change, which I would regard as a medium-difficulty task, took me a bit more time than I expected, and I have to admit I was surprised at how many different systems I had to modify to make it work: in total, I had to create or modify roughly 40 files. Most of these changes are simply registrations in different classes. I would have been a lot happier if these registrations had used a more observer-like approach, where my new system could get references to all the systems it needs to interact with and perform those registrations/notifications itself, instead of me having to go into each of those systems and add information about my new object in all those different places. I'm sure this design decision is rooted in optimization concerns; doing it this way is probably a lot faster.
I'm not claiming that I know much about triple A engine design; I just wanted to note that the change ended up more spread out across the code base than I had hoped it would be.

Anyways, I had super-fun doing this, and you can find all my code in the fork info earlier in the thread.

See you next time!

Best regards,

Really amazing documentation! Thank you. I am sure I will need to use this effect in my game, so this will help me a lot. I am just wondering if it is possible to somehow isolate the changed files and make a standalone plugin from that?
I really would like to see this in an official build too, so hopefully Epic will check your work :slight_smile:

Hello Nonder!

As I mentioned, my ultimate goal is to create a plugin or make a pull request. The problem with making a plugin is that I am unsure whether it is possible to access all the low-level systems I need from the plugin level to make this work. The problem I am facing with the pull request is that this change (because I am using a MeshMaterial) creates another 1000 shaders or so that need to be compiled when first launching the editor. That extra compile time and space is not insignificant, so until I find some way to tackle this, I think Epic would be reluctant to include it in further releases by default unless people find it useful enough :slight_smile:
I will create a pull request in due time nevertheless, but I don’t think it will be accepted until I can find a solution to this problem.

The easiest way to access the changes is to just pull and build my fork. If you choose this route, I would recommend using the 4.4 version, just remember to get the new dependency packages + the one Epic forgot for that version (you can find that in this thread: Compile error with latest 4.4 branch from github - Programming & Scripting - Epic Developer Community Forums )
If you do not want to run off my fork, you can create a patch from the commits between the 4.4 root and the tip of my branch, and then apply that patch to your 4.4 base with:

git diff e29f8810134015a5e1486fde6d25bf61b0a5df5d 98b829a2ddf53827e8e53138f7899e87dd070793 -- > CustomTagBuffer.patch
and then apply that to your repo using (you need to be on some commit after 4.4 for this to work without a lot of merging):
git apply CustomTagBuffer.patch
(Note that a raw git diff is not in mailbox format, so git am won’t accept it; git apply is the matching command.)
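The diff-and-apply workflow can be sketched end to end in a throwaway repo (assuming git is installed; all file names and commit contents here are illustrative, not from the fork):

```shell
#!/bin/sh
# Sketch of the patch workflow above, using a temporary repo.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email demo@example.com
git config user.name demo
echo base > file.txt
git add file.txt
git commit -qm base
base=$(git rev-parse HEAD)
echo change > file.txt
git commit -qam change
tip=$(git rev-parse HEAD)
# Export the changes between the two commits as a patch...
git diff "$base" "$tip" -- > CustomTagBuffer.patch
# ...then go back to the base commit and re-apply the patch.
git checkout -q "$base"
git apply CustomTagBuffer.patch
grep -q change file.txt && echo "patch applied"
```

In the real case, `$base` and `$tip` are the two commit hashes quoted above, and the patch is applied on top of your own 4.4 checkout instead of the base commit.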

I think the easier route is to just clone my fork unless you have changes of your own :slight_smile:
I’ll come back and post an update on what happens with the pull request~

Best regards,

Thank you for the fast reply. I hope they will check it and approve it, or at least think about some other solution. I really like these kinds of projects :slight_smile:

Wow, crazy amount of work you’ve done here.

You should make us a forward rendering path for water and stuff :stuck_out_tongue:


All suggestions for additional odesseys are welcome :slight_smile: I haven’t really decided what to do next yet, but another rendering topic is definitely a possibility. I am currently leaning toward something involving compute shaders, just because I love the amazing stuff you can do with a few simple mathematical formulas and a massively powerful renderer like UE4’s.

We’ll have to wait and see :smiley:


I thought I’d mention that I managed to get RenderDoc to work as a *.usf and renderer debugger!
The creator of the software actually messaged me and helped me sort out why I was having problems, so incredible amounts of kudos to him!
You should all definitely check it out, as the software is now open source too :smiley:


Anyways, I made a small tutorial on the wiki if someone other than me is struggling with this.

Here is the link:

I hope this will be useful to someone :slight_smile:

Best regards,

+1 for RenderDoc, I have been using it for a while now to debug USF and rendering in UE4

Thanks, ! I have done similar work (using off-screen render targets and custom passes) while trying out some algorithms on translucency, and found that several parts of your work process helped mine. If you want to know more, I am trying to blog about it here:


Thanks again,

Your link is broken. Here, I fix! :smiley: http://smartpawn.postach.io/real-or-unreal-refraction


Thanks! I am about to finish the second part http://smartpawn.postach.io/real-or-unreal-refraction-part-2. Any comments or suggestions are welcome.

Very cool!

I’m going to play around with that later :smiley:
I’ve updated the code to work with 4.5 now: https://.com//UnrealEngine/tree/4.5CustomTagBuffer

Best regards,