Fog of War with CustomDepth & PostProcessing

Hi guys,

I managed to create a fog of war effect by adding a circular mesh with custom depth above my Pawns and a post-process material that tints everything that doesn't have a custom depth in black. But now I have a hard transition from the visible area to the area where the fog of war is drawn. What I would really like is a soft transition/blending. Is this even possible with my approach? If yes, how would I do it?

Thanks

If I understand right, you have a circle surrounding the player that renders in custom depth, and you use that to mask out the part of the screen where you can see the level as usual, with the rest being black. Right?

Blurring in a material is doable, but it usually requires quite a few offsets and operations and is unlikely to look perfect. If you really want to go that way, look up some blurring material setups from UDK (the same as UE4 in this regard). Personally, though, I would consider changing your approach.

An alternative would be to use a SphereMask. That node has a built-in softness factor and also creates a circle. Map the position of that sphere mask to the position of the player and you should get it to work. Your main challenge would be how to map it to the post-process material.
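For reference, the falloff a sphere mask gives you is basically a clamped distance ramp. Here's a minimal C++ sketch of that idea; the exact formula of UE4's SphereMask node may differ, and the parameter names are illustrative:

```cpp
#include <algorithm>
#include <cmath>
#include <cassert>

// Illustrative sphere-mask falloff: 1 inside the radius, fading to 0
// over a softness band of the given width (world units).
float SphereMask(float Distance, float Radius, float Softness)
{
    // How far past the radius we are, as a fraction of the fade band.
    float T = (Distance - Radius) / std::max(Softness, 1e-6f);
    return 1.0f - std::clamp(T, 0.0f, 1.0f);
}
```

With a radius of 500 and a softness of 100, a point at distance 550 gets a mask value of 0.5 — exactly the soft edge the hard custom-depth mask lacks.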

Your options are:

- A giant plane covering the level that only becomes translucent where the sphere mask is located (this would not even use a post-process material).
- A material function that maps the sphere mask onto every material in your game. In this case you would need a Blueprint script that continuously passes the player's position to the sphere mask through a material parameter collection. This may be the best way to go, and it also would not use the post-process material.
- If the player is always in the center of the screen, it is easy too: just sphere-mask the center of the screen and you are set. This method would still use the post-process blendable material setup. Alternatively, you could calculate where the player is on screen relative to the center of the camera and move the sphere mask accordingly; that would let the player move away from the center of the screen.

Unfortunately I have no tutorial for this nor have I ever seen anyone do this, but it is fully doable. I’ve done something similar in the past.

Hi,

Thanks a lot for your feedback.
Yes, that's correct: right now I'm drawing a circle around the player in custom depth and using that as a mask. Ultimately I want this to work based on line of sight (no sphere anymore) and for multiple Pawns (like in RTS games), so I don't think a SphereMask is the way to go for me here.

I've just learned about render targets, so I think my best option might be to create a render texture in C++, draw my fog-of-war mask onto it with Canvas, and use that mask in a post-process effect (using SmoothCurve to get a soft transition).
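The soft transition itself is just a smoothstep remap of the mask value. A minimal sketch of the math, assuming the standard Hermite smoothstep (UE4's SmoothCurve node may differ in detail):

```cpp
#include <algorithm>
#include <cassert>

// Hermite smoothstep: remaps a hard 0..1 mask edge into an eased
// gradient between Edge0 and Edge1 (zero slope at both ends).
float SmoothStep(float Edge0, float Edge1, float X)
{
    float T = std::clamp((X - Edge0) / (Edge1 - Edge0), 0.0f, 1.0f);
    return T * T * (3.0f - 2.0f * T);
}
```

Feeding the sampled mask through a curve like this is what turns the hard visible/fogged boundary into a gradual blend.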

Would that work, or are there any problems I don't see right now with this approach?

This is something you can do if your game plays out on a traditional 'two-dimensional' world (i.e. no overlapping terrain in the Z axis). You can use a render target where each pixel represents an area (likely a square) of predetermined world-space size, giving you a grid you can control in code. (Pro tip: this grid can also be used by your AI, and the texture can store data for your AI as well, since you only need one or two channels for the masking.) Each pixel in this texture can be transformed in the shader to represent a circle, which helps you get a more rounded fall-off.
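The world-to-pixel mapping behind that grid is a simple quantisation. A minimal C++ sketch, with illustrative names (the origin and cell-size parameters are assumptions, not engine API):

```cpp
#include <cmath>
#include <cassert>

// Each render-target pixel covers a CellSize x CellSize square of
// world units, so a world position maps to a pixel (cell) index.
struct FGridCell { int X; int Y; };

FGridCell WorldToCell(float WorldX, float WorldY,
                      float OriginX, float OriginY, float CellSize)
{
    return { static_cast<int>(std::floor((WorldX - OriginX) / CellSize)),
             static_cast<int>(std::floor((WorldY - OriginY) / CellSize)) };
}
```

Using floor (rather than truncation) keeps the mapping consistent for positions below the grid origin.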

The bad part about this whole thing is how to represent the fog of war in the 3D world. At work (UE3) I had to implement this and tried every approach I could think of (rendering a plane, using a giant decal, using a light function, adding it to every material in the game…), and none of the solutions were acceptable in terms of looks and performance.

In the end I was able to implement it via a post-process. With it I was able to draw a (dynamically updated) texture into 3D space as if it were a top-down projection.
The tricky part was getting the texture to project in world space, but it was achievable with a bit of math involving the depth and InvViewProjectionMatrix. However, there's no Custom HLSL node in UE4 yet :frowning:

Hi, I'm trying to achieve something similar. I was wondering if you could share how you got this to work. Getting custom depth to render to a TextureRenderTarget2D does not seem to work for me if I use it as a blendable.
Also, could you show your method for making the texture project in world space?

You might need to create the render texture in C++ and draw on it using the Canvas functions, as explained here: https://answers.unrealengine.com/questions/1558/ue4-scripted-texture.html?sort=newest

It's been there since day one, and you also have the entire USF shader source code.

Use planar projection mapping. It’s a cheap and well documented technique.

oh, I wasn’t aware!

About this planar projection mapping: I couldn't find any info about it. I guess you mean decals?
I heard decals are much cheaper in UE4, but mapping a decal on top of a huge level for a top-down game in UE3 was expensive (because it essentially duplicated all the geometry it was mapped onto). I wouldn't know if a decal is faster than this post-process method in UE4.

Here's the code that goes in the Custom node that I used in UE3:


// Reconstruct the world-space position of this pixel
float NearPlane = -MinZ_MaxZRatio[0];
float usedZPrecision = 1.0 - 0.001;
float4 ProjectedPosition = float4(ScreenPosition.xy * Depth / ScreenPosition.w, usedZPrecision * Depth + (-NearPlane * usedZPrecision), Depth);
float4 ViewRelativeWorldPosition = MulMatrix(InvViewProjectionMatrix, ProjectedPosition);
float3 WorldPosition = ViewRelativeWorldPosition.xyz + CameraWorldPos.xyz;
return WorldPosition;

ScreenPosition and Depth are inputs: into ScreenPosition you plug a ScreenPos node (ScreenAlign turned off), and into Depth you plug a SceneDepth node.
If you then use a ComponentMask with R and G, you can plug the result into a texture's UV input, and that projects it into the world from the top down.

if you get it to work in UE4, please share :slight_smile:

Rather than use SceneDepth, in UE4 you could possibly make direct use of the depth buffer.

No, it's where you project a texture onto surfaces in world space rather than UV space. Triplanar mapping is pretty common in terrain shaders, as it's a cheap way to avoid the more obvious texture-stretching issues. UE4 actually has a node that does planar mapping through a given axis for you, though I forget its name D:
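As a rough sketch of the idea (plain C++, not a material node): a top-down planar projection just reuses the world-space X/Y of the shaded point as texture UVs, scaled by a tile size. Triplanar mapping does this through all three axes and blends by the surface normal.

```cpp
#include <cassert>

// Top-down planar projection: world-space X/Y become UVs, so the
// texture tiles every TileSize world units regardless of the mesh's
// own UV layout.
struct FUV { float U; float V; };

FUV PlanarMapTopDown(float WorldX, float WorldY, float TileSize)
{
    return { WorldX / TileSize, WorldY / TileSize };
}
```

This is why it pairs well with a fog-of-war texture: the projection depends only on world position, so every surface samples the same overhead map.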

Decals are deferred, so they’re dirt cheap so long as you keep them simple.

It's also easier if we have direct access to the world-position buffer in a post-process. That was far from possible in UE3, which is why such a reconstruction was needed. I haven't used UE4 nearly as much, but it would be nice if this time around we had the SceneTexture equivalent of all the deferred rendering buffers exposed in the material editor.

This sounds like UDK's WorldAlignedTexture material function. But that is per-material, which means every material in the game would have to include it if you want such an effect for the whole scene. Sure, you could use the new material parameter collection feature, but doing it via a post-process is more versatile in this case.

I don't understand the implications of the word 'deferred' in the context of decals; I just know they are much faster than UDK's :slight_smile:
Still, fog of war is about projecting a texture over your whole scene. It's not something I plan to implement in UE4 any time soon, if at all, but frankly I cannot predict which solution would be faster.

You'd need it in all your materials, yes, but realistically in an RTS-style game the vast majority of the game world is going to be a landscape with a smaller number of environment assets (which, if you're using a planar-mapping-based solution, can actually share the same material anyway). You'd likely handle gameplay objects separately; I'd look at desaturating them at their last known positions, or otherwise simply not rendering them.

You do - add a SceneTexture node and you can choose which texture you want.

Working on a strategy game, I can tell you that the number of mesh assets used isn't just a few. Constantly watching everything to make sure it works with the fog of war… as the person who implemented the fog-of-war visuals, I'm glad I don't have to be the one keeping watch over all of it :slight_smile:

Hiding objects that are covered by the fog of war is needed for optimization, but that doesn't change the fact that all assets need to look good both in and out of the fog of war, and when the fog's gradient falls directly across them.
Another part of the problem is that you'd need to find all objects via code and pass them the fog-of-war texture as a texture parameter (or do it at PostBeginPlay for dynamic objects); again, in UE4 you can do this via a material parameter collection. But if the game has complexities like multiple areas with different fogs of war, things can become much harder to maintain.

oh cool!
This is what we call the Troll node in UE3 at work, since SceneTexture in UE3 has a dropdown with the single option Scene_Lit and nothing else :slight_smile:

Having a material function that does the masking, and in code an interface that pushes units into your list for use in a render target and for determining enemy visibility, is no problem at all. You can use an actor iterator over the actors that use it (though while much quicker than in UE3, that's still not recommended), or you can have a manager class maintain it manually, like you mentioned.

The render target approach is a strong one: you don't have to 'inject' anything. You just draw the texture by iterating through the player-owned units in the interface, determining their grid (render-target pixel) positions, and ensuring the pixels in the channel you're using for that set of information are the right colour. Any enemy units that are visible 'on the grid' would need to be rendered, and the material function combined with the render target does everything for you.

If you've been setting up materials sensibly, with the one function and one base material for each object type to instance (i.e. how many units actually need unique materials?), then the material side of things is no work at all.
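The drawing step described above can be sketched in plain C++ (this models the render target as a one-channel byte buffer; FUnitPos, CellSize and SightRadius are illustrative names, not engine API):

```cpp
#include <cstdint>
#include <vector>
#include <cassert>

struct FUnitPos { float X; float Y; };

// Clear a one-channel grid, then stamp a visibility disc around each
// player-owned unit's cell. 0 = fogged, 255 = visible.
std::vector<uint8_t> BuildFogMask(const std::vector<FUnitPos>& Units,
                                  int Width, int Height,
                                  float CellSize, float SightRadius)
{
    std::vector<uint8_t> Mask(Width * Height, 0);
    for (const FUnitPos& U : Units)
    {
        const int CX = static_cast<int>(U.X / CellSize);
        const int CY = static_cast<int>(U.Y / CellSize);
        const int R  = static_cast<int>(SightRadius / CellSize);
        for (int Y = CY - R; Y <= CY + R; ++Y)
            for (int X = CX - R; X <= CX + R; ++X)
                if (X >= 0 && X < Width && Y >= 0 && Y < Height &&
                    (X - CX) * (X - CX) + (Y - CY) * (Y - CY) <= R * R)
                    Mask[Y * Width + X] = 255;
    }
    return Mask;
}
```

In the engine you'd upload a buffer like this to the render target (or draw discs via Canvas) each time unit positions change, and the material function handles the rest.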