[Updated for 4.20.1] Let it rain! (But not indoors!)

EDIT: This post is outdated. An updated 4.20 version together with download links can be found below.

Heya folks,

For a while now I’ve been having some issues with my rain setup - I needed a simple way to mask it out in my interiors, under bridges, etc. SceneDepth particle collision was out of the question, as it depends on the collision object being rendered (i.e. if I am not looking at the ceiling, the rain falls straight through). Normal collision is out of the question for performance reasons. So after trying a few different solutions I finally managed to put together an extremely effective and efficient method that works on all types of “ceilings”, no matter how complex (like tree branches), with virtually no overhead!

Special thanks to Daekesh and the rest of the #unrealengine people who helped me out with this.

Since this is not a “basic rain tutorial” I am going to assume you know the basics of Cascade and are able to toss together a simple preview rain particle system on your own.

So, without further ado, slap together a blueprint that looks like this:

The notable things here are the following:

The particle system has a boundary of 1000x1000 (Starting positions of particles go from -500 to 500 in X and Y)
The PlayerPawnOffset is (0, 0, 3000). We use this to position the rain above the player each Tick.

Now, the main part of this system is a SceneCapture2D actor that hovers high above the player and the rain. It faces downward and captures the scene all the time into a 256x256 RenderTarget. The SceneCapture2D actor has to be high up to negate the fact that it uses a perspective projection (SceneCapture2D can’t do orthographic yet). It is important to note that this is using a SceneCapture2D ACTOR, not a component. The reason for this is again a very silly shortcoming of the SceneCapture2D component - the actor has the ability to filter out objects, lights, shadows etc. while the component doesn’t.

Thus, we put our actor in our scene, reference it in our Blueprint and keep the actor at the target elevation at all times, regardless of the position of the rain blueprint. It is also facing straight down. Like this:

We do this in the construction script and every Tick of our Blueprint.

Now as for our pretty little SceneCapture2D actor, those filters I mentioned look like this:

Note: AntiAliasing should be turned off, my bad. It is also advisable to enter a sensible max view distance. You don’t want it capturing your skybox.

The way this is going to work is the following - our scene capture actor “records” our scene below and uses a postprocess material to get a depth display out of it. Our rain material then takes that render target, samples it at the particle’s position and checks if its own height is below the height of the depth mask. If it is, the particle is under cover and it can set its opacity to 0. The tricky part is to get the scene capture actor to record just the area that the rain is covering. Since you know the extents of your rain particle system, you can just adjust the FOV of the scene capture actor until you get it right. Since we’re recording from so far up (30k units in my case), the perspective error is negligible.
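Rather than tweaking the FOV by eye, it can be estimated with basic trigonometry. A quick sketch, assuming the figures from this post (a 1000x1000 rain boundary captured from 30k units up) - the function name is just illustrative:

```python
import math

def capture_fov_degrees(rain_extent: float, capture_height: float) -> float:
    """FOV (in degrees) that makes a downward-facing perspective capture
    cover a square `rain_extent` units wide from `capture_height` units up."""
    half_extent = rain_extent / 2.0
    return math.degrees(2.0 * math.atan(half_extent / capture_height))

# Figures from the post: 1000x1000 boundary, capture 30k units up.
print(capture_fov_degrees(1000.0, 30000.0))  # a bit under 2 degrees
```

This also shows why the FOV gets “ridiculously small” when you raise the capture actor: the higher it sits, the narrower the cone needed to cover the same ground area.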

In order for post process to work on scene capture actors, the Capture Source needs to be set to LDR. In the list of blendables near the bottom we add the following post process material:

The material itself does exactly what I described above - takes the position of the captured pixel and colors it anywhere between black and white based on its Z position. The important part is that its Domain is set to PostProcess and Blendable Location is set to Before Translucency.

The DynamicRainRange material function reads the player position and maps the aforementioned pixel Z position into the range [PlayerZ-3000, PlayerZ+3000]. This allows the whole system to use a smaller black-to-white range, thus avoiding a loss of precision.
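The mapping itself is straightforward. A minimal sketch of what DynamicRainRange does, assuming a simple linear remap with clamping (the exact material graph may differ):

```python
def depth_to_mask(pixel_z: float, player_z: float, half_range: float = 3000.0) -> float:
    """Map a world-space Z into a 0-1 grayscale value over the window
    [player_z - half_range, player_z + half_range], clamped outside it."""
    t = (pixel_z - (player_z - half_range)) / (2.0 * half_range)
    return min(max(t, 0.0), 1.0)

print(depth_to_mask(100.0, 100.0))   # pixel at player height -> 0.5 (mid grey)
print(depth_to_mask(3100.0, 100.0))  # top of the window -> 1.0 (white)
```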

The InvertSRGB is a weird beast. Unreal Engine’s Post Process Tone Mapper does some color fiddling which needs to be reversed, otherwise all the values are brightened, leading to incorrect results.
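For reference, the standard inverse sRGB transfer curve looks like this - a sketch only, since the engine’s tonemapper is not exactly sRGB, so InvertSRGB may use slightly different constants:

```python
def invert_srgb(c: float) -> float:
    """Approximate inverse of the sRGB transfer curve: converts an
    sRGB-encoded 0-1 value back to linear. A stand-in for the post's
    InvertSRGB function; the engine's exact tonemapper math may differ."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

print(invert_srgb(0.5))  # a mid-grey encoded value maps back to ~0.21 linear
```

The key point either way: values read back from an LDR capture have been brightened by the encoding, and you must undo that before comparing them against linear depth.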

Once we have all that set up, our scene capture actor will happily spit out a nice greyscale image of our environment. Now we can use that to mask out the rain! All that is needed for that is this simple material on the particle emitter:

Now, there are a few things going on here. Most notably, we’re sampling our render target (the Texture Sample in the middle). To do that we need to know where the current pixel stands in relation to the rain emitter’s origin (and thus, in relation to the scene capture origin). We can get the origin with the ObjectPosition node. Since we only do this to sample the texture we don’t need the Z coordinate, so we mask out just the RG channels and subtract them. Now we effectively have a vector from the middle of the emitter (the origin) to the current pixel.

This is wrong though, as textures are sampled from the top-left corner. To offset this we just add (0.5, 0.5) to our local pixel position. As you can see in the material, I had to rotate the coordinate to get the proper pixel from the texture sample. You can rotate the scene capture actor instead; the result will be the same in any case.
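The origin-relative XY to UV mapping described above can be sketched like this, assuming the 1000-unit boundary from earlier (the rotation step is omitted since it depends on your actor orientation):

```python
def particle_uv(pixel_xy: tuple, origin_xy: tuple, extent: float = 1000.0) -> tuple:
    """Map the current pixel's world XY, relative to the emitter origin,
    into 0-1 UVs for sampling the capture render target. `extent` is the
    width of the rain boundary (1000 units in this post)."""
    u = (pixel_xy[0] - origin_xy[0]) / extent + 0.5
    v = (pixel_xy[1] - origin_xy[1]) / extent + 0.5
    return (u, v)

print(particle_uv((0.0, 0.0), (0.0, 0.0)))      # emitter centre -> (0.5, 0.5)
print(particle_uv((500.0, -500.0), (0.0, 0.0)))  # corner -> (1.0, 0.0)
```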

Now we have our scene depth value at the position of the current particle pixel! All we need to do now is check if the particle is below the captured surface height. If it is, we set the opacity to 0, since we’re under cover. Otherwise, we set it to 0.5 (or any other value, really).
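The final comparison is just a branch on two heights. A minimal sketch (names are illustrative, not actual material node names):

```python
def rain_opacity(particle_z: float, surface_z: float, visible_opacity: float = 0.5) -> float:
    """Hide the rain pixel when it sits below the captured surface height,
    otherwise show it at the chosen opacity."""
    return 0.0 if particle_z < surface_z else visible_opacity

print(rain_opacity(100.0, 200.0))  # under a roof at Z=200 -> invisible
print(rain_opacity(300.0, 200.0))  # above the roof -> visible
```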

Plug this in and voila! Your rain will follow your player, but it won’t drop into houses or below bridges. The only limitation is that your ceiling must be lower than the Scene Capture actor. If you have towering structures you might need to position that actor further up (you will also most likely need to adjust its FOV to some ridiculously small number).

Hope that this was at least somewhat clear! If you have any questions let 'em rip below!

Thanks to ZorbaBeta for the original shadow-volume-esque idea.

Thank you :)

I have since adapted the system to work with angled rain as well. If anyone is interested I can amend my post. :slight_smile:

Note that the platform below is actually spinning so it works with dynamic objects just as well!

Looks amazing. Great work!

Thank you so much for sharing! :slight_smile:

Would definitely be interested in seeing your solution for angled rain, yeah. A lot of people around here could find it useful.

I’ll type it out as soon as I am in the office on Monday. :smiley:

Awesome share! Cheers :slight_smile:

Definitely interested in the angled rain solution - awesome work!

hi,

Thanks for your tutorial, it works well.

What if you don’t have a roof but just a one-sided ceiling?

@ - Hey, what happened to the write-up on angled rain?

Is this still the best solution now in 2018, or is there a better way to handle this now?

Hey - I never found time to do the angled write-up, and I have since lost that code in a hard drive failure. There are several solutions these days that are better than this. It is still viable in theory, but as far as practical implementation goes, many techniques used here now have more streamlined approaches. For example, scene capture can now use an orthographic projection, as well as capture depth directly, so no conversion or effects are needed for those steps. I will need this soon in a project, so eventually I will write up a new method for this; I can’t really promise any timeframe for it.

Looking forward to your new write up on this. Will be needing a similar effect on a project in the future. =)

EDIT: I accidentally left the RT_RainDepth texture at 1024x1024. Please reduce that back down before using this. You will have to play around to find the size that suits you. I had very few artifacts with sizes as low as 64x64, but 128x128 seems to work almost flawlessly. Obviously a smaller size means less precision and granularity, but faster performance.

Well, I went ahead and spent some time re-creating this code in 4.20.1. After a bit I’ve managed to get it working. I’m presenting the Blueprint “as is” for now, and if anyone has any questions as to how it works, let me know and I’ll explain to the best of my abilities. To get this to work, just drop the RainOcclusion folder into your project’s Content folder and you should be set. Put a BP_AttachedRain actor into your level and it should work automatically. Right now the rain covers a range of 20 meters around the player, and expanding it requires changing the following things:

  1. The Orthographic Width on the SceneCaptureComponent in BP_AttachedRain
  2. The TextureSize parameter in MF_RainDepthOpacity
  3. The Initial Location and Boundary of the particle system

You can substitute the rain particle with any particle you want. The only important part of the whole setup (both particle and particle material) is the call to MF_RainDepthOpacity in the material. As long as you add that to your own rain you can put that into the BP instead of the placeholder one I’ve set up.

To give a brief overview on how it works - the SceneCaptureComponent captures the scene depth into a render target. The render target is set to R32f meaning it only has a red channel and it’s 32 bits long. This means that the depth stores the actual centimeter distance from the SceneCaptureComponent instead of a black-to-white mask. This makes the math in MF_RainDepthOpacity quite a bit simpler.
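Because the R32f target stores raw centimeter distances from the capture, the occlusion test reduces to a single comparison. A sketch (depth is measured downward from the SceneCaptureComponent):

```python
def is_occluded(particle_depth_cm: float, captured_depth_cm: float) -> bool:
    """With an R32f target, the sample is the distance in centimeters from
    the SceneCaptureComponent to the first surface below it. A particle
    farther from the capture than that surface is underneath it."""
    return particle_depth_cm > captured_depth_cm

print(is_occluded(3500.0, 3000.0))  # below a roof 3000 cm down -> occluded
print(is_occluded(2500.0, 3000.0))  # still above the roof -> visible
```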

The first part is to calculate the “depth” of the current particle. In order to do this we can’t just do PixelPosition - ParticlePosition because that’s a diagonal line that is only accurate for the particles along the central axis. This means that we have to project the location we’re testing onto the particle’s central axis. That’s what this bit is about:
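The projection onto the central axis is a standard dot product. A minimal sketch, assuming the axis is stored as a unit vector (plain tuples stand in for the material’s vector nodes):

```python
def project_onto_axis(p: tuple, origin: tuple, axis: tuple) -> float:
    """Project point `p` onto the line through `origin` along the unit
    vector `axis`; returns the signed distance along the axis, i.e. the
    particle's 'depth' in the capture direction."""
    d = [pi - oi for pi, oi in zip(p, origin)]
    return sum(di * ai for di, ai in zip(d, axis))

# Straight-down axis (0, 0, -1): sideways offset contributes no depth,
# only the drop along the axis does.
print(project_onto_axis((3.0, 4.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, -1.0)))
print(project_onto_axis((0.0, 0.0, -5.0), (0.0, 0.0, 0.0), (0.0, 0.0, -1.0)))
```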


It now becomes trivial to check if this depth (i.e. the projected vector length) is smaller than the captured scene depth. Since the depth is already stored in centimeters, we don’t have to do any conversions at all. The tricky part is getting which pixel of the depth texture we’re testing against, i.e. calculating the UV. The math I have for this is actually finicky and seems to produce wrong results at certain angles, so if anyone has any idea how to improve it I’d love to hear it.

The gist of it is this:

  1. Take the previously projected vector and subtract it from the pixel position (i.e. moving it “up” to the origin plane of the rain).
  2. Transform it into local space, filter out only X and Y.
  3. Normalize this to be between -0.5 and 0.5 in both axes.
  4. Add 0.5 to offset it to 0-1

It works at what seems like +/- 30 degrees of Pitch and Roll, which should be more than enough for all rain scenarios, but again, if someone has a better way to do this math bit, please share!
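The four steps above can be sketched like this. It is a simplification: `to_local` stands in for the actual world-to-local transform node, and `ortho_width` is the SceneCaptureComponent’s Orthographic Width - both are assumptions about the graph, not exact node names:

```python
def splash_uv(pixel_pos: tuple, projected_vec: tuple, to_local, ortho_width: float) -> tuple:
    """Steps 1-4 from the post, with plain tuples in place of material vectors."""
    # 1. Subtract the projected vector, moving the pixel back "up" to the
    #    rain's origin plane.
    on_plane = [p - v for p, v in zip(pixel_pos, projected_vec)]
    # 2. Transform into local space and keep only X and Y.
    lx, ly, _ = to_local(on_plane)
    # 3. Normalize to [-0.5, 0.5], then 4. shift into [0, 1].
    return (lx / ortho_width + 0.5, ly / ortho_width + 0.5)

# With an identity transform and a 4000-unit ortho width:
print(splash_uv((1000.0, -1000.0, 500.0), (0.0, 0.0, 500.0), lambda p: p, 4000.0))
```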

Good stuff. Thanks for sharing! We will be testing this in our project soon.

Oh wow, this is great. I just tried it out and works really well.

I moved the components over to my character blueprint so I don’t need a separate actor following the character around, and it works fine. There are a number of rendering features that can be turned off on the scene capture as well to improve performance further - pretty much everything except for BSP, meshes, landscape and foliage, really.

I also set up a rain particle effect in Niagara instead of Cascade, since that is the way of the future, and it all works fine with that too. This looks better and runs a lot faster than using CPU particles with collision, which is what I was doing before.

The only downside of the GPU particles is you lose the trace collision that the CPU particles had, which I was using to then spawn little splash particles where the rain hits the ground. However, I’ve come up with an idea that might work, just not 100% sure how to do it. Perhaps you can offer some advice.

I’m thinking of creating a splash particle effect that just spawns splash mesh particles at random (in the x,y plane) around the player up to a certain radius and let the material control the z. So in the splash mesh particle material, I could perhaps sample the same scene render target and use it to set the z position to render the splash at in world space. That way the splash will always render where the rain stops (i.e. on the ground or roof, etc…) if that makes sense. Do you think this would work? Any pointers on how I could specify the z position of a pixel for a splash mesh particle from the scene render target?

There’s actually a way to simplify the splash even further. Make the splash a separate emitter on the rain particle that spawns identically to the rain but modify its material calculation to not just do “if below depth, set opacity to 0” but rather have it be 0 below depth as well as above “depth - SplashThickness”. That way you only get it a few centimetres above the surfaces. Of course this has the disadvantage of rendering effectively invisible particles all around you at all times.

Edit: Actually this can be optimized. Have an emitter that spawns splashes and move them down to whatever the depth is using world position offset. I will try hacking this together.

Thanks for the share…