UE5 - Ideas for rendering realistic snow

Hi,
I would like to render dynamic snow. By dynamic snow I mean snow that gets deformed at run-time by physics objects moving through it.
Before UE5 I would have used a render target: by updating it at run-time we can achieve the desired real-time effects, and then render the result through a tessellation layer in the landscape's material.
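For reference, the core of that old workflow is just drawing a stamp material into a persistent render target whenever a deformer moves. A minimal sketch, assuming a hypothetical M_SnowStamp material that paints the deformer's footprint at UVs derived from its world location (DrawMaterialToRenderTarget itself is a real engine function):

```cpp
// Sketch only: SnowRT is the persistent deformation texture the landscape
// material samples; StampMaterial (M_SnowStamp, a made-up asset) writes the
// deformer's footprint into it.
#include "Kismet/KismetRenderingLibrary.h"
#include "Engine/TextureRenderTarget2D.h"
#include "Materials/MaterialInterface.h"

void DrawSnowStamp(UObject* WorldContext, UTextureRenderTarget2D* SnowRT,
                   UMaterialInterface* StampMaterial)
{
    // No clear between calls, so stamps accumulate into a trail.
    UKismetRenderingLibrary::DrawMaterialToRenderTarget(WorldContext, SnowRT, StampMaterial);
}
```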
But now, with the introduction of the Nanite system, Epic has deprecated tessellation (starting at version 4.26). I guess the standard way Epic wants us to do these sorts of things now is via Heightfield Meshes and Virtual Textures. I've tried to achieve this by having a material draw to a Virtual Texture; by not clearing the Virtual Texture, you get a trail of the shape as it moves. But doing that produces some weird artifacts, which I can only assume come from tiling and mipmaps, or from the dynamic resolution of the texture being rendered on the object.


Here is my original post, where I describe the problem I'm having with Virtual Textures.

I would like to get any suggestions on ways of doing these sorts of effects in Unreal Engine 5.

First off, tessellation has nothing to do with this. You could always just use a mesh with a high tri count and get the same outcome.

Second, use a heightfield mesh.

Third…
Make your own vector-field-based system.
Depending on resolution requirements and a few other things, you can get high-detail trails to persist for several minutes of gameplay. For lower-detail stuff (where the texture is mapped to the full size of a landscape, so around 2 to 4 pixels per meter) you can have the effect be fully persistent.

A good system for this would require some tinkering, particularly in 5.1, since so much of the engine is just broken right now — including render targets, apparently.

Go have a look at render targets and at how to move/use a vector field texture in world space.
The start is very simple; getting something that works nicely is not.
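To make the world-space part concrete, here is a sketch of one way to keep the deformation texture mapped to a window that follows the player. Everything here is an assumption on my part: the ASnowDeformationManager class, the CaptureMPC parameter collection, and the CaptureCenter/CaptureSize parameter names are made up; the snow material would compute its UVs from world position using those same parameters.

```cpp
// Sketch: slide a world-space capture window with the player. The snow
// material maps WorldPos -> UV as (WorldPos.xy - CaptureCenter) / CaptureSize + 0.5.
// CaptureMPC, CaptureSizeUU, and RenderTargetResolution are assumed to be
// members declared on this (hypothetical) class.
#include "Kismet/KismetMaterialLibrary.h"
#include "Materials/MaterialParameterCollection.h"

void ASnowDeformationManager::UpdateCaptureWindow(const FVector& PlayerLocation)
{
    // Snap the window to whole texels so the sampled texture doesn't swim
    // as the window moves (e.g. a 10000 UU window over a 2048 px target).
    const float TexelSize = CaptureSizeUU / RenderTargetResolution;
    const FVector Center = PlayerLocation.GridSnap(TexelSize);

    UKismetMaterialLibrary::SetVectorParameterValue(
        this, CaptureMPC, TEXT("CaptureCenter"), FLinearColor(Center.X, Center.Y, 0.f, 0.f));
    UKismetMaterialLibrary::SetScalarParameterValue(
        this, CaptureMPC, TEXT("CaptureSize"), CaptureSizeUU);
}
```

The missing piece — and the part that makes "getting something that works nicely" hard — is scrolling the already-painted trail content inside the texture whenever the window moves, so old trails don't smear or snap.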

@MostHost_LA Hey, thanks for the tips. I've made some progress using them!

I've started using a Scene Capture Component 2D, rendering an ortho view of the scene from the bottom up, but I still need some ideas for optimizations. Right now I'm rendering the scene with a 16384 (2^14) ortho width and writing it onto a 2048 render target texture. For prototyping it's enough; I can sample it and do some FXAA at runtime to get better results. But I'm looking for a way to make this texture streamable, so I can have a much larger texture and load it one part at a time. Do Runtime Virtual Textures do that by default? If not, how can I achieve something like this? Am I headed in the right direction?
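For anyone following along, the capture setup I'm describing boils down to something like this. A sketch: the function and target names are mine, the chosen format is an assumption, and whether you capture scene depth or something else depends on your setup.

```cpp
// Sketch: orthographic bottom-up capture writing into a 2048px render target.
#include "Components/SceneCaptureComponent2D.h"
#include "Kismet/KismetRenderingLibrary.h"
#include "Engine/TextureRenderTarget2D.h"

void SetupSnowCapture(USceneCaptureComponent2D* Capture, UObject* WorldContext)
{
    // Single-channel float target is enough for a height/depth mask.
    UTextureRenderTarget2D* SnowRT = UKismetRenderingLibrary::CreateRenderTarget2D(
        WorldContext, 2048, 2048, ETextureRenderTargetFormat::RTF_R16f);

    Capture->ProjectionType = ECameraProjectionMode::Orthographic;
    Capture->OrthoWidth = 16384.f;                                // world units covered
    Capture->CaptureSource = ESceneCaptureSource::SCS_SceneDepth; // depth of intruding objects
    Capture->TextureTarget = SnowRT;
    Capture->bCaptureEveryFrame = true;

    // Point straight up from below the snow surface.
    Capture->SetWorldRotation(FRotator(90.f, 0.f, 0.f));
}
```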

On the other hand, for the displacement I should use Heightfield Mesh, which is still in an Experimental state. Do you think it's wise to use it for production? Or should I start using 4.25 for production until UE5 gets more stable (and use tessellation for displacement)? I don't feel that good about UE5 right now; I've encountered some crashes and bugs here and there, and it makes me wonder whether the engine is production-ready after all of these toolchain changes, like Nanite and Lumen, happening in such a short time span.

I haven't used Unreal since UDK, so I'm not sure how fast Epic will address these problems and roll out an update like 5.2 with more of these experimental features released for production use.

Render top down. Don't render objects; make Niagara emitters that fade over time and attach them to whatever needs to leave trails. That way you can also control the shape they leave.
Shoot for a render cost of no more than 2ms on the render target. That's actually even too much.
Strip all the options so it literally only renders the Niagara particle effects, or maybe stuff that you tag with a specific gameplay tag or similar.
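A sketch of what that stripping could look like in C++. The show-only list is real scene capture API; which show flags you disable is situational, and the TrailEmitters array is a stand-in for however you collect the Niagara components:

```cpp
// Sketch: restrict the capture to the trail emitters and turn off everything
// the deformation texture doesn't need.
#include "Components/SceneCaptureComponent2D.h"
#include "NiagaraComponent.h"

void StripSnowCapture(USceneCaptureComponent2D* Capture,
                      const TArray<UNiagaraComponent*>& TrailEmitters)
{
    // Only render components explicitly added to the show-only list.
    Capture->PrimitiveRenderMode = ESceneCapturePrimitiveRenderMode::PRM_UseShowOnlyList;
    for (UNiagaraComponent* Emitter : TrailEmitters)
    {
        Capture->ShowOnlyComponents.Add(Emitter);
    }

    // Strip features that only add cost to an unlit top-down mask.
    Capture->ShowFlags.SetLighting(false);
    Capture->ShowFlags.SetAtmosphere(false);
    Capture->ShowFlags.SetFog(false);
    Capture->ShowFlags.SetPostProcessing(false);
}
```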

That's a performance issue too: too many things in such a large area. Shrink it to match the size of the texture you need at the resolution you want.
I.e.: 2560px^2 is good at about 50 to 100m all around.
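To put rough numbers on that (my arithmetic, not from the post above): a 2560px target covering 100m across gives 2560 / 100 = 25.6 px per meter, about 4cm per texel; stretch the same target over 200m and you drop to 12.8 px/m, about 8cm per texel, which is around the limit for crisp footprints.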

If you want to code in C++, you can actually make render targets output into VTs. That would get you a near-infinite texture size because of the way VTs work…
Of course, actually doing this isn't easy.
Not really sure what the streaming considerations would be.

No idea really. My guess is no. But there is a possibility it can…

Consider this: if you drop emitters as stated above, every client everywhere can capture the same emitters and reconstruct the texture via a local capture, eliminating the networking requirements.
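A sketch of that idea, assuming you replicate through an actor. The class, function, and property names below are all made up; the point is just that only the spawn event crosses the network, and each client's own scene capture sees the locally spawned emitter:

```cpp
// Sketch: replicate the stamp event, not the texture. Each client spawns the
// emitter locally; its local scene capture then rebuilds the same trail texture.
#include "GameFramework/Actor.h"
#include "NiagaraFunctionLibrary.h"
#include "NiagaraSystem.h"
#include "SnowTrailActor.generated.h" // hypothetical module header

UCLASS()
class ASnowTrailActor : public AActor
{
    GENERATED_BODY()
public:
    UPROPERTY(EditDefaultsOnly, Category = "Snow")
    UNiagaraSystem* TrailSystem = nullptr;

    // Called on the server; runs on the server and every client.
    // Unreliable is fine for a purely cosmetic effect.
    UFUNCTION(NetMulticast, Unreliable)
    void MulticastLeaveStamp(FVector_NetQuantize Location);
};

void ASnowTrailActor::MulticastLeaveStamp_Implementation(FVector_NetQuantize Location)
{
    UNiagaraFunctionLibrary::SpawnSystemAtLocation(this, TrailSystem, Location);
}
```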

So, the thing is, the Epic team is worse than an elephant in a china shop. Even in many official releases they blunder basic things in features which AREN'T beta.
For instance, it was impossible to change the physical material on landscapes for about 6 months in 2021.
If you want to develop with Unreal, my advice is to build from source. Apply manual fixes IF you think the Epic team came up with something worthwhile. Otherwise, leave the project on the custom-built engine version and keep working at it without the "downgrades" that updates bring 90% of the time.

.25 is from before the rendering pipeline got messed up.
You should see about 20% more FPS. You'd miss out on a couple of new rendering features; everything else is more-or-less equal.
Meaning all the legacy bugs are ever-present :stuck_out_tongue:

Definitely don't. Make yourself a mesh with proper LODs for the snow.
That way, if you ever want to move the project, it just works.

Definitely not. But the Epic team must be facing egghead pressure to release… from the same folks who bring you the boneheaded forum updates that disable core functionality :stuck_out_tongue:

Let’s be 100% honest here. IF you need something, do it yourself; Epic won’t. Ever.

They aren't even trying anymore. Most of the issues in the bug tracker get marked "won't fix", which is the equivalent of an FU to anyone reporting the problem, and imho to the poor guy doing the triage too.
Many issues have remained constant since UDK. I think one of them would be the lack of shadows in ortho captures, for instance.

It's to the point that Nvidia stepped in and fixed the ray-tracing rendering issues in custom builds, which I don't think the Epic team has even bothered to merge back into the main branch (though I could be wrong on that).

My 2c: it's better to use CryEngine and do everything yourself. At least their team fixes core stuff and communicates properly.
But I do also use Unreal; it has some advantages, like not having to code your own render-target system.

Oh, and I wrote this before reading that the same eggheads just cost Epic 520 million over overtly violating privacy laws…


Thanks, that was one of the best answers I could have gotten in this thread.
I'm going to try out the Niagara solution; I guess it's the most efficient way of getting persistent prints without implementing it in C++. To get persistent render targets without artifacts, I have to draw to them twice per frame, which is not great if you want to do it for large textures.
Otherwise, I would use VTs, and I guess with C++ I could skip the second re-render altogether.
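For context, the twice-per-frame dance I mean is the classic render target ping-pong. A sketch, where FrontRT/BackRT and the fade/stamp material instances (with a "PreviousFrame" texture parameter) are my own placeholder names, assumed declared as members:

```cpp
// Sketch: two render targets alternate roles each frame so we can read last
// frame's trail while writing this frame's — a pass can't sample the same
// target it renders into.
#include "Kismet/KismetRenderingLibrary.h"
#include "Materials/MaterialInstanceDynamic.h"
#include "Engine/TextureRenderTarget2D.h"

void ASnowDeformationManager::TickPersistence()
{
    // Pass 1: copy last frame's trail into the back target, fading it slightly.
    FadeMID->SetTextureParameterValue(TEXT("PreviousFrame"), FrontRT);
    UKismetRenderingLibrary::DrawMaterialToRenderTarget(this, BackRT, FadeMID);

    // Pass 2: stamp this frame's new deformation on top.
    UKismetRenderingLibrary::DrawMaterialToRenderTarget(this, BackRT, StampMID);

    // Swap: the snow material now samples what we just wrote.
    Swap(FrontRT, BackRT);
}
```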

This can actually be a good thing for us. We're going for more of a stylized look, so missing Nanite doesn't feel like such a loss, but having a good global illumination system is always a win. From there I can cherry-pick the changes that we would really kill for :grin:

CryEngine is one of the best choices out there, but Unreal can be more artist-friendly for small teams, and it also comes with the advantage of full access to the Megascans library — which is such a cliché thing to say right now, but it is true. With good shading you can even get some good stylized output from them.

MetaHumans too, if you want to only have problems with your projects.
Both solutions are aimed at low-end cinematics; neither is helpful for most game projects.
Additionally, you should still be able to pay for the Megascans subscription to use them anywhere else.

Unity comes with the ability to license Havok at a fraction of the cost it would take to integrate it into anything else.
It now includes SpeedTree too, which still needs an update for UE5 given the changes to object pivots and Nanite.

… Sure, with CryEngine you get a great rendering pipeline with an amazing voxel-based ray-tracing solution that looks great and runs off a 1080 without much of an issue. BUT, you don't get free tools…

To be honest, I'm more upset about ZBrush being bought out, and Substance being bought by a company that regards user experience and feedback even worse than Epic ever has (and that's saying a lot!).
The tools we use are actually more vital than the engine itself… if we want to go off-topic on it :stuck_out_tongue:

This:
You may have to implement GPULightmass, or work with LPV (Light Propagation Volumes) and mesh distance fields.
Epic has never been able to provide "good" global illumination.
Most effects are screen-space based.
Shadows are pretty much just distance-field based or cascaded shadow maps.

.25 with ray-traced shadows MAY be a good alternative. You'd have to bench on target hardware to know if it's viable.