MeshBlend - A plugin for blending meshes, landscapes, etc in Unreal

Hi!
I’m working on a plugin for Unreal that enables you to blend meshes. It’s getting pretty good now, but there’s still some time until a marketplace release.

The post process material shader creates the blend effect in screen space, meaning there is no “resolution” like with RVT. I also have a quite complex actor that “activates” each mesh, so I get the info I need about which pixel belongs to which mesh and what size it should use for the blend. The plugin currently has 4 sizes with adjustable values.
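
To give a rough idea of what “4 sizes with adjustable values” could look like, here is a small illustrative sketch in plain C++. The names, values and structure are just for illustration, not the plugin’s actual API:

```cpp
// Hypothetical sketch of per-category blend settings; the plugin's real
// names and defaults are not published, so treat this as illustration only.
#include <array>

struct FBlendSizeSettings
{
    float Size    = 32.0f; // widest the blend can get in screen space
    float MinSize = 1.0f;  // how far the blend keeps its width with distance
};

// Four adjustable categories, e.g. small props up to landscape-scale meshes.
enum class EBlendCategory { Small, Medium, Large, Landscape };

struct FMeshBlendConfig
{
    std::array<FBlendSizeSettings, 4> Categories = {{
        {  8.0f, 1.0f },  // Small:     pebbles, debris
        { 16.0f, 1.5f },  // Medium:    rocks, props
        { 32.0f, 2.0f },  // Large:     boulders, cliffs
        { 64.0f, 2.0f },  // Landscape: terrain-to-terrain blends
    }};

    const FBlendSizeSettings& Get(EBlendCategory Category) const
    {
        return Categories[static_cast<int>(Category)];
    }
};
```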

Hope you like it! Feedback is appreciated! :blush:

This is of interest to me. Can we see a video?

Here is a short video of the effect.

Test scene with two overlapping landscapes (rock and sand), plus some rocks, etc.

Some composition tests underwater.

Test rendering with sequencer.
The plugin can be used for cinematics :clapper: :star_struck:

This looks fantastic, I’m really interested in this!

It’s hard to tell with the web JPEG compression, but it looks like you’re doing some clever mirroring to blend across the intersection seams? It looks good and does a really nice job of maintaining the apparent detail of realistic, noisy assets.

I know you’re still developing it, but I had some questions if you don’t mind:

  1. Are there any artifacts when a foreground object (like a character) moves in front of a blended seam and obstructs the pixels behind?

  2. I’m assuming you’re using depth to blend and mask the effect; are there any artifacts if a transparent object or effect, like glass or smoke, passes in front of a seam?

  3. What’s the performance like, or how many ms is the post process adding to your test scenes?

  4. Do the screenspace blend sizes scale based on depth so they don’t appear to change width as the camera moves near or far? I’m guessing this is what the “Size” and “MinSize” values are for, with “Size” being the largest a blend can be up close while “MinSize” is for objects that are farther from the camera?

  5. You mentioned a complex actor needs to process each mesh, how heavy or slow is this process? Is it processing at runtime for actors within view or is it a pre-process baking step that needs to be run beforehand? I’m wondering about performance or build times for large levels or open worlds in a full production.

This looks great so far, and I’m really excited to see it progress!

Will MeshBlend work on static / skeletal meshes attached to static / skeletal meshes? This will be awesome for entity customization.

The basic effect is mirroring across the seams. All the stuff that makes it look good is the hard part. :sweat_smile:
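
To illustrate the basic idea, here is a minimal sketch of “mirroring across the seam” in plain C++; this is not the actual shader, just the concept in 1D with made-up names:

```cpp
// Minimal, hypothetical sketch of "mirroring across the seam": a pixel close
// to the seam borrows the colour it would see if the neighbouring surface
// continued past the seam, then lerps toward it. The real post process works
// in screen space with depth and per-object info.
#include <cmath>

struct Color { float r, g, b; };

static Color Lerp(const Color& a, const Color& b, float t)
{
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// x         : pixel position along a scanline
// seamX     : where the two objects meet on screen
// blendSize : half-width of the blend region in pixels
// sample(x) : colour lookup at a position (caller supplied)
template <typename SampleFn>
Color BlendAcrossSeam(float x, float seamX, float blendSize, SampleFn sample)
{
    const float dist = std::fabs(x - seamX);
    if (dist >= blendSize)
        return sample(x);                      // too far from the seam: untouched

    const float mirroredX = 2.0f * seamX - x;  // reflect the position across the seam
    const float t = 0.5f * (1.0f - dist / blendSize); // 0.5 at the seam, 0 at the edge
    return Lerp(sample(x), sample(mirroredX), t);
}
```

The core trick is reflecting the sample position across the seam and fading the mirrored sample out with distance; keeping that looking good with real depth buffers and noisy assets is where the work goes.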

  • Are there any artifacts when a foreground object (like a character) moves in front of a blended seam and obstructs the pixels behind?
    As with all screen-space effects, it’s not perfect. But the blend fades when occluded by a foreground character. It’s subtle, but it’s there; a bit like other screen-space effects such as SS reflections or AO/GI. You don’t really notice it on smaller stuff like grass, props, etc.

  • I’m assuming you’re using depth to blend and mask the effect; are there any artifacts if a transparent object or effect, like glass or smoke, passes in front of a seam?
    The effect happens before translucency, so water/glass/etc. works fine.

  • What’s the performance like, or how many ms is the post process adding to your test scenes?
    Currently sits at around 0.45 ms at 2560x1440.

  • Do the screenspace blend sizes scale based on depth so they don’t appear to change width as the camera moves near or far? I’m guessing this is what the “Size” and “MinSize” values are for, with “Size” being the largest a blend can be up close while “MinSize” is for objects that are farther from the camera?
    The size scales with depth * screen size * FOV, so it stays consistent regardless of render resolution or FOV (see the sketch after this list).
    MinSize is an artistic setting that keeps the blend size from shrinking for a given distance. At 2 it will hold the same screen-space size up to 2x further away. It helps small pebbles that look slightly blurry up close stay a bit blurry at distance.

  • You mentioned a complex actor needs to process each mesh, how heavy or slow is this process? Is it processing at runtime for actors within view or is it a pre-process baking step that needs to be run beforehand? I’m wondering about performance or build times for large levels or open worlds in a full production.
    The activator runs at runtime and works on a processing budget. The current default budget is set to 0.3 ms.
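
Roughly, the depth/FOV-aware sizing works like the sketch below. This is illustrative only: the exact formula in the plugin is different, and my reading of MinSize here is an assumption, not the shipped math:

```cpp
// Hypothetical sketch of depth/FOV-aware blend sizing. Illustrates why a
// width driven by depth * screen size * FOV stays visually consistent across
// resolutions and FOVs, plus one possible reading of MinSize.
#include <algorithm>
#include <cmath>

// Screen-space width (in pixels) of a world-space size at a given view depth.
// Standard perspective projection: pixels = worldSize * H / (2 * d * tan(fov/2)).
float ProjectToPixels(float worldSize, float viewDepth,
                      float renderHeightPx, float verticalFovRadians)
{
    const float pixelsPerWorldUnit =
        renderHeightPx / (2.0f * viewDepth * std::tan(0.5f * verticalFovRadians));
    return worldSize * pixelsPerWorldUnit;
}

// One interpretation of MinSize (an assumption, not the plugin's formula):
// hold the blend at its close-up width until the pixel is MinSize times
// further away than a reference distance, then let it shrink normally.
float BlendWidthPixels(float worldSize, float viewDepth, float referenceDepth,
                       float minSize, float renderHeightPx, float verticalFovRadians)
{
    const float clampedDepth = std::max(viewDepth / minSize, referenceDepth);
    return ProjectToPixels(worldSize, clampedDepth, renderHeightPx, verticalFovRadians);
}
```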

I’ve partnered with a game company that is using it, so it’s getting battle-tested on a real open-world AA game with stupid amounts of actors. Overall I feel the plugin is coming along nicely, both in terms of performance and fidelity! :blush:

It currently works with any static mesh in the scene: Landscape -> Mesh, Mesh -> Mesh, etc.

I haven’t added support for skeletal meshes yet, but I will later.

I’m very excited for this plugin. I do a lot of procedural runtime mesh kitbashing with characters, creatures, weapons, and props. There are hard seams and creases where the meshes intersect, so I don’t get that “organic” singular-mesh look. Additionally, mesh pieces/parts could be easier to manage if merged into a single mesh at runtime.

Cool creations you have! :star_struck:

I had to do a little test to see how it would look. :sweat_smile:

That’s amazing. I did not anticipate the meshes moving and blending in real time. This suggests the meshes are not merged together, and that’s awesome. This will allow for more flexibility, as I also have to consider (unblending?) for deconstruction/destruction in my games.

Cool creations you have!

Thank you so much! I’m currently only testing with one layer, but more elaborate multi-layer assemblies are planned.

I added a setting to control how much a smaller size will bleed over onto a larger size.
So for rocks (medium) on the ground (large), the sand gets less of the rock blended onto it.
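
Conceptually it’s just a factor on how far the smaller category’s blend is allowed to reach into the larger one. A minimal sketch, with names and formula of my own for illustration:

```cpp
// Hypothetical sketch of the "bleed over" control: when two blend categories
// meet, the smaller category's blend only reaches a fraction of the way into
// the larger one. Not the plugin's actual code.
#include <algorithm>

struct SeamSides
{
    float smallerBlendPx; // blend width of the smaller category (e.g. rocks)
    float largerBlendPx;  // blend width of the larger category (e.g. landscape)
};

// bleedOver in [0, 1]: 0 = the rock never blends onto the sand,
// 1 = it blends a full blend-width onto the sand.
float BleedIntoLargerPx(const SeamSides& seam, float bleedOver)
{
    return seam.smallerBlendPx * std::clamp(bleedOver, 0.0f, 1.0f);
}
```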

This is a real game changer! I was developing something like this a decade ago for my own WebGL engine, but this looks superb; it makes everything look so soft and natural. Is this post process searching for edge overlaps or intersections with blending kernels, or do you need to do some material setup, merging, etc.? What about performance? Things like this can be done but are usually processing-heavy. For movie or cinematic rendering it doesn’t matter, but while gaming, I prefer fidelity over little details.

The effect is a post process that blends based on the objects’ intersections. I add a single parameter to the material so that it can transmit the object’s “id” at runtime; otherwise I wouldn’t know where one object starts and another ends in the post process.
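
To make that concrete, here is a tiny illustrative sketch of why the id matters: once every pixel carries an object id, the post process can find seams simply by comparing neighbouring pixels. This is just the concept, not the plugin’s code (which passes the id through a material parameter rather than a buffer like this):

```cpp
// Hypothetical sketch of seam detection from per-object IDs: a pixel is
// "on a seam" when a neighbouring pixel carries a different ID.
#include <cstdint>
#include <vector>

struct IdBuffer
{
    int width = 0;
    int height = 0;
    std::vector<uint32_t> ids; // one object ID per pixel, row-major

    uint32_t At(int x, int y) const { return ids[y * width + x]; }
};

// True if any 4-connected neighbour belongs to a different object.
bool IsSeamPixel(const IdBuffer& buf, int x, int y)
{
    const uint32_t id = buf.At(x, y);
    if (x > 0              && buf.At(x - 1, y) != id) return true;
    if (x < buf.width - 1  && buf.At(x + 1, y) != id) return true;
    if (y > 0              && buf.At(x, y - 1) != id) return true;
    if (y < buf.height - 1 && buf.At(x, y + 1) != id) return true;
    return false;
}
```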

It’s designed and optimized for gaming, but I have a cinematic version as well. It uses about ~0.50 ms at 2560x1440.

Been working on a little playable showcase demo of the plugin.

Showcase level is progressing :blush:
Trying to fill it with example use cases.

This is very cool and something I would defo buy. =)
