Runtime Mesh Component

Out of curiosity, what are you trying to do? My project that’s driving this component is also sort of a terrain system.

The static mesh doesn’t have adjacency stored; it’s just transient information needed to calculate smooth normals (and actually tessellation as well). The SMC has all of the normals/tangents/tessellation information calculated at cook time in the editor, so it doesn’t have that information at runtime.

You do pose an interesting question though, and it’s definitely possible to set up the normals/tangents calculator to use stored adjacency. I could do a separate helper to get the info, and you’d just feed that to the normal calculator each time you need to recalculate. I’m about to start into tessellation, and I know adjacency is needed for that, so it might be useful for more than one thing. Actually, if I’m going down that path I could easily make it so that you could build the adjacency info yourself, since for many things (like heightmap terrains, which are a giant grid) it would be faster to generate that info yourself during mesh generation than to have a general-purpose function find it afterwards. (I’m not saying it would only support you supplying the info, more that you’d have the option to if you can do it faster.)
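Just to make that concrete, here’s a minimal sketch (not the RMC’s actual API; the struct and function names are made up, and it assumes the usual engine container/vector types are in scope) of what recalculating smooth normals from a stored per-vertex adjacency could look like:

```cpp
// Illustrative only: a per-vertex adjacency list built once for the topology,
// then reused every time positions change and normals need recalculating.
struct FVertexAdjacency
{
	TArray<int32> Triangles; // triangle indices, i.e. Indices[Tri * 3 + 0..2]
};

void RecalculateSmoothNormals(const TArray<FVector>& Positions,
                              const TArray<int32>& Indices,
                              const TArray<FVertexAdjacency>& Adjacency,
                              TArray<FVector>& OutNormals)
{
	OutNormals.SetNumZeroed(Positions.Num());
	for (int32 VertIdx = 0; VertIdx < Positions.Num(); VertIdx++)
	{
		FVector Accumulated = FVector::ZeroVector;
		for (int32 Tri : Adjacency[VertIdx].Triangles)
		{
			const FVector& A = Positions[Indices[Tri * 3 + 0]];
			const FVector& B = Positions[Indices[Tri * 3 + 1]];
			const FVector& C = Positions[Indices[Tri * 3 + 2]];
			// Un-normalized cross product, so larger faces weigh more.
			Accumulated += FVector::CrossProduct(B - A, C - A);
		}
		OutNormals[VertIdx] = Accumulated.GetSafeNormal();
	}
}
```

The point being that the adjacency build is the expensive part, and once it’s stored, a recalculation like this only walks the triangles each vertex actually touches.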

Once you have adjacency then calculating normal/tangents should be pretty quick. What size meshes are you using? Also, if it’s just a plane, why are you using SMC conversions?

Terrain, well, more or less… Well, actually, more than less…

This one is loaded from a save game. So, no, it’s not actual generation speed.

Right now I’m using 128x128 meshes, but I’d love to go up to 256x256. It takes at least 7 seconds to build a 128x128 mesh with the system I’ve got now. BTW, I’m doing it in BP only, as I can’t code in C++.
How I build it - maybe you know a better way.
1: I make an array of vectors - a loop within a loop to create a grid. I offset the quads to form a cube, then normalize and multiply the vectors to create a sphere.
I don’t use the SMC, I create the mesh from scratch. If, though, I could take a 256x256 static mesh and convert it to an RMC, I guess it would be faster. Also, when you import a static mesh there’s a Build Adjacency checkbox. Isn’t that “the thing” we are talking about when we say normal/tangent cook? And if not, when a static mesh is tessellated, does it calculate that on the fly?
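For reference, the grid → cube → sphere step described above looks roughly like this in C++ (purely illustrative; the function and parameter names are made up, and it only builds the positions for one face of the cube-sphere):

```cpp
// Builds a Resolution x Resolution grid on one cube face, then projects it
// onto a sphere of the given radius. Resolution must be at least 2.
void BuildCubeSphereFace(int32 Resolution, float Radius,
                         const FVector& FaceNormal, const FVector& AxisA, const FVector& AxisB,
                         TArray<FVector>& OutPositions)
{
	OutPositions.Reserve(Resolution * Resolution);
	for (int32 Y = 0; Y < Resolution; Y++)
	{
		for (int32 X = 0; X < Resolution; X++)
		{
			// Map grid coordinates into [-1, 1] across the cube face.
			const float U = 2.f * X / (Resolution - 1) - 1.f;
			const float V = 2.f * Y / (Resolution - 1) - 1.f;
			const FVector PointOnCube = FaceNormal + U * AxisA + V * AxisB;
			// Normalize onto the unit sphere, then scale out to the planet radius.
			OutPositions.Add(PointOnCube.GetSafeNormal() * Radius);
		}
	}
}
```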

@Z-enzyme I think that checkbox just builds the adjacency indices needed for tessellation, which I don’t think I can easily use for the normals/tangents, so the SMC probably wouldn’t gain you anything. I should be able to split the normal/tangent calculation apart so that the two parts can operate separately (initial adjacency calculation, or you create the info yourself), and then you just feed the stored adjacency to the simpler calculator. That should help substantially in the cases where you need to recalculate normals/tangents for a specific topology where only the positions changed.

You’ve got an interesting project there, btw! I don’t use Blueprints much except for setting configuration and tiny little things. My system is full C++, so I’m not sure what to tell you about making that faster in BP, other than to make sure to use Nativize if it works for you.

I’ve thought about trying to import real geo data into mine, but I’ve got nice enough generation running through noise that I haven’t felt like attempting it.

This is wonderful!

Can I replicate this from the server to clients? I have built a dodgy system using TCP to send voxel data to clients, but if I could just replicate the mesh and have the server handle all of the heavy lifting…

Yes you can, but you need to create your own replication mechanism.

As @Korcan said, you can replicate the raw meshes, but you’ll have to do it yourself for now. It’s a planned feature for the RMC, but it will likely be a while before I touch it; to be honest, it falls behind most of the other wanted features, since I need several of the others for my own project.

Since you mentioned voxels directly… Replicating the meshes is a valid option, but depending on the type of voxel you’re working with, replicating the voxel data can frequently be smaller and therefore lower bandwidth, since that kind of data is easy to compress. Think about it this way: each FRuntimeMeshVertexSimple is 32 bytes. Naive mesh generation of Minecraft-like blocks needs 4 vertices (128 bytes) + 6 indices (4 bytes each, so another 24 bytes) for each visible face, whereas you can usually represent a block in 1-2 bytes depending on what you’re doing. Smooth voxels are another story and depend heavily on the surface extraction algorithm used.
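In code form, the back-of-the-envelope comparison above is just this (the 32-byte figure is the FRuntimeMeshVertexSimple size mentioned earlier; treat the constants as illustrative):

```cpp
// Rough bandwidth comparison for the naive block-face case described above.
constexpr int32 BytesPerVertex = 32;                // FRuntimeMeshVertexSimple
constexpr int32 BytesPerIndex  = 4;                 // 32-bit index
constexpr int32 BytesPerFace   = 4 * BytesPerVertex // 128 bytes of vertices
                               + 6 * BytesPerIndex; // + 24 bytes of indices = 152
constexpr int32 BytesPerBlock  = 1;                 // a simple block id fits in a byte
// Even before compression, shipping block ids is roughly two orders of magnitude
// smaller than shipping the generated faces of anything but a nearly empty chunk.
```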

Ah, very true. I tried using Unreal’s compression and was able to compress a TArray containing 32^3 uint8s down from 32,768 bytes to 58. I guess this also debunks my idea of having the server rely on RPCs to the client to change blocks in an array.
I still need to think about how to “nicely” replicate the data to the clients though - Unreal’s replication system is still new to me.
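For anyone curious, the compression step mentioned above is roughly this. It’s a minimal sketch using the zlib path that existed in engine versions of that era; the exact FCompression API has shifted between versions, so treat it as an assumption to verify against your engine:

```cpp
// Compress a raw voxel chunk (e.g. 32^3 uint8 block ids) before sending it.
// The receiver also needs the uncompressed size in order to decompress.
bool CompressChunk(const TArray<uint8>& VoxelData, TArray<uint8>& OutCompressed)
{
	int32 CompressedSize = VoxelData.Num() + 128;   // generous upper bound
	OutCompressed.SetNumUninitialized(CompressedSize);

	const bool bOk = FCompression::CompressMemory(
		COMPRESS_ZLIB,
		OutCompressed.GetData(), CompressedSize,    // in: capacity, out: actual size
		VoxelData.GetData(), VoxelData.Num());

	if (bOk)
	{
		OutCompressed.SetNum(CompressedSize);       // shrink to what zlib produced
	}
	return bOk;
}
```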

Recently I tried PolyVox, but I’m not sure if I like using it with Unreal yet… it also adds more replication woes (for me anyway).

@: I just wanted to say thanks for building this plugin. It has helped me immensely. I was struggling to render a lot of simple meshes and update them quickly using UInstancedStaticMeshComponent, which is horrendously slow to update (on the order of whole seconds). I wasted a couple of days trying to solve the performance issues until I stumbled across this thread and thought I’d give it a go. Within a couple of hours I had the system fully swapped out to use your RuntimeMeshComponent, and it now renders and updates 10,000 meshes in real time without even a hiccup in the FPS. I’m sitting on a solid 120 FPS, even with constant updates to the generated mesh. Thank you again so much. I really hope you have the time to continue to refine this amazing plugin.

The only thing I’ve had a problem with is that SetMeshSectionVisible() doesn’t seem to work correctly. It works the first time I call it when creating the mesh, but after that it doesn’t work any more. I hooked it up to a keyboard key so I could toggle the mesh section’s visibility, but it doesn’t do anything. The mesh section just stays set to whatever visibility it had when I first created it. Are you supposed to update the whole mesh section between calls to SetMeshSectionVisible() or something like that? I’ve resorted to just calling SetActorHiddenInGame() on the whole actor, since I only have one mesh section anyway.

I wasn’t aware of a problem with SetMeshSectionVisible; I will look into that. Hiding the actor works, but there’s a good/bad side to that depending on how often you do it. When the actor is hidden it will remove the mesh from the GPU, saving VRAM, but if you change that visibility too frequently then you’ll continually resend the mesh data to VRAM.

I still expect to push a pretty good size update within probably 2 weeks, so it’s definitely still being worked on! Glad to hear it was useful!

Great. Looking forward to the update. I have since been converting more of my UInstancedStaticMeshComponents to URuntimeMeshComponents and the performance gain is amazing.

The only hard part is that I have to hand-build the meshes by laying out vertices and triangles in code, instead of being able to import a static mesh and use it as with UInstancedStaticMeshComponent. I’ve looked into possibly reading the vertex and triangle information from a static mesh and then duplicating it over and over again as needed in the URuntimeMeshComponent. However, it appears as though that can only be done by importing specific PhysX libraries, which I’d prefer to avoid doing. It’s not the end of the world, as my meshes are fairly simplistic.
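For what it’s worth, one way people read a static mesh’s geometry without touching PhysX is to go through its render data. This is only a hedged sketch (member names are from the 4.12/4.13-era API and have moved around since), and the mesh needs “Allow CPU Access” enabled for it to work in a packaged build:

```cpp
// Copies LOD0 positions and indices out of a static mesh's render data.
// Note this reads the render mesh, not the original import data.
void CopyStaticMeshGeometry(UStaticMesh* Mesh, TArray<FVector>& OutPositions, TArray<int32>& OutIndices)
{
	if (!Mesh || Mesh->GetNumLODs() == 0)
	{
		return;
	}

	const FStaticMeshLODResources& LOD = Mesh->RenderData->LODResources[0];

	const int32 NumVerts = LOD.PositionVertexBuffer.GetNumVertices();
	OutPositions.Reserve(NumVerts);
	for (int32 VertIdx = 0; VertIdx < NumVerts; VertIdx++)
	{
		OutPositions.Add(LOD.PositionVertexBuffer.VertexPosition(VertIdx));
	}

	FIndexArrayView Indices = LOD.IndexBuffer.GetArrayView();
	OutIndices.Reserve(Indices.Num());
	for (int32 Idx = 0; Idx < Indices.Num(); Idx++)
	{
		OutIndices.Add(Indices[Idx]);
	}
}
```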

However, there is one problem I’ve come up against that I haven’t been able to find a workaround for, and I’m afraid URuntimeMeshComponent may not support it. When creating a regular static mesh in 3ds Max, for example, you can specify a MaterialId for each face of the mesh. This information is then imported into UE4 (in the FBX file). So if your static mesh uses two MaterialIds, for example, then when you view that static mesh in the UE4 editor it has two material slots. You can specify a material for each slot, and the static mesh will apply those materials to its faces based on the MaterialId. This works great with UInstancedStaticMeshComponent, and I’ve been using it extensively to texture faces of the mesh differently. However, URuntimeMeshComponent doesn’t appear to support this. The FRuntimeMeshVertexSimple struct doesn’t have any option to pass in an array of int32 MaterialIds (i.e. one for each face). Is this a known limitation of URuntimeMeshComponent or just something that hasn’t been thought of yet?

What I find interesting is that URuntimeMeshComponent has the following function: virtual void SetMaterial(int32 ElementIndex, UMaterialInterface* InMaterial). Note that it takes an int32 ElementIndex, which is the index of the material slot (i.e. MaterialId) to set the material for. So the base class that URuntimeMeshComponent derives from must support this feature? Would it be possible to pass in an array of int32 MaterialIds in the FRuntimeMeshVertexSimple struct and apply it to the base mesh so it knows which material slot to use for which face? I really hope so! :o

OK, so I worked it out. For anyone else that comes along wondering how to do this, I eventually worked out that SetMaterial() actually defines the material for each mesh section. Therefore, if you want a runtime mesh with two materials, then you create the first mesh section and set its material in ElementIndex 0. Then create the second mesh section and set its material in ElementIndex 1.
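In code that workaround looks roughly like this (the vertex/triangle/material variable names are placeholders, and the CreateMeshSection arguments are abbreviated compared to the real overloads):

```cpp
// One mesh section per material: the section index doubles as the material
// slot (ElementIndex) you pass to SetMaterial.
RuntimeMesh->CreateMeshSection(0, GrassVertices, GrassTriangles, /*bCreateCollision=*/ true);
RuntimeMesh->SetMaterial(0, GrassMaterial);

RuntimeMesh->CreateMeshSection(1, RockVertices, RockTriangles, /*bCreateCollision=*/ true);
RuntimeMesh->SetMaterial(1, RockMaterial);
```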

@: I believe I’ve found some sort of bug to do with shadowing. If you create a runtime mesh in the construction script of an actor blueprint, then it casts shadows fine. However, if you create the runtime mesh in BeginPlay (or anywhere else for that matter), then it no longer casts any shadows. It will receive shadows OK, but just not cast them any more. Below is my very basic blueprint showing what I’m doing. I’ve tested this same blueprint with a procedural mesh component instead, and shadows work fine in both cases. It is just with the runtime mesh component that it won’t cast shadows if the mesh section is not created in the construction script.

Sorry about not getting back to you. Been swamped in class work, and just now really getting back into work on the RMC.

First, you are correct on how the materials/sections interact so yes just align the section id with the material id when you want multiple mesh sections with different materials.

Second, I will check into that bug. I was able to reproduce it, so I will look through it soon.

@: I believe I’ve found another bug. It seems like the runtime mesh isn’t updating the nav mesh like a procedural mesh does. I created an actor with a runtime mesh using the same blueprint shown in the picture from my earlier post and then created a second actor using a procedural mesh, again using the same blueprint. I then added a nav mesh bounds volume to the level and set it to dynamically update. When you drop the actor with the procedural mesh into the level, everything works correctly. The nav mesh updates and takes the procedural mesh into account. However, when you drop the actor with the runtime mesh into the level, the nav mesh does not update at all. Even when you force it to update, it doesn’t take the runtime mesh into account.

I’m curious how one would use the packed vertex arrays method described on the documentation site. I see your example with the 640K waves, which looks great, but I’m wondering how that was done in terms of the containers that were used. I have a similarly sized container and a simple way of splitting it into sections, but I don’t understand how I could pack all the vertices into one container and only update the vertices of the section in question. Any chance I could see the wave example’s code?

Well that’s strange… I don’t see anything special in the PMC to support that, and the collision in the RMC is almost identical to the PMC’s until I get the upgrades done. I’ve not had much experience with the navmesh so I’ll try to take a look at it here soon in my bug fixing pass.

If you change the mesh later do you have to do this again?

While there’s not much to these examples (and I haven’t made absolutely sure they work with the latest version quite yet; I will do that in the next few days though), they do contain everything I used in those images.
https://github.com//UE4RuntimeMeshComponentExamples
The wave specifically is here…
https://github.com//UE4RuntimeMeshComponentExamples/blob/master/Source/RMC_Examples/Actors/AnimatedTerrain.cpp
The Generate() function sets up the mesh, including the full vertex information as well as the index buffer; then in the Tick function it recalculates the positions and updates them alone. There’s a similar way to update the rest of the vertex information using BeginMeshSectionUpdate() and EndMeshSectionUpdate(), and you can also still use UpdateMeshSection like the PMC, but it was adapted to work with the packed vertex.
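The general shape of that generate-once/update-in-Tick pattern is below. It’s a loose sketch: the class and member names are invented, and the UpdateMeshSection call is abbreviated compared to the real overloads (the exact position-only update calls are in the linked AnimatedTerrain.cpp):

```cpp
// Vertices (FRuntimeMeshVertexSimple) and the index buffer were built once in
// Generate(); only positions change per frame, so only the vertices are resubmitted.
void AWaveTerrain::Tick(float DeltaSeconds)
{
	Super::Tick(DeltaSeconds);
	Time += DeltaSeconds;

	for (int32 VertIdx = 0; VertIdx < Vertices.Num(); VertIdx++)
	{
		FVector& Pos = Vertices[VertIdx].Position;
		Pos.Z = WaveHeight * FMath::Sin(Time + Pos.X * Frequency); // only Z moves
	}

	// Topology, UVs and colors are untouched, so the section isn't rebuilt.
	RuntimeMesh->UpdateMeshSection(0, Vertices);
}
```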

I’m pretty sure there is still some sort of bug going on here. Take a look at the following screenshots. The first one shows a blueprint actor with a PMC, the second one shows a blueprint actor with an RMC. The final screenshot shows the result when both of these are placed in the level. As you can see, the nav mesh is not being affected by the RMC. I’m not sure how the RMC could be registered any differently than what is being done with the PMC.

This component is amazing! Thank you. I’ve figured out how to update the positions and vertex data separately, but how do I update the collision? Is collision only updated through the convex collision stuff? I’m new to all this, so pardon the perhaps obvious questions. If I’m updating the procedural mesh around the player on the fly, how do I update the collision too? I see the convex collision stuff, which seems to be for procedural meshes that are moving, but what about static meshes that you’re generating on the fly? Do I need to assign them collision shapes as I build the local geometry?

Sooo, here is another fan of this plugin.

Am I right in presuming that the plugin is not coming to UE 4.13 anytime soon since Epic Games needs to merge one of the features into their engine source?
Is there any ETA when this might happen? Or is this still clouded in the dark?

Would it be possible to release a 1.9 version of the plugin, where you exclude that feature, so that we might run it with 4.13 as well?

First off, sorry everyone about the delayed response and the way overdue v2. I keep getting slammed with work (the house is torn apart right now due to water damage) and classwork at the same time for large course projects… I will follow this post with another on the current standing of the RMC and the plan.

OK, that makes sense, but I’m wondering why wilber is still having issues.

I will try to look into this here soon, trying to finish up collision work for v2, but keep getting slammed with life events and classwork.

If you enabled collision when you called CreateMeshSection, it retains that setting and updates any time it sees the positions update, or at least it should. The convex collision is really only useful if you want moving objects.
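So in practice that’s just the following (variable names are placeholders and the arguments are abbreviated, as in the earlier material example):

```cpp
// Enable collision when the section is created...
RuntimeMesh->CreateMeshSection(0, Vertices, Triangles, /*bCreateCollision=*/ true);

// ...then later, after moving vertices, a normal section update should refresh
// the collision from the new positions as well.
RuntimeMesh->UpdateMeshSection(0, Vertices);
```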

The plugin is coming to 4.13 (in fact, you can use it now if you manually install it to either the engine or your project). Due to the aforementioned reasons, I hadn’t actually noticed that 4.13 was out and as such hadn’t asked Epic to update it to 4.13 until today, so it might take them a bit to work that through. Currently v1.2, which is what’s on the marketplace, should be what’s updated to 4.13 for the marketplace until v2 is done.

I’ll explain the other in the next post.