Hey, I just updated the Examples to UE4.13 and RMC v2.0. This is just a temporary solution until I get the new examples project done. You can find that here: /tag/v2.0
Unfortunately, creating a static mesh component requires some editor-only modules that can't be linked into a packaged game. Out of curiosity, why do you want to convert them to a static mesh? What files are you talking about sharing between the server/client? If you really don't care about bandwidth, probably the easiest way is to stay within the RMC and use the SerializeRMC() or SerializeRMCSection() functions. With those you set up an FArchive to either load or save and just call that function (it loads/saves based on what the FArchive is set up to do). So you could probably just use a memory archive, serialize the mesh to that, send the data across, and feed the archive back into the RMC on the other side. It's not going to be light on bandwidth, but within a LAN you're probably OK.
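Roughly something like this, as a sketch: I'm assuming SerializeRMC() is callable on the component and just takes an FArchive& (check the header for where it actually lives and its exact signature), and moving the byte array between machines is left to whatever networking you already have.

```cpp
#include "Serialization/MemoryWriter.h"
#include "Serialization/MemoryReader.h"

// Sending side: serialize the RMC into a byte array you can push over the network.
TArray<uint8> MeshBytes;
FMemoryWriter Writer(MeshBytes);
RuntimeMesh->SerializeRMC(Writer);   // saves, because this archive is set up for saving (exact signature assumed)

// ...send MeshBytes across (RPC, socket, etc.)...

// Receiving side: feed the same bytes back through a reader archive.
FMemoryReader Reader(MeshBytes);
RuntimeMesh->SerializeRMC(Reader);   // loads, because this archive is set up for loading
```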
To be sure: can your meshes be saved in the Editor or exported as FBX, OBJ, etc., and then imported again, all at runtime, without the Editor? I need that to be possible. For example: the mesh is created, changed, saved, exported, then imported, changed, saved again, and so on.
I'm not quite sure what you're trying to do, but… UE4 doesn't support importing static meshes at runtime (they're basically un-editable at runtime), which is also why the RMC can't convert to a static mesh at runtime, since the conversion uses the import pipeline. You can load/save meshes to formats like FBX/OBJ with something like Assimp, but there's no built-in way to do this yet.
I was hoping converting to a static mesh would take away the limitations of procedural meshes and/or allow for saving or replication. I'll either serialize or simply share .fbx files or the like between server and client.
With the same data used for making a procedural mesh, you can create an OBJ/FBX/… file via a C++ library like Assimp, which you can then import later. A more direct, existing approach is this: https://www.unrealengine.com/marketplace/-save-system
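To make that concrete, here's a minimal hand-rolled sketch (plain OBJ text rather than Assimp, just to show the idea): the same position/triangle arrays you'd feed a procedural mesh can be written straight out to an interchange format. FFileHelper is the stock engine helper; SaveMeshAsObj is just an illustrative name.

```cpp
#include "Misc/FileHelper.h"

// Write positions and triangle indices out as a Wavefront OBJ file.
// OBJ is plain text: one "v x y z" line per vertex, one "f a b c" line per triangle (1-based indices).
bool SaveMeshAsObj(const TArray<FVector>& Vertices, const TArray<int32>& Triangles, const FString& FilePath)
{
    FString Obj;
    for (const FVector& V : Vertices)
    {
        Obj += FString::Printf(TEXT("v %f %f %f\n"), V.X, V.Y, V.Z);
    }
    for (int32 Index = 0; Index + 2 < Triangles.Num(); Index += 3)
    {
        Obj += FString::Printf(TEXT("f %d %d %d\n"),
            Triangles[Index] + 1, Triangles[Index + 1] + 1, Triangles[Index + 2] + 1);
    }
    return FFileHelper::SaveStringToFile(Obj, *FilePath);
}
```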
Well, static meshes don't have any replication capabilities for the mesh data; they assume the mesh itself already exists with the game install on both server and client. The RMC has support for natively saving itself, but it's a custom format meant to load/save quickly, not a full interchange format like OBJ.
Are there any other limitations of the RMC when compared to the StaticMeshComponent that you’re hoping for?
Can anybody advise me on how to do proper lighting with the RuntimeMeshComponent without taking a performance hit? I am unable to maintain 90 fps in an Oculus Rift with a GeForce 1070 after I turn on dynamic lights and shadows from a single directional light source, in a very simple scene: an apartment with 2-3 rooms, where every room is an actor with 3 sub-components (floor, ceiling, and walls), each of them a single RuntimeMeshComponent. The walls component has many different sections (4 to 30, depending on the room topology), though every section is a flat surface. My meshes are procedurally generated once at startup from a text file and never touched afterwards.
The main limitation is that I have to adjust older code of my own that doesn’t take procedural meshes into account. :rolleyes:
I have implemented replication based on the system I already had, but it's for static objects only atm. I'll look into replication of completely dynamic meshes later; I still have much to learn about the programming side of it, but the idea is pretty solid and very doable.
Have you tried comparing against normal imported static meshes set to movable? Try optimizing such a scene for VR performance first; it's not an easy feat making a game perform well in VR while still looking good.
I’m finding performance with procedural meshes to be surprisingly good but having lots of dynamic lights and objects will always be hard on the GPU and is not necessarily linked to procedural meshes (rather to all dynamic objects).
I appreciate your reply. A similar scene with imported static meshes gives me a steady 120+ fps. My scene is really very simple - just a few rooms (walls, floors and ceilings) and a single source of light (the sky light) - so I'd assume there should be a way to get 90 fps. What I think could be problematic is that every room is currently an actor with a single RuntimeMeshComponent that consists of a large number of sections. I mistyped it in my OP; each RuntimeMeshComponent actually has 40-120 sections. But there are only 4-8 such components in the whole scene. Can this be a problem? Would it be better if these sections were separate actors instead? I absolutely need to be able to apply a material separately to each of the sections, so I either have to make them RuntimeMeshComponent sections or separate actors.
It's usually not great practice to have very large meshes. Here are some things you could try (try them separately, and I'd be glad to hear the results):
- Split into smaller meshes rather than many sections.
- Light with a stationary sun if you aren't already and it doesn't have to move (or at least check what influence it has). A stationary light will still light and cast shadows on movable and procedural meshes, as long as the light itself stays in the same location.
- Make a similar scene without runtime components (stationary directional light + movable meshes). If your framerate is still terrible, look into optimising graphical settings for VR; unbalanced graphical settings are easily capable of tanking framerate on any kind of hardware.
A single sun is what I am using, yep. And with the same number of static meshes set to movable I had way better performance, so I suppose it's something about the number of runtime meshes.
As far as I know, there is no way to have a dynamic array of components in an actor (or is there?), and hence no way to create a single actor with a dynamic array of RuntimeMeshComponents, so the only way to split the big meshes is to split them into multiple actors. That is, from one actor with one component with 100 sections I would move to 100 actors, each with one component with one section. I tried that with UE's ProceduralMeshComponent and didn't notice any performance difference, but maybe it will work better with the RuntimeMeshComponent.
Hi,
Just came across this component, and it seems to be what I need. The PMC should be enough for me, but I can’t seem to figure out how to have it accept materials in the editor.
This is a bit of a newbie question, are there any build instructions for Mac OS? What is the best way to integrate into your project? Cross platform?
This is going to be a long post since I'm replying to multiple people at the same time. (Sorry about the slow response; it's been a combination of the end of classes and then the holidays.)
I do not have any videos, and if I'm honest the docs are pretty lacking at this point, but that will hopefully change shortly! The slicer isn't going to be in that release zip due to licensing restrictions from Epic. I hope to have a better solution for this eventually, but for now the best way to get it is via the marketplace. The next best way is to pull the RMC+Slicer from here and overwrite the RuntimeMeshComponent folder within the Plugins folder with what's in that zip. I'm around the UE4 Discord pretty much all of my available time if you need help. Once you have either that version or the marketplace one, the 'Slice Runtime Mesh' function will appear in Blueprint; you can also access it from C++. Also, you'll need to have a GitHub account linked to your UnrealEngine account and be logged in to view the link above.
I'm aware of a few potential problems with how the RMC operates compared to the SMC that could be the cause of some of this. This is part of the reason I'm currently doing a ground-up rewrite of the core of the RMC. Unfortunately that will likely be a little while coming, but I'll keep you posted on it.
Well that’s not good! Will look into that!
@Mtimofeev
There are some issues I’m aware of like I mentioned above where the RMC is slower than the SMC even for movable objects. I’m working on that with the new version. VR is indeed an interesting, and hard, thing to optimize for and even more so when you can’t take advantage of things like baked lighting. Unfortunately there’s not really a way to get that with runtime generated meshes.
I know I already mentioned there being some known issues, but I'll elaborate a little bit here… I'd also be interested in talking with you on the UE4 Discord, if you're willing, to learn more about your setup and see if there are other things I'm not yet aware of.
Now, one of the main areas I've found where the RMC can be substantially slower is shadows. The SMC pre-computes an optimized index buffer for shadows, which reduces the number of vertices transformed on the GPU. This is something I plan to investigate further, and if it really turns out to be useful I'll likely add support for it in the upcoming version. Next, the SMC uses a single vertex buffer for all of its sections, whereas the RMC doesn't and instead has a vertex+index buffer for each section, which can lead to more state changing. Also, the SMC doesn't have color information in the vertex buffer if it's not needed. The RMC is on its way to supporting that (actually the template vertex will let you disable it, but the engine won't like it since the RMC doesn't handle it correctly yet). There are possibly other things I'm not yet aware of, but I'm still doing a broad and very detailed comparison of the SMC and RMC and how they operate. Some things the SMC does can't really be done in real-time, but some can, and will be.
Now, to make you aware of some other potential pitfalls with your current setup… 40-120 sections isn't by itself a bad thing, but there are some major things to watch for. If you create/update any section marked with Infrequent Updates, it will resync ALL vertex/index data to the GPU. The same goes for changing shadowing on those sections. If they're all Average/Frequent it won't resync everything, but I believe changing materials will force a total resync as well. If the sections are spread out over a large area you'll likely want to break them up, since culling can't remove any of them individually when they all share the same bounding box. Furthermore, each section constitutes a draw call (actually more, if you count shadows or other things like planar reflections or scene captures), so especially in VR you want to minimize that as much as possible while also being mindful of not making so few, large meshes that you bottleneck the other way. While it's a little more complicated than this, if you're CPU bound you probably have too many sections.
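As a sketch of the update-frequency point (the exact CreateMeshSection overload differs a bit between RMC versions, so treat the parameter list and names here as approximate):

```cpp
// Sections you update regularly: mark them Average (or Frequent) so an update
// only touches that section rather than forcing a full resync.
RuntimeMesh->CreateMeshSection(0, Vertices, Triangles, Normals, UV0, Colors, Tangents,
    /*bCreateCollision=*/true, EUpdateFrequency::Average);

// Sections you build once and never touch again can stay Infrequent, but remember:
// creating/updating an Infrequent section (or toggling its shadows) resyncs everything.
RuntimeMesh->CreateMeshSection(1, WallVerts, WallTris, WallNormals, WallUVs, WallColors, WallTangents,
    /*bCreateCollision=*/false, EUpdateFrequency::Infrequent);
```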
You can have multiple components as part of an actor; you can only have one RootComponent, but you can add children to it. You can add them and then query for them, or add them and store the pointers (make sure they're decorated with UPROPERTY if this is in C++). This honestly sounds like part of it comes from some of the known issues, but unfortunately the version that fixes much of this is likely a couple of months out.
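For the dynamic-components question above, a minimal sketch using the standard engine calls (NewObject, AttachToComponent, RegisterComponent), with the array kept in a UPROPERTY so the components aren't garbage collected; ARoomActor and CreateRoomMeshes are just illustrative names:

```cpp
// In the actor's header: keep the pointers in a UPROPERTY so the GC can see them.
UPROPERTY()
TArray<URuntimeMeshComponent*> RoomMeshes;

// At runtime (e.g. in BeginPlay), create as many components as you need.
void ARoomActor::CreateRoomMeshes(int32 Count)
{
    for (int32 Index = 0; Index < Count; ++Index)
    {
        URuntimeMeshComponent* Mesh = NewObject<URuntimeMeshComponent>(this);
        Mesh->AttachToComponent(RootComponent, FAttachmentTransformRules::KeepRelativeTransform);
        Mesh->RegisterComponent();   // makes the component live in the world
        RoomMeshes.Add(Mesh);
    }
}
```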
To add materials, you have to use the '+' button to override them and add them. They only show up automatically if the sections exist in the editor (i.e. created in the construction script).
For Mac OS, I haven't directly tried to build it since I don't have a Mac, but to my knowledge you should be able to add it as a plugin within the project and build as normal. The marketplace version also appears to be built for Mac, so it should build correctly. The marketplace also has the current version, including the slicer, so that might be easier than trying to build it manually.
Forgive me for asking, but… are there any Blueprint-only example projects for this out there? Or is it just not advisable to use it that way?
I can't view the current example project, because it seems to require Visual Studio, and I can't install that again…
Also having a hard time understanding the documentation, without much in the way of examples for the different uses.
Yeah, unfortunately the current example project uses the GitHub plugin directly, which means it needs VS to build the project. I've been slowly working on upgraded docs, but that obviously doesn't help much right now.
You can use Blueprint only; almost everything is exposed to BP. If you want, message me in the UE4 Discord and I can help directly and far quicker than here. Fundamentally it works much like the ProceduralMeshComponent, which doesn't have great documentation either, though.
Hey, any chance of getting an example using the convex collision? I'm trying to take a couple of faces from an RMC that is using complex collision and convert them to a piece that uses the same verts but now has convex collision. I'm doing this with the hope of applying physics.
I've set bUseComplexAsSimpleCollision = false;
I've also set the component to SetMobility(EComponentMobility::Movable);
After I build a list of vertices I call RuntimeMesh->AddCollisionConvexMesh(Vertices);
Yet when I load the level and set this in motion by scaling the component (to test), I don't see any collision in the view options, but I do see the visible mesh of the RMC scale.
After a little trial and error I was able to get the convex collision setup working. I have a couple of questions about optimization and your findings so far. At any one time I spawn around 30-50 actors with an RMC component. Each RMC component holds probably 30-200 triangles, and for each triangle I create a convex collision primitive. I call RMC->AddCollisionConvexMesh(convexCollisionVerts) for each triangle, which ends up being the same number of collision primitives as there are triangles on the visual mesh. This appears to work fine up until I start hitting my peak of 30-50 actors, each containing 30-200 triangles and roughly the same number of convex collision triangle hulls; then I start to see some slowdown. This seems low to me. Any idea where this slowdown may be coming from? To test the movable collision aspect of the convex hulls, I simply scale the actors in and out using a timeline, so all the actors use the same float curve in the timeline.
I just wanted to chime in and congratulate you on this wonderful plugin. Great job @ !
Performance is really good and the plugin is very easy to work with.
Here is a little demo of a project where I'm using the RuntimeMeshComponent to load OBJ files at runtime:
Glad to hear you got it working at least! I was going to say that enabling physics is probably what you missed.
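For anyone else hitting this, a rough sketch of the working setup (using the calls quoted earlier in the thread; SetSimulatePhysics is the standard UPrimitiveComponent call that's easy to forget, and ConvexVerts stands in for whatever vertex list you build per piece):

```cpp
RuntimeMesh->bUseComplexAsSimpleCollision = false;        // use the simple (convex) bodies for collision
RuntimeMesh->SetMobility(EComponentMobility::Movable);

// Each call makes one convex hull out of the batch of verts you pass in.
RuntimeMesh->AddCollisionConvexMesh(ConvexVerts);

// The easy-to-miss part: nothing reacts to physics until simulation is enabled.
RuntimeMesh->SetSimulatePhysics(true);
```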
Now, for a quickie overview on how that works… The convex mesh support there is going to take whatever you feed it in each batch and make a convex hull out of it. This means any concavity gets filled in, but it will technically accept almost anything you feed it. The problem, obviously, is that by filling in concave areas you get some weird physics behaviors. What's really needed is something like convex decomposition, which turns a big complex body into multiple smaller convex shapes. Unfortunately that's usually a very slow process unless you know enough about your input data to generate the shapes directly. See below for the difference between a convex hull and convex decomposition.
Now, the problem with what you're doing is the sheer volume of convex objects. While I'm not entirely sure how PhysX handles compound shapes (a single actor with multiple independent shapes), my assumption is that they're just not checked against each other but are still checked in turn against all the shapes in another actor. This leads to a quadratic growth problem: if both actors have 10 shapes, you have 100 possible combinations, but if both actors have 20 shapes, you have 400. With your example of 30-200 triangles treated independently, that means up to 40,000 possible combinations when collision-checking two of those actors against each other. It gets even worse when colliding against a triangle mesh. PhysX, to my knowledge, uses a BVH tree for static triangle meshes, which is why they're incredibly fast; the problem is that moving/rotating a BVH can be costly to update, so PhysX doesn't support moving triangle meshes. UE4 handles this for normal objects by either importing custom collision shapes or using V-HACD ( GitHub - kmammou/v-hacd: Automatically exported from code.google.com/p/v-hacd ) to decompose the triangle mesh into a small number of convex objects that approximate it. The problem is that this is EXTREMELY slow in some cases (like hours-slow for complex meshes).
Now, not knowing what you're actually working with, I can't say whether there's a way to easily figure out a roughly minimal set of convex shapes that approximates it fast enough to be usable at runtime.
Thanks! It should actually get faster soon with the new version! It will also use substantially less memory in cases where you don't need to update something after the first time (loading a static model like you are, or generate-once-and-use type things), and it will support multiple instances without full memory duplication.