One full body mesh vs modular character?

Hello,

I plan on having many NPCs (maybe 50 or so) on screen at the same time. I have a full-body mesh (20k polys) that I plan to modify to add some variation to the NPCs (maybe 20 variations with slight changes that should not affect the mechanics of the character).

I was wondering if it would be better for me to use a modular body or one mesh for the whole body. My character and the NPCs will not be able to modify their body appearance at any point in the game, but they will be able to equip different types of armor.

Is it better to use sockets to attach the armor over the body, or should I replace the body parts with the armor in a modular character?

This armor equipping would only happen when the player requests it through the inventory, and it is assumed it would not be done “en masse”.

As far as I understand, draw calls would happen with the modular mesh when swapping body parts for armor. Would this also be the case for one mesh with the armor attached to sockets? Also, in this second case, we would be rendering the underlying body that is hidden under the armor, so I guess worse performance?

Thanks.

Are the 50 NPCs on the screen all at the same time?
If so, you may be able to detect a difference in frame time between “single skinned mesh” and “5 different skinned components per character.”
I doubt it will be a big difference, though.
If you only have a handful on the screen at any one time, it’s unlikely you will see any difference.

If it were me, I would go with the modular character approach, but keep the number of components small – 5 or so. If it ends up being a real problem, you can always go back and create unified meshes for each look, later.
But chances are the time it takes to create the game and all the assets will be your real challenge, not some small fraction of a frame time, so I’d go for whatever is easier to create up front, which I think is the modular characters.

I will have 50 characters at the same time, because I plan on adding massive battles. I’ll even try to go higher than that.

I thought that modular characters, for 50 NPCs, would increase draw calls? Or do those only happen when the character swaps the meshes? For example, if I pick up a chainmail from the world and replace the torso and arms (same mesh) with it, would this be only one draw call? Because this wouldn’t happen constantly, only when the player or an NPC changes clothes (which is a few times).

Are there any other costs associated with each method?

With one mesh for the full body and the armor attached at sockets, I imagine that I would have to render the body under the clothes too. So if the body is approx. 15k polys, that’s 15k * 50 = 750,000 extra polys rendered at runtime.

And when we attach the weapons at sockets, aren’t those also draw calls? So it’s the same?

I guess that modular also helps with animating, since there won’t be overlapping geometry. With respect to the number of meshes, how about 5 (torso and arms, legs, feet, hands, head)?

So we would go modular, then? If so, do you know of any good tutorial for rigging modular characters in Blender?

Thanks

Yes, but “draw calls” by themselves aren’t a very big deal.
Especially when using modern APIs like DX12, they are highly optimized to be able to issue tons of “draw calls” with little additional overhead.

So, 5 “draw calls” instead of 1, per character, for 50 characters? That’s 250 instead of 50. Compare that to the 10,000 or more that you’ll see during a single frame, and it’s unlikely to be something that shows up on a profile.

Also, the cost of a “draw call” varies wildly by what kinds of things change between the calls. If all your pieces of clothing use an instance of the same material (just with different parameters), those “draw calls” will have much less overhead than if some of the pieces use a different material/shader.
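To make that concrete, here’s a toy model in plain C++ (not Unreal’s API, all names made up) of why instances of the same master material help: if the render queue is sorted by shader, consecutive draws that share a shader pay for the expensive bind only once.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Toy model: each queued draw records the shader it needs. Sorting the
// queue groups draws that share a shader, so the "bind" cost is paid
// once per shader rather than once per draw.
struct Draw {
    std::string shader;  // e.g. the compiled shader behind one master material
    int meshId;
};

// Count how many shader binds the sorted queue would issue.
int countShaderBinds(std::vector<Draw> queue) {
    std::sort(queue.begin(), queue.end(),
              [](const Draw& a, const Draw& b) { return a.shader < b.shader; });
    int binds = 0;
    std::string last;
    for (const Draw& d : queue) {
        if (d.shader != last) {
            ++binds;
            last = d.shader;
        }
    }
    return binds;
}
```

So 250 draws (5 pieces × 50 characters) that all instance one hypothetical “ClothMaster” material cost one shader bind; mixing in a second shader costs two, and so on.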

Also, I put “draw call” in quotes, because a modern engine like Unreal re-transforms a lot of the geometry, and also sometimes re-renders it for things like culling checks or early Z. And with ray tracing, the mapping between “draw call” and actual rendered geometry becomes even murkier!

I would not create one mesh for the body and then add a bunch of meshes for the armor. That would totally undo the benefit of one mesh for the character. Instead, you’d pre-compute all the variations you’ll be using, and select the right pre-computed combination. But this is only really valuable if you find out that you’re “draw call” limited.
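A minimal sketch of that pre-computed-combinations idea, with made-up names and nothing Unreal-specific: each body+armor combination is merged offline and looked up by an outfit key at runtime.

```cpp
#include <map>
#include <string>

// Sketch only: instead of layering armor meshes over a body at runtime,
// every body+armor combination you ship is baked into one mesh offline,
// and the game just selects the right baked asset by key.
struct MergedMesh {
    std::string assetPath;  // path to the baked single mesh
};

class OutfitLibrary {
public:
    // Register a baked combination (done offline / at cook time).
    void bake(const std::string& outfitKey, const std::string& assetPath) {
        baked_[outfitKey] = MergedMesh{assetPath};
    }
    // Returns the pre-merged mesh for this outfit, or nullptr if it was
    // never baked (in which case you fall back to the modular pieces).
    const MergedMesh* find(const std::string& outfitKey) const {
        auto it = baked_.find(outfitKey);
        return it == baked_.end() ? nullptr : &it->second;
    }
private:
    std::map<std::string, MergedMesh> baked_;
};
```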

Static but oriented meshes for swords, shields, and maybe helmets aren’t quite the same, because they don’t need to do skinning, but, again, you should probably try to make sure they all use instances of the same material to reduce the possible overhead. You might also find that it’s marginally faster to get the world space socket position of each hand, and create a single instanced mesh object that draws all the weapons at once, if you really want to push it (but this is likely to start paying off at “hundreds, or even thousands,” not “50.”)
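Here’s a rough sketch of that single-instanced-draw idea, again as plain C++ bookkeeping rather than a real engine API: N hand-socket transforms collapse into one batch that costs one draw.

```cpp
#include <vector>

// Toy model of "draw all the weapons in one instanced call". Real engines
// expose this as an instanced mesh component; here we only show the
// bookkeeping: N swords become 1 draw with N per-instance transforms.
struct Transform {
    float x, y, z;  // world-space hand-socket position (rotation omitted)
};

struct InstancedBatch {
    std::vector<Transform> instances;  // one per character holding the weapon
    int drawCalls() const { return instances.empty() ? 0 : 1; }
};

// Gather every character's hand-socket transform into a single batch.
InstancedBatch batchWeapons(const std::vector<Transform>& handSockets) {
    return InstancedBatch{handSockets};
}
```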

Sorry, I don’t know a good tutorial for rigging modular meshes in Blender – most of the Unreal character tooling is for Maya.
That being said – apparently they’re building weight painting and rigging into UE 5.3 itself, so maybe that’ll help. We’ll have to see!

Five meshes (head/torso+arms/hands/legs/feet) per character is rather common. If you start to run into performance issues later, you can cheat a bit and combine some of the more frequently worn outfits’ pieces into a single mesh flagged to replace all the gear slots. Like, say you’ve got a huge battle with 40 guards wearing the same uniform. Sticking them in a shared, single-mesh outfit would take some load off the GPU.

With respect to draw calls for textures: do only different textures cause different draw calls? Does changing the parameters of the textures through nodes in Blender (opacity, roughness, color, etc.) within the same material not cause more draw calls?

So if I have many gambesons, I can just use one generic gambeson texture and then modify the parameters for each individual gambeson, and then this would only be one draw call for the material that they all use?

I guess I’ll end up using a modular character. I understand the textures work the same way? How would I do this? One texture for hair, one for face, one for body? If I paint over this, would it be considered a different texture? Let’s say I want to paint a grey color for the skin where the beard would be, would that be considered a different texture?

If this is the case, maybe it would be better to buy some pre-made skin texture and just modify the parameters of it?

Any parameter change will produce what you’d think of as another “draw call.”
A parameter change means “bind a new texture,” but also changing something like a color or scalar parameter.
However, that “draw call” becomes cheaper if the material is the same and the parameters are of the same kind.
Thus, a “draw call” with a material instance that has one texture that’s “normal map, 2048x2048, DXT5 compressed” will be faster if it follows another material instance with the same texture size and format.
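As an illustration of that size/format point (a deliberately simplified model, not how any particular driver actually prices state changes), a texture swap is treated as cheap only when the bound and incoming texture descriptions match:

```cpp
#include <string>

// Simplified cost model: swapping to a texture with the same dimensions,
// format, and MIP setup is a cheap parameter change; a mismatch may
// force more expensive state work on some hardware/drivers.
struct TextureDesc {
    int width, height, mipCount;
    std::string format;  // e.g. "DXT5"
};

bool isCheapSwap(const TextureDesc& bound, const TextureDesc& next) {
    return bound.width == next.width && bound.height == next.height &&
           bound.mipCount == next.mipCount && bound.format == next.format;
}
```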

However, all of this feels a lot like premature optimization. Assuming that you’re building a world with a landscape, tons of trees, shadows, perhaps ray tracing, lumen, nanite, … the difference between 1 and 5 components for 50 characters on screen probably won’t even show up in the profile, and if it does, it won’t be in the top 10.

Yes, this is rather premature optimization, but I want to start doing things well from scratch.

When I said I wanted 50 characters on screen, it was an arbitrary number. I’ll go as high as I can (hundreds if it is possible).

I have been looking into techniques like UV atlas maps and string textures. I believe this would suit me, at least for the characters, since I could have many clothes with many different normals and colors coming from the same texture.

I have seen a guy do it in his devlog:

but he doesn’t give many explanations (or at least, for a noob like me, it is hard to follow). But using his method I could have, as he says, some hundreds of characters at the same time that would only make 1 draw call for their texture material and would have customized appearances and armor. He even says that you could make hits on different parts of the texture have different effects. Of course there would be other things to optimize (movement, AI, etc.).

There aren’t many resources out there explaining atlas texture maps, much less for Unreal. And even less info on string textures, which seem to be better than atlas texture maps.

With respect to Nanite, I thought that was optimized? So if I have many Nanite elements that share an atlas texture map, it will still not be optimized?

Also, with respect to ray tracing and Lumen, sure they are nice, but I am prioritizing performance, so maybe I won’t use them.

“Optimized” is relative. Nanite uses the available CPU and GPU power of modern systems to attempt to deliver a really good art path and a really good runtime smooth LOD experience. If you were to run Nanite on a system that’s 10 years old, it would either fail, or grind slowly. “Optimization” is about reaching the goal you need to reach with a minimum amount of spent resources, where “people time” is just as important a resource as “GPU cycles” or whatever.

“Draw call optimization” was super important on a previous generation of hardware. If you’re using D3D12 as the API (or the console equivalents) then the cost of issuing “another draw call” is very low overhead, if that draw call re-uses the same shader setup, just changing some parameters like “tint color” and “transform matrix” and “texture pointer” – note “texture pointer” means that it’s another texture of the same dimensions, format, and MIP setup. If it’s a texture of a different format, changing that could be more expensive (or not, depending on specific hardware/drivers.)

If you want “the most possible characters,” then clearly that number is more like a thousand. But you have to make a bunch of trade-offs to get there – probably animation is going to be much more of a problem than draw calls, and maybe physics simulation (you don’t want them to sink through the ground, or inter-penetrate.)

But, you can’t solve all problems at once. It’s much better to focus on what problem you’re wanting to solve first, and then have a backup plan for other problems you might or might not run into down the line. Draw call optimization feels a lot like one of the latter, whereas “making enough good-looking characters as a single developer” feels a lot like the former :slight_smile:

I have been considering this myself: how I would populate the map with NPCs while player characters collect various sets of equipment for the body.
What I am currently considering is basically this: NPCs just have complete armour models swapped in as needed on the map. For the player character, I am designing one body type, moulding armour pieces to that particular shape, then porting in all pieces of equipment for that body shape and setting up a controller with an inventory system to swap out items on the character paper doll, which will affect some kind of skill and stats system.
So I’m only setting up one modular character, the player. The body is always the same shape, but the head can be morphed into other faces, so the player can still choose their looks without affecting the body shape used for equipment changes. I intend to fully rig a male & female skeleton to change armour pieces & other equipment, while all other NPCs on the map are ported in complete with gear, and I just associate drops that the player can wear on the player avatar. So all pieces of armour & weapons are tied to one modular character, the player. The NPCs just need representation, unless you plan to have them change equipment as party members; in that case they use the rig you made for the player, and you set AI on it that the player controls.

All easier said than done… I really have no clue how I will achieve this, but I have enough resources to give it a go.
I’m no technician, just someone trying to learn this very complex program; I just want to learn how to make games even if I don’t sell one.
Basically, I’ve been spending most of my time converting resources into FBX format to be used on Epic skeletons. I have to reset skeletons for animations and socket all items to the skeletons, so I have to port over my library from another editor and find out what works & what doesn’t. So far, one character I haven’t optimised (and am still adding files to) is over 200 GB in size, so using high-resolution characters takes up some space; the fact that I ported in the same body 100 times doesn’t help, but I’ll delete them all once I have all the parts fitted properly in Unreal, since you can’t always tell from other editors whether gear fits right until after porting the body shape. Items need to be fitted to the skeleton properly or they port in messed up. In the one editor I’m porting from, if items are not fitted to the body correctly, they port in unattached in different positions, as seen in the pic below


There seems to be a difference between the Fit To Body option in DAZ3D and just parenting an item to the skeleton, which doesn’t work when converting files for export to UE. Any items that don’t Fit To Body in this case must be exported separately and socketed to the skeleton in UE, which is really time-consuming just to add buttons to a shirt that hadn’t been fitted properly.