The only benefit you'd get in your case is the ability to instance: many robot arms built from static meshes will perform better than a skeletal mesh with one bone per vertex.
One-to-one, I don't think you'll see any difference.
Static meshes attached to bones should be more performant, since they don't need to evaluate bone weights (even if it's one per vertex), but the difference will vary with the number of robots, arms, etc. You can always spawn many of them: once with skeletal meshes, once with static meshes, and compare the ms.
In 4.22+, auto-instancing should make things even better: static mesh components are automatically instanced if they share the same mesh and material, so it should work across multiple robots in your case.
If you've ever worked with vertex animation baking, you'll know it's very limited. You can't blend animations properly, your frame count and vertex count are hard-bound to the texture resolution, etc.
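To make that texture bound concrete, here's a back-of-envelope sketch (`vat_fits` is a hypothetical helper, assuming the simplest bake layout of one vertex per texel column and one animation frame per row; real baking tools pack differently):

```python
# Vertex-animation bakes store positions in a texture, so its
# resolution caps how many vertices and frames you can encode.

def vat_fits(num_verts, num_frames, tex_width=2048, tex_height=2048):
    """True if the bake fits one texture: one vertex per column, one frame per row."""
    return num_verts <= tex_width and num_frames <= tex_height

print(vat_fits(10_000, 120))  # False: 10k verts overflow a 2048-wide texture
print(vat_fits(1_500, 120))   # True
```

Under this layout, a denser mesh or a longer clip forces a bigger texture, which is exactly the "hard-bound to texture resolution" limit.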
Based on the OP, there would be little difference between the two, since vertex animation is now hardware-rendered without regard for purpose-built solutions; but from a TA perspective, it's best practice to avoid manhandling static meshes directly.
But you would have many more static meshes. A single skin is one “draw call” whereas a series of ten separate rigid articulated static meshes is ten “draw calls.”
(Of course, each rendered mesh is actually many calls for many passes, and there may be cases where the engine can share multiple items in a single issue, but it’s still a consideration.)
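As a back-of-envelope sketch of that trade-off (illustrative numbers and a hypothetical `draw_calls` helper, not measured engine behavior):

```python
# Rough draw-call count: each rendered component costs ~1 call per pass;
# auto-instancing can collapse identical mesh+material components into one.

def draw_calls(robots, parts_per_robot, passes, unique_parts=None):
    """unique_parts=None means no instancing: every component is its own call."""
    if unique_parts is None:
        return robots * parts_per_robot * passes
    return unique_parts * passes  # one call per unique mesh+material, per pass

print(draw_calls(50, 1, 3))                    # 150  - skeletal, one skin each
print(draw_calls(50, 10, 3))                   # 1500 - 10 rigid parts each
print(draw_calls(50, 10, 3, unique_parts=10))  # 30   - fully auto-instanced
```

The gap between the second and third numbers is why instancing matters so much for the rigid-parts approach.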
The only way to know for sure here is to build it and measure it, on the actual hardware and drivers you are targeting.
Because there are limitations.
It’s never been as simple as just making a material shader.
Also, now that tessellation is gone, the base model has to have the right number of tris to start with, which sucks performance- and LOD-wise.
Before, you could use distance-based tessellation in the material to cull tris, and it worked flawlessly mixed with shader animations.
Now you can't.
Automated LODs can actually move very, very differently from your coded movement.
There are other reasons too. Watch the vid.
There are also morphs. You could rewrite the engine to allow static meshes to have morphs…
Indeed, I thought that the 4.22+ auto-instancing would merge the calls if the meshes are the same (e.g. the same reusable modular robotic arm part) and have the same material.
It's still multiple calls vs. just one for a skeletal mesh.
The performance overall would depend heavily on both scene composition and end hardware.
That's also why there's a merge tool for skeletal mesh parts. Too many parts (each of which also has its own skeleton) leads to more calls and worse performance.
Generally speaking, anything you can do in one call on the GPU will be way faster than doing it on the CPU.
And the lower the number of calls, the better the performance.
(This extends to having a single material slot and proper models instead of 20 material slots on a single model.)
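The material-slot point works the same way; a minimal sketch (`calls_for_mesh` is a hypothetical helper, assuming roughly one call per mesh section per pass):

```python
# Each material slot splits the mesh into another section,
# and each section is roughly another draw call per pass.

def calls_for_mesh(material_slots, passes):
    return material_slots * passes

print(calls_for_mesh(1, 3))   # 3  - single slot
print(calls_for_mesh(20, 3))  # 60 - 20 slots on one model
```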
From what I've tried, moving/transforming individual instances of an ISM/HISM component is extremely costly and causes major stuttering - it seems they aren't meant to move at all in a scene, though perhaps there's a specific way it has to be set up and done.
It's some BS stunt, like Apple removing the 3.5mm jack from phones.
Nothing more, nothing less.
Their claim is that Nanite can now handle googolplexes of tris; therefore, just use a terrain that's already heavily subdivided instead of using an algorithm to change it at runtime.
It's not a bad theory. But we all know that in UE5 you already only get .0009 FPS. So it's nothing but a publicity stunt.