Btw, crowds were already possible in 2008 with UE3:
Softbody physics too huh, dang.
We are not part of Epic Games. We are in the same boat as you. Don’t expect A+ support from us. Don’t get mad if you don’t get an easy 1-2-3 answer. Everybody in this thread has taken the time to give you answers. There’s zero reason to be mad, especially considering you haven’t come up with a solution yourself.
There is an abundance of documentation, resources, samples, and demos available to you. The Elemental Demo, the very first demo of UE4, is even available to you for free, so you need to put more effort into finding and solving these things and not expect somebody in a forum to give you a perfect solution on demand. There is no magic “make game” button.
And a third thing: Unity and Unreal are two completely different game engines. The way things are done in Unity is different from how they’re done in Unreal, and vice versa. Don’t expect a 1:1 correlation between the two engines.
As regards expectations: this isn’t UDN. It’s more like the indie ghetto for Joe-the-Fortnite-streamers. So feeling entitled or expecting complete solutions is unrealistic. You’ve been here since 2015, so you must have seen the falloff in engagement/support on here from 2014-2016 to now. The forums are on life support. Most posts get ZERO replies.
Most senior or super-experienced devs (those with lots of badges) left ages ago. Who can say for sure why. But many posters who create threads now just expect others to make their game for them, while giving little or nothing back. That’s killing the forums even faster. Stakeholders at Epic don’t seem to care either sadly. Will UE5 change any of that? Who knows.
That’s either Detour or RVO. Both are already available in the engine for you to implement.
This is also interesting to check out GDC Vault - Instancing and Order Independent Transparency in Total War: THREE KINGDOMS (Presented by Creative Assembly with Intel)
They explain there how LODs work in Total War.
But the guys are generally right. You will need to use instanced static meshes with vertex animation for best performance, then switch to skeletal meshes with low tri and bone counts, and swap those for better skeletal meshes when close. This will help you with draw/GPU time optimization.
As for game thread performance (this is what DOTS handles in Unity), I don’t think you will be able to use Recast navigation for thousands of agents. For that you will probably need something less precise but with better performance. The same goes for AI: behavior trees are not built for thousands of enemies, so you might need to come up with something faster (for example, shared AI logic for multiple characters). For thousands of enemies you will also need to replace the character movement component with something custom, likely less precise but better suited for RTS. For skeletal animation performance there is already Animation Sharing.
For RTS it is mostly about hiding imperfections and using hacks, which might not fit the general out-of-the-box integration an engine should provide. But of course, there is always room for improvement.
Niagara is internally close to how DOTS works, which is also why it can handle simulating large numbers of particles.
This plugin could help you with vertex animations.
Yes, like you said I’ve been here for 5 years I know what to expect, I know the community and I understand very well.
Also not mad, it’s a figure of speech. I got a little pissed when Arkiras assumed, like many others do whenever a problem comes up, that everyone who uses UE makes a first-person/third-person/open-world game, in which case crowds are technically easier, or more straightforward as an approach, given culling, LODs, and the many other tricks one can use, even degrading AI/nav mesh in the distance.
Its so much easier to solve so many problems when you have a camera distance.
But no hard feelings, Arkiras and I are now best friends, inseparable. I can already see him/her running towards the sunset on the horizon.
From the people I spoke to at a senior level, my concerns about this part of this big engine are close to the truth; they all admit there is a fundamental issue here with how the engine deals with instances of skeletal meshes. They said yes, there is a problem in the engine out of the box; the engine is not built with this in mind as a basis, which is why you need to build your own system on top of it.
They didn’t say “you are doing something wrong and you should do X.” They said “yes, there is a problem there; if you want a solution, pay up.”
I just thought, looking at Unity, they seem to have a fighting chance without going too deep.
I’ve read and seen most of what the community has suggested here so far before (except the UE3 one I hadn’t seen that before), I suppose I was looking to find a hint or an answer for something I didn’t look into before.
But thanks again for the engagement guys.
@Arkiras The example you showed with the 300 skeletons above, do you have any special settings enabled/disabled or changed in the project settings? I also assume you are using the character actor class for the units.
As I recall, nothing was changed. From what I remember this project was originally made for 4.22, I opened it up in 4.25 to take that screenshot.
I’m using the built-in character class, with the only change being that the movement component set to use ‘Controller Desired Rotation’ so that the units don’t snap-turn. The behavior tree task selects a point within 15 meters to run to, but increasing the distance doesn’t have any measurable impact on performance (tested up to 150 meters).
The characters have some additional logic for grouping up in formations, but I never finished implementing it so it is unused in that screenshot.
By far the most common bottleneck when running large numbers of units is driving the skeletal animation. But depending on your characters, your bottleneck might be different. For example, if you were to try this with the Paragon characters (assuming equal vertex count) you most likely won’t get the same number of characters, simply because they have a much more complicated animation graph and, maybe more importantly, they have 14 materials each and you’ll blow up your draw calls.
In other words: Content that isn’t designed to scale well, won’t scale well.
If you didn’t want people to assume that, you probably should not have linked a video demonstrating a crowd from a horizon view. Personally I still think this is irrelevant, I would guess that most (if not all) 3D strategy games use LODs, though maybe not as many.
Yes well that’s really the whole problem here isn’t it? You started with the fundamental assumption that Unity is doing something special, but never attempted to describe what that was. So how do they do it? For something official, let’s take a look at the animation instancing blog post they made: Animation Instancing - Instancing for SkinnedMeshRenderer | Unity Blog
TL;DR: They write the animation to a texture so they can pass it to the GPU and read it in the vertex shader. Sounds really familiar… There’s a common name for that in the Unreal community: Vertex animation textures, the same thing you’ve been repeatedly told in this thread by multiple people.
What about the Unity marketplace? This one looks really impressive. How’s that work? Turns out it’s writing the animation to vertex animation textures. Everybody loves vertex animation textures. That mobile video you linked earlier? Reading through the description it looks like it is using vertex animation textures (based on the fact that you have to “bake” the animations), but I am not entirely sure.
Perhaps the tools Unity has available makes the process of working this way more transparent to the user, I don’t know. Admittedly in Unreal right now it is pretty cumbersome and requires a decent technical understanding.
First of all the tool can only affect a maximum of 8192 vertices in a single 2D Texture. This is because the maximum Texture size a Texture can be for DirectX 11 is 8192 pixels in either the X or Y direction. The tool generates the data in the Texture using the following formula.
This is taken from the doc, this means any vertex baking done in UE is stuck to at best 2048 polycount on each character, correct me if I’m wrong.
This doesn’t surprise me at all. Even on instanced meshes shadows can be expensive. The first piece of advice I give everyone trying to optimize their foliage is to turn shadowcasting off on grass.
Not sure how you got 2048 from that, but regardless: in my first reply I linked to a video by Jonathan Lindquist that stores data differently than the regular VAT. I think it should allow you to drive much higher polycounts, but it has no documentation that I could find, and unfortunately it is a maxscript. I don’t use Max, so I’ve never been able to test it myself.
Quote from one of his replies:
It’s 2 pixels per captured bone frame. One pixel for rotation and another for position. So the system can drive high res models without incurring high memory costs.
Houdini may have something similar, it seems to be the go-to tool for tech-art these days. But again… I don’t own Houdini so I can’t help you there.
If you want to see this actually implemented in an example, it is used in the scarab effect in the Niagara content samples.
It’s 255 vertices (310 triangles) with animations stored in:
- 1x27 rest pose texture
- 51x27 (x2) for the wing flap animation
- 15x27 (x2) for the run animation
Thanks for the links. What I’m trying to say is that all the vertex animations I’ve seen in UE4 using this method so far have that limitation, including all the Niagara examples, and the doc mentions it.
They are extremely low poly, extremely. I am asking if this is a limitation of the method.
- 8192 vertices stored per texture to animate one mesh with one movement. One polygon square has 4 vertices, so 8192 vertices divided by 4 brings you down, I assume, to somewhere around 2048 square polygons.
Does this mean you now have to stack up multiple large-res textures to get all the animation movements or blending you want?
I will check the Houdini examples later.
This isn’t how meshes work, for the purposes of rendering the only “polygon” that exists is a triangle. Everything is broken down into triangles.
A quad (square polygon) is just two triangles that share two of their vertices. Every additional triangle you add only adds 1 vertex, because the other two vertices are shared with another triangle.
Vertices are duplicated along UV and shading seams, and a mesh can be forcibly split into separate objects, but the vast majority of the vertices will be shared.
Assuming a single smoothing group, the real upper limit would be around ~7500, depending on how many UV islands you have.
Correct about the shared edges of the other triangles, I missed that when I was dividing. I also understand that we should calculate in triangles, but for the sake of calculation I went with a square poly.
~7500 is still pretty low though, and adding islands can get real messy. It also means one texture is baked per animation, so if you are doing anything more than a run and a stop, or maybe a walk, you are in for one long material texture setup, and then a full stop, because materials can eventually only support a 16-texture-sampler limit.
So making anything close to an animation tree in a material is out of the question in this case.
Yes you are.
You can get 128 unique textures using shared samplers.
Plus you can just store multiple animations per texture. This is what was done in the Niagara crowd video.
Then use the static mesh skeletal script I’ve mentioned above
Ok thanks, will keep looking.