Understanding Game Poly Count

I'm currently in my second year of college for game development and design.

I've made two of my own games, so I know the basics of all sides of making a game: prop modelling, scripting, texturing, map layout, etc.

My question is how in the world newer games get away with so many props across their maps. BO3 zombies, for example: Kino is littered with physical debris, the walls are anything but flat, and the small props are really high-res for small props. The detail is in everything, even in places that get zero attention. So why is everything so high poly, and how?

I just have difficulty understanding how it's possible without major performance issues. Hope this made sense. Any help would be greatly appreciated.

Simple: good LODs and proper memory optimization, on top of instancing to cut some corners.

Most modern GPUs handle heavy poly counts better than they handle transparencies.
The material shaders used on items also add some weight, so overly complicated materials on items are not a good idea.

Now, with tons of skeletal meshes, like a zombie horde, you can run into issues similar to what a high poly count causes on a low-end PC. For this, complex material shaders are usually used as a substitute: deforming the mesh through material distortion is faster than animating it with bones. A blatant example, with some YouTube talks you can watch on it, is Abzû.
The game may not be everyone's cup of tea, but it is beautifully engineered from a "get millions of animated things on screen without performance issues" standpoint.

Back in the day, the super buggy Assassin's Creed Unity (the one set in Paris) was all the rage for massive crowds of skeletal meshes.
Here's the relevant talk on how they made that.

The key to handling a large object count in a scene is understanding draw calls. Modern GPUs are very good at rendering tremendous numbers of triangles and have dedicated hardware to do this, so while your poly count matters, what matters much more is how you get those triangles onto the card.

I’m going to simplify things a bit here but the concept is important to understand.

Whenever you want to draw an object, you have to tell your GPU that you want to draw it. We call this a “Draw Call” or a “Draw Primitive Call” (sometimes abbreviated as “DPC”). The speed at which you’re able to render a scene is going to hinge a great deal on the number of times your CPU has to talk to your GPU to do it.

Let’s say, for example, that we want to draw a scene with a ton of spheres in it (729 spheres with a single directional light and an exponential heightfog actor, baked lighting):

If we had to tell the GPU individually every single time we wanted to draw one of these spheres, it would take a while to do it.
“Hey, GPU! Draw a sphere! Here’s the list of vertices and its material. kThxBye!”
“Hey GPU! Here’s another one! Here’s its vertex list and material. Thanks!”
“Hey GPU! Here’s another one! And its vertex list and material.”
… you get the idea. Bad scene.

The answer to this is a process we call Instancing. Instead of telling the GPU each and every time we need to draw one of these spheres, we tell it once, and give it a list of transforms at which to draw it.
“Hey GPU! I need you to draw a bunch of spheres! Here’s the list of vertices for a single sphere and the material applied to it, and here’s the list of locations at which I need you to draw it!”

Now we’ve just dramatically reduced the time it takes to draw that scene because we’ve optimized the slow part of it. Drawing a ton of triangles isn’t a problem - the GPU has dedicated hardware on which to do this and can do it blazingly fast, but telling the GPU what it needs to draw takes time.

In days of yore (before UE4.22), if you wanted to instance your geometry, you had to tell the engine to do this. Usually this meant creating an actor with an instanced static mesh component, or it meant using the Foliage tool to place your actors, even if they weren’t foliage, because the Foliage tool instanced them automatically. In 4.22, however, Epic added automatic instancing, which is a wonderful thing. Now, whenever an object appears multiple times in a scene, the engine automatically bundles all of its instances into a single draw call with a list of transforms.

In the scene above, we’re looking at 729 spheres, representing 1.4M tris, and we’re able to draw them in under 4 ms on a rickety old 970 card. We can see from the Stat RHI output above that it took about 140 draw calls to draw everything in the scene.

If we use RenderDoc to look at the actual list of drawcalls used to draw the base pass, we can see that it handled all 729 spheres with two calls. It drew 512 of them in the first call, and then drew the remaining 217 in the second call. The rest of those 140 drawcalls were some basic housekeeping and a side-effect of the editor still running at the time.

This is how we can draw a scene so fast.

Now, some things to know about managing your draw calls, since you now realize that it’s a more important number to watch than your poly count (though don’t go crazy with your poly count if you can avoid it; good optimization requires you to be efficient everywhere you can be):

  • If an object uses a different material from another object, it’s going to have to be submitted in its own draw call. Everything in a single instanced drawcall must be identical (though each object can have its own location, rotation, and scale).
  • If an object uses more than one material slot, each material slot will add a draw call. That can add up. Use as few material slots as you can get away with.
  • Combining meshes can help with your draw calls. If you have a house you constructed using modular walls, window parts, door knocker, etc., consider combining it into a single mesh so it can be sent to the card more efficiently.
  • Use Stat RHI (render hardware interface) early and often to see what kind of numbers your scene is generating.

There are plenty of other factors that can impact your scene’s performance as well: transparency and overdraw can get rough in a hurry, dynamic lights can be killers in a forward renderer, and shadows are always evil beasts. But if you start by really learning to think in terms of managing your DPCs, you’ll be ahead of the game.

I wrote a bit about this a while ago that covers much of the same stuff we just talked about here: Merging Meshes in Unreal and Why It Matters — Manic Machine

Hope this helps!