I’m trying to leverage occlusion culling and static lighting on static mesh actors to maintain frame rates around 60 fps. The focus of my content is purely the environment, not gameplay, so I’m working to maximum resolution in the geometry and textures. Normal maps will do the heavy lifting on higher-frequency detail, but preserving silhouettes is also important. My photogrammetry software lets me export a large chunk of a set as any number of smaller FBX parts, based on a maximum vertex count per part. There’s also control over the number and size of textures for the chunk, such that, say, 100 mesh parts might point to 10 or 20 4K textures (which makes me wonder whether 2-5 8K textures, or one 16K texture, wouldn’t be better).
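For reference, here’s the back-of-envelope math I’m using to compare those layouts, a minimal Python sketch assuming BC1-compressed color at roughly 0.5 bytes per texel (BC5 normal maps or BC7 would be about 1 byte per texel), with about a third extra for the mip chain:

```python
# Approximate VRAM for competing texture layouts. Assumes BC1 color
# compression (~0.5 bytes/texel); BC5/BC7 would roughly double this.
# A full mip chain adds about 1/3 on top of the base level.

BYTES_PER_TEXEL_BC1 = 0.5
MIP_OVERHEAD = 4 / 3

def texture_mb(resolution, count, bytes_per_texel=BYTES_PER_TEXEL_BC1):
    """Approximate VRAM in MB for `count` square textures at `resolution`."""
    base_bytes = resolution * resolution * bytes_per_texel
    return count * base_bytes * MIP_OVERHEAD / (1024 ** 2)

for label, res, n in [("20 x 4K", 4096, 20),
                      ("5 x 8K",  8192,  5),
                      ("1 x 16K", 16384, 1)]:
    mtexels = n * res * res / 1e6
    print(f"{label:>8}: {texture_mb(res, n):6.1f} MB, {mtexels:6.0f} Mtexels")
```

Twenty 4K and five 8K textures carry the same texel budget, while one 16K carries only as much as sixteen 4Ks. The bigger win from consolidating seems to be that fewer textures usually means fewer materials, and draw calls scale with the number of visible mesh sections, one per material slot per mesh. The caveats I’m aware of: 16384 is the maximum texture dimension on most desktop GPUs/APIs, and very large textures stream less granularly unless virtual texturing is in play.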
I’ve tested the workflow by exporting two big related chunks of over 100 FBX parts each, one with five 4K textures, one with ten. Watching the stats, I see frame rates of 70-110 fps and, depending on whether the camera is looking dead into a wall or taking in as many walls, ceiling, and floor as possible, between 20 and 160 mesh draw calls, with a very occasional spike to 250 or so. It seems the strategy of leveraging occlusion culling, balanced against not overloading the CPU with a draw call per mesh part, is behaving well. I’ll implement Level Streaming Volumes as well.
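To frame the draw-call side, here’s a minimal editor-Python sketch (assuming the UE4-style unreal.EditorLevelLibrary; I believe UE5 exposes the same calls on unreal.EditorActorSubsystem) that counts material sections across the level’s static mesh actors, since each visible section is roughly one draw call per pass before culling:

```python
# Rough draw-call ceiling for the loaded level, run from the editor's
# Python console. Each material section of each static mesh is roughly
# one draw call per rendering pass before any culling is applied.
import unreal

actors = unreal.EditorLevelLibrary.get_all_level_actors()
mesh_actors = [a for a in actors if isinstance(a, unreal.StaticMeshActor)]

sections = 0
for actor in mesh_actors:
    comp = actor.get_editor_property("static_mesh_component")
    if comp:
        sections += comp.get_num_materials()

print(f"{len(mesh_actors)} static mesh actors, ~{sections} material "
      f"sections (worst-case draws per pass before culling)")
```

(At runtime, stat initviews reports how many primitives frustum and occlusion culling actually reject, and FreezeRendering lets the camera fly around to confirm culled geometry really drops out.)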
I’ll continue testing this way, pushing against the variables, but I find it hard to extrapolate what a pattern at this stage of development portends for down the road, when the set grows to full scale. My hope is that Level Streaming Volumes will serve that purpose: optimize performance/quality at one level, apply the settings consistently to all levels, and let new levels load and old ones unload accordingly. I’d welcome perspectives on gotchas, how far not to push draw calls, how to balance the number of texture maps against their size, and anything else worth considering that I’ve not touched on for environments with no reused meshes or textures.
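This is the extrapolation I’m attempting, with placeholder numbers standing in for real stat unit readings; the point is that with streaming volumes the working set (resident levels times post-culling draws) is what has to fit the frame budget, not the full set:

```python
# Extrapolation sketch: with Level Streaming Volumes, the cost that must
# fit the frame budget is the working set (levels resident at once times
# post-culling draw calls), not the full-scale set. All numbers below are
# placeholders from the two-chunk test; swap in real `stat unit` readings.

FPS_TARGET = 60
FRAME_BUDGET_MS = 1000 / FPS_TARGET       # ~16.7 ms per frame

draws_worst_case = 250    # measured spike from the two-chunk test
levels_resident = 3       # assumption: current volume plus preloaded neighbors

projected_draws = draws_worst_case * levels_resident

# Per-draw cost is best measured, not guessed: divide the Draw thread time
# reported by `stat unit` by the draw count observed at the same moment.
draw_thread_ms_observed = 9.0             # placeholder reading
ms_per_draw = draw_thread_ms_observed / draws_worst_case

projected_ms = ms_per_draw * projected_draws
print(f"Projected {projected_draws} draws ~ {projected_ms:.1f} ms draw-thread "
      f"time vs a {FRAME_BUDGET_MS:.1f} ms budget at {FPS_TARGET} fps")
```

If that projection overshoots the budget, the levers are the same ones already in play: fewer, larger texture atlases to cut material sections, larger mesh parts to trade culling granularity for draw count, and tighter streaming volumes to shrink the resident set.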