Laggy map, profiler file available… please help me understand. The link below is my profiler file. Why, in a room with 100 average draw calls and 500 max, do I only get 40 fps on a GTX 590? It makes no sense…
All of the lights are static baked, no realtime. Almost all geometry is static. Only 1 GPU and it's a GTX 590… does that count as 2 GPUs? Nothing else was running in the background. I even have precomputed occlusion on.
Another thing… it's not just spikes that are the problem… it's a constantly bad fps and ms count. It never gets to a good level, even in the least complex rooms. The profiler data is probably from a spot with 900 or more draw calls.
You have a large number of objects in the scene, sometimes over 1,000, which may be the cause. If you have some simple meshes, consider how you might merge them into larger pieces rather than constructing things out of many separate meshes.
Your GPU is actually two cards put together, so it's only doing half as well as it could, since UE4 can't take advantage of that.
I already merged a LOT together… there's not much more merging I can do. How come I have poor performance even in a hallway with nothing in it? That hallway only has 216 draw calls. Also, Unreal has a tendency to make the max draw call count go way higher when you rotate the camera or move slightly… hence the bathhouse picture… that 1300 number will only be there for a moment, I believe… but the fact that it goes up every time you move a little bit is just odd to me.
Epic's 3,000-draw-call Infiltrator demo runs almost as well as this scene does at 1,000.
For instance, in Epic's scene the Draw time is 0.74 ms, compared to mine at 30 ms… though the GPU is at 66 ms on Epic's. I definitely get a lower fps on Epic's scene as well, but that's to be expected with 3,000 draw calls. And it's not lower by that much… we're talking a 10–15 fps difference with an additional 2,000 draw calls.
Just entering the discussion to learn and try to help.
"You have a large number of objects in the scene, sometimes over 1,000"
Do you mean in the entire world? Because occlusion culling should solve that, right? Otherwise I will think twice before going with a "modular" approach. Some interior pictures obviously don't have 1k objects (I am seeing the screens).
Anything that was modular has pretty much been combined into separate combo pieces with the merge tool in the editor. Besides… most games these days have that kind of draw call count… it's a PC game… a 3.5 GHz i7 can handle 1,000 draw calls. And as you and I both pointed out, performance is bad in a place where there are only 200 draw calls… I expect more than 40 fps in a hallway with a GTX 590… even if it just functions as a GTX 580 in this engine.
Looks to me like it is the occlusion queries, since in each shot the RenderQuery Result is so high. So even if most of the meshes are being occluded, it still had to determine which ones were occluded.
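If occlusion queries are the suspect, a few in-game console commands can help confirm it. These are standard UE4 commands, but exact stat names and behavior can vary by engine version, so treat this as a starting point:

```
stat initviews     // visibility/culling cost breakdown, including how many
                   // primitives were occluded vs. processed per frame
freezerendering    // freezes culling at the current view so you can fly
                   // around and see exactly what was still being drawn
r.HZBOcclusion 1   // switches from per-object hardware occlusion queries to
                   // HZB-based occlusion, which can be cheaper when the
                   // scene has a very large number of objects
```

If `stat initviews` shows a big chunk of frame time in visibility culling, that lines up with the high RenderQuery Result numbers in the profile.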
Are you using anything like a cull distance volume to reduce the burden on occlusion culling? Ideally you need one of those with very carefully tuned settings. I usually start out by doing something like this and then just tweak each category until popping is less obvious:
and so on. Obviously those numbers are picked somewhat at random, but they're not far from what I would try. Once objects get past a few hundred units in size (as measured by the bounds radius) you need to push the distances way back, depending on your scene.
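For reference, a cull distance volume is just a list of (object size → cull distance) pairs keyed on bounds radius. A starting point in the spirit of the above might look like this (numbers purely illustrative, in Unreal units):

```
Bounds radius     Cull distance
   0 -   50            2500
  50 -  200            7500
 200 -  500           20000
 500+                     0    (a cull distance of 0 means never culled)
```

Small clutter gets culled close, mid-size props further out, and anything big enough to be a landmark is never culled at all; then each bracket gets tweaked until the popping stops being noticeable.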
Thanks Ryan, I will try that, or some of it. I have also tried the override precomputed visibility, which helps a great deal, though it becomes very tedious. Most of my problem was on the GPU end, though. Reducing the amount of things rendered by culling more aggressively and removing my jungle of bamboo trees helped a lot. I also lowered quality settings to hit my target fps. I will try more, but overdraw definitely hurt a lot.
Odd, though, that draw ms lowers to around 0.3 or 0.7 when it says "streaming textures" at the bottom right, but it jumps to the same value as GPU ms, which is around 20–35 ms, when it is not streaming.
I would imagine that most profiling oddities like that are not accurate as far as what is going on unless you know the full story. Maybe it means it was stalled waiting on something else for example. It probably doesn’t mean that the true cost is that low.
My map is constantly streaming 500–800 textures… lots of blurriness all over the scene…
Problem is, it's streaming 700 textures in a room with 150 draw calls that looks like it uses maybe 15 textures max. And even if it's counting other things, that still doesn't account for that many textures… and I'm standing completely still in the room without moving just to see if it lowers… it constantly stays at the same high number, never budging… and obviously as I move around my environment everything is a blurry mess. What would cause this? There have been times during development where streaming wasn't that bad, and other times when it's like this for no apparent reason. Any ideas?
First step is to type ‘stat streaming’ and see how far over budget it says you are.
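A couple of related console commands can narrow it down further (again, these are standard UE4 commands, but double-check the exact names in your engine version):

```
stat streaming             // streaming pool size, required vs. available
                           // memory, and how far over budget you are
listtextures               // dumps the currently loaded textures to the log,
                           // sorted by size, so you can spot the offenders
r.Streaming.PoolSize 1000  // temporarily raises the streaming pool to
                           // 1000 MB as a test -- if the blurriness goes
                           // away, you were simply over budget
```

Raising the pool size is only a diagnostic here; if it fixes the blur, the real fix is cutting texture memory, not shipping with a bigger pool.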
Then I would take a look at your level in the content browser, right click it and select “Size Map”. Then you get a visual breakdown of your texture reference situation. You may find a lot more than you realized.
LODbias is definitely a good start. But in general you want to look out for things that can be cut as much as possible, or remade using shared textures somehow. You may find that you have multiple test versions of old textures all being loaded for instance. I have seen things like that on every project I have been on.
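If you go the LODBias route globally rather than per texture, it can be set per texture group in the device profile config. In 4.x-era UE4 this lived in an ini like the following, though the exact file and format vary by version, so treat this as a sketch:

```
; DefaultDeviceProfiles.ini -- illustrative values only
[Windows DeviceProfile]
+TextureLODGroups=(Group=TEXTUREGROUP_World,MinLODSize=1,MaxLODSize=2048,LODBias=1)
+TextureLODGroups=(Group=TEXTUREGROUP_WorldNormalMap,MinLODSize=1,MaxLODSize=1024,LODBias=1)
```

An LODBias of 1 drops the top mip for every texture in that group, which roughly quarters its memory footprint; that buys streaming headroom across the board without touching individual assets.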