About a week ago I posted a question on AnswerHub about performance. The game I'm working on at the moment is an open-world RPG, which is of course a huge feat, but we have a decent-sized team. The major struggle all the way through has been performance.

So before we scrap the game and do something lightmap / level based, I was wondering, as a last-ditch attempt, if anyone can weigh in?

Our target specs require our game to run at 60 FPS on a GTX 470 and look decent; if you drop the scalability settings below Medium, the game looks pretty poor. On Epic it looks… well, epic. I've had a lot of performance issues with UE4 in general: the Sci-Fi Corridor demo on a 780M at 1440p runs at 20 FPS on Medium settings, and the Reflections demo, Cave demo and Elemental run even worse. Our game runs about the same, except that our game is massive and uses dynamic lighting / RT shadows, which is a surprise really.

I’ve messed around with LODs, scalability, etc., but it doesn’t really have much of an impact.

Bear in mind I’m from a Unity background, and this is how I’d make a scene “production ready” there: use Umbra to bake occlusion; adjust the number of shadow cascades and the shadow quality (in Unity's scalability settings) to boost performance; scale specific things like the detail resolution for grass planes; change the terrain's pixel error and base map distance if it helps. I’d usually use the likes of static batching, which always helps a LOT, and control billboard distances on the fly. That ultimately brings everything back into order…

Maybe you can do all of the above in UE4 and I just don’t know or understand how it works? It would be nice to discuss some of these features or come up with some solutions; any help appreciated.

If you haven’t watched the GDC open-world demo video, then make sure to take a look at that, since they deal with performance there.
For something like the GTX 470 I’d turn off some features, like SSR (screen-space reflections), which weren’t in games when that graphics card came out. If you haven’t set up batching / static mesh instancing, then look into that, though whether it helps depends on whether the number of objects is one of your issues. Check your stats during gameplay to see which things are taking the most render time.
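To make that concrete, these are the console commands I'd start with — the cvar and stat names below are from the 4.x-era builds, so double-check them against your engine version:

```
; Turn off screen-space reflections entirely
r.SSR.Quality 0

; On-screen frame timings: Game thread, Draw thread and GPU time
stat unit
stat fps

; Draw call and primitive counts for the current view
stat scenerendering

; One-shot GPU profile of the current frame, dumped to the log
ProfileGPU
```

If `stat unit` shows GPU time dominating while Game/Draw are low, you're GPU-bound and the scalability cvars are the right place to dig.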

It appears that most of the time it’s fill rate causing the issue: GPUs struggle to handle the size of the G-buffer. Dropping resolution with dynamic resolution does of course increase performance dramatically; it’s just that I have to plan for the worst-case scenario, and that's where any mistake in optimisation, or a lack of understanding in places, causes poor performance.
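If fill rate really is the bottleneck, the quickest way to confirm it is to render the scene at a lower internal resolution. In the 4.x builds the relevant cvar is, as far as I know, `r.ScreenPercentage` (a percentage of the output resolution; the UI stays full-res):

```
; Render the 3D scene at 70% of output resolution as a fill-rate test
r.ScreenPercentage 70
```

If GPU frame time drops roughly in proportion to the pixel count, that points at fill rate / G-buffer bandwidth rather than draw calls or geometry.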

I still need to test 4.7.3 properly; I tested it briefly and it ran much better, although that could be something to do with getting a new laptop with a 980M. I need to test it on my 780 desktop and 470 test platform. Thanks for the call on SSR, as I eventually need to get all this running on PS4 too…

Yeah, I’ve been using instanced static meshes to reduce draw calls. It’s more work than Unity, I must admit :D, but not that difficult to do. I will definitely check out the GDC video.

P.S. Thanks for the reply, appreciate it.

Have you checked out the following page on performance debugging? It could help you narrow down what is going on.
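On the scalability tweaks mentioned earlier (cascade counts, shadow quality): UE4 drives those from `BaseScalability.ini`, and you can override the per-level settings in your project's `Config/DefaultScalability.ini`. A rough sketch — the section and cvar names are from the 4.x config files, and the values are just examples, not recommendations:

```
; Config/DefaultScalability.ini — overrides BaseScalability.ini
; This section is applied when sg.ShadowQuality=1 ("Medium")

[ShadowQuality@1]
; Fewer cascades means fewer shadow depth passes
r.Shadow.CSM.MaxCascades=2
; Cap the shadow depth-buffer resolution
r.Shadow.MaxResolution=1024
; Pull in the dynamic shadow distance
r.Shadow.DistanceScale=0.7
```

You can then switch the whole group at runtime with `sg.ShadowQuality 1`, which is roughly what the Unity-style per-quality-level cascade tweaking maps to.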

Thanks Sam, appreciate it. I’ve been through the performance stats / profiling, etc. I think it’s just the way the game is built… it’s too big in every sense of the word. I’ve gone back to the drawing board and scaled the game way back. The blockouts / base levels with materials run at 120 FPS on a 980M, so I can’t complain.

A smaller, level-based setup using lightmapping has sparked our fire again; it looks much prettier and runs so much better.

Again thanks for everyone’s input.