Above 60FPS, should it be kept?

Alright hello,

So lately I have been working on a new title, a racing game, and leaving my old game archived until I'm ready to get back to it. I decided to revisit that older game, a platformer called "Ever Lasting Joy". The game plays at 60 FPS but is in an unoptimized state: it works and plays from beginning to end, but not a lot of attention was given to keeping a proper framerate.

Both my racing game and Ever Lasting Joy are 60FPS games.

We all know framerate is important in games, so I was doing some optimization on the game's most intense and demanding level, where the framerate drops from 60 FPS to around 45 FPS. The big drop is basically due to the grass, and I decided to do some optimization to see how many frames I could liberate for the game.

Originally I was aiming to reduce the 45 FPS drop and hold 60 FPS as much as possible, but the optimization led to much better results. I decided to unlock the frame rate, and the game ended up going much higher, from 60 FPS to over 90 FPS. (Please keep in mind this is decent optimization, not the tightest; I still feel I can do much better.) It does drop to around 80 and 70, sometimes even 60, and goes back up again. The problem is still the grass, but overall the game sticks to 70-80 FPS, which was a surprise to me because I didn't think I would pull off that much.

Foliage is always a problem. My games are built to run extremely efficiently, and 90% of the time, if there is a frame drop, it's because of the grass on that level.

So the question is: should it be kept, or just locked at 60 FPS? The game is an open-world platformer, and since it's a platformer it's quite tricky to nail the gameplay, because it depends on how fast the character moves, and the character does move fast. This will also affect my AI, which are also fast and even tricky to aim at because they move so quickly.

Will a higher framerate display just fine on any screen, or do players need screens that can handle it? My monitor is 60 Hz, so I don't think I'm seeing much of it in action; sure, it's fast, but I don't think I'm seeing much.

Gameplay should always be framerate independent, since you have no way of knowing how many frames per second the game will run at on every host machine. You can use delta seconds in the Event Tick to scale things by time; delta seconds is the time it took to render the previous frame.

A higher framerate than the monitor's can result in screen tearing, where the screen displays an image consisting of portions of two or more frames. The Vsync option on the GPU can lock the framerate to the screen, while other solutions like Nvidia G-Sync and AMD FreeSync let the GPU work as fast as it can and adjust the monitor's refresh rate to match the GPU.

Locking the framerate as the developer of a game will no doubt be annoying to PC enthusiasts since they always try to squeeze as many frames as possible out of games with the hardware they bought.

I was doing some more optimization on certain things, and the 90 FPS is now going toward 96-102 FPS in certain areas that are not as populated (the previous figure was about 90 FPS). So far I'm getting some good ideas on how to continue optimizing the game for more fluid gameplay.

Vertical Sync in the game is actually enabled by default, and it is insanely lightweight and does not cause any issues with the gameplay. The game handles various graphical features nicely with little hit to performance (such as on the CPU), so players do not have to worry about Vsync: it's on by default and won't cost much to utilize.

Vsync is lightweight in most cases, and it can even improve performance with certain settings, since it promises to present frames at a fixed interval. Portable devices like laptops that suffer from overheating issues will also benefit from Vsync in many cases, since the CPU and GPU will never waste performance on drawing frames the monitor won't benefit from.

That said, some will avoid Vsync in order to get the highest framerate possible, mainly in an attempt to lower input lag, especially in fast-paced FPS games.

Relying on Vsync while programming is generally not a good idea, since the player might notice the gameplay "slowing down" when the framerate gets really low, which in turn may affect immersion when time starts stretching. If you are using physics objects, they may also interact differently when they interact with logic that depends on the tick rate rather than on time.

Just something to keep in mind.

I honestly did not know that; that just made this even more curious. I thought having Vsync on was just a good thing, especially when it costs players little to nothing with this game.

I could try playing around with a Vsync-less mode to see what that does; I'm actually quite curious about this.

"That said, some will avoid Vsync in order to get the highest framerate possible, mainly in an attempt to lower input lag, especially in fast-paced FPS games."
Yeah I forgot about that.

There’s no benefit to rendering more frames than your display can support. Most monitors run at 60 Hz, so it’s fine to limit it to that, but it’s good to add a graphics option for a higher target framerate for the people whose monitors support it.

Well, that makes it less of an issue. I have been stress testing the games, mostly the platformer at the moment, and even with the highest settings it in most cases stays at high framerates thanks to some of the optimization I'm doing. The only bottleneck is the obvious one, the grass, mostly because I tried to squeeze the foliage onto the CPU, but I don't think that will do well since the CPU is very weak. The GPU, however, can do the trick, but I have yet to find a GPU method to use for the grass.

I actually want to bring the game to console instead, so 60 FPS seems right. However, if it stays on PC, it will go through the Windows Store, not Steam or Origin; it must be through the Windows Store and the Windows Store only. The racing game was already a console game, so that's that.

I ran across an article, which I wish I had bookmarked, that showed an interesting performance curve of frame time in ms against FPS: as the FPS went up, the time saved per frame kept shrinking until the curve was more or less flat. You could say it was like revving the engine but not going anywhere. From 40 to 60 FPS there was a worthwhile improvement, but to gain that much again you would have to hit 120.

The thing is, over 60 is a feel-good number where the returns are small and the excess is of little value; it gets converted into heat, which can degrade performance over long play sessions. See, if you uncap and ask the card to do 120, it will attempt to do so, because you told it to. Whatever the card can't render is, as I said, converted into heat, and card manufacturers are smarter than their customers, so they build things like auto-throttling into their products, which automatically decreases the output of the card, something most will see as latency, dropped frames, or visible stutter.
60 is more than enough, and any more is just showing off… unless you are one of those who can tell the difference between new and old Coke. :wink:

Well, it was just a thought; I just thought the more frames the better.

It's nice that the game can go up to 100 FPS with some variance, but I sure as heck can't see the motion in action, so I will just lock my games to 60 FPS; the racing game is already locked at 60 FPS.

Relying on Vsync can be a very bad idea. Many PC gamers will avoid it like the plague, as the input lag is terrible. Of course it depends on the game, but input lag is never a good thing, so even less competitive, slow-paced games can feel terrible with it. I'm not usually that picky with games, but I refuse to play any third/first person game with Vsync on, and many are the same.

Also, having a higher frame rate than your display's refresh rate isn't a bad thing; it's not going to cause more screen tearing. What causes tearing is the frames not syncing up (which doesn't happen if you use Vsync, FreeSync, or G-Sync). Also, if your GPU can push out more frames than your display needs, it ensures the frame being shown is more recent than it would be at a lower frame rate. This reduces input lag, so it's definitely not a waste to be able to push more than your display can show.
Other than heat reduction or battery usage on weak devices, there is no reason not to let it render more frames than the monitor can show. What can be beneficial is to let players set a max FPS limit equal to the lowest FPS it drops to during gameplay. This ensures the FPS is steady and the input lag constant, which is helpful for competitive games.

More frames = less input lag, so yes, it is better. However, since you apparently built your game around the 60 FPS mark, it is probably best to just stick to that, like most console games do.

This confuses me, as it reads as if turning on Vsync generates screen tearing. When using any of the sync methods, you avoid screen tearing. When not using any sync, the frame buffer is read even if it is not complete, and the monitor then shows a bit of two or more frames if the GPU's FPS is higher than the monitor's refresh rate. G-Sync and FreeSync avoid this problem by not having the monitor request a new frame; instead, the GPU asks the monitor to refresh when the GPU's frame buffer is done.

I see your point. However, despite the game being able to show a large number of frames, it still has stutter problems. For example, when I revolve the camera, the FPS counter does in fact read 60 FPS, but the game will stutter and give this weird pulling effect for a split second. This, I'm sure, is due to the CPU.

Here are the specs of the hardware I use for game production:

GPU: NVIDIA GeForce GTX 1050 (2GB of GDDR5 RAM) “Not overclocked I don’t like to overclock my GPU I don’t want it to blow up.”
CPU: Intel Pentium G2030 3GHz per core

Now, there is a reason why I keep that CPU, and that is optimization: when it comes to hardware for optimization, I prefer to use the weakest possible that is still capable of certain things, like playing HD video. The game originally started development with an AMD Radeon R5 230, then I switched to an AMD Radeon R7 240, and the last AMD card was a Radeon R7 250.

Software optimization is something I can do just fine. In fact, the 3D assets are all made using only the CPU, with my own 3D software that I'm building on top of open source software. I prefer my own over something like Autodesk because I have complete control over it and can make changes whenever needed. Overall it's done with only the CPU, with little hit to performance even in 4K.

My games do a great job of dedicating work to the GPU, and the only place I would expect the CPU to come into play is the grass; everything else is well handled by the GPU. The GPU has plenty of horsepower that I have not even fully utilized, and it's all done with DirectX 11, which is crucial for me because I don't want to deal with DirectX 12 or Vulkan.

I am thinking about using Nvidia Turf Effects to replace the textured grass foliage objects instead.

I rewrote that part and didn’t read it through before posting. I’ve edited it now. I meant it the other way around.