How to Improve Frame Rate Through Video Settings

My friend’s desktop running a 5770 can run TPS at 60 fps at 1080p. You’re definitely right.

I find that my game is GPU-limited (I’ve got a GT 630M) and I get the best fps when I play as Standalone from the editor.

So I’m curious, as a PlayStation “fanboy”: what would be comparable to PS3 quality, or PS4 quality?

Those consoles got 60 fps games running at HD resolution, right?

Have you tried to make sure the AMD Radeon is activated with UE4?
I never used AMD, but on NVIDIA drivers there is an option to use the “High-performance NVIDIA processor” in the global settings instead of the integrated Intel HD one.


I found out that you can look in Saved\Logs and search for FSynthBenchmark; the log will tell you the name of the GPU being used.
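If you don’t feel like scrolling through the log by hand, a short script can pull those lines out for you. A minimal sketch, assuming it is run from the project folder that contains Saved\Logs (adjust the path otherwise):

```python
import glob

# Scan every UE4 log under Saved/Logs and print the SynthBenchmark lines,
# which include the name of the GPU the engine actually ran on.
for path in glob.glob("Saved/Logs/*.log"):
    with open(path, errors="ignore") as log:
        for line in log:
            if "SynthBenchmark" in line:
                print(f"{path}: {line.strip()}")
```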

PS3 games were largely 30fps at 720p upscaled, though some were native.

PS4 games are mostly 1080p, though some are less. While many games run at 60fps, 30 is still the norm.

Any GPU from the last five years outperforms the PS3. Even the PS4 is quite modest in power, so most mid-to-high-range GPUs would best it.

Thanks for that info Jared Therriault!

I have seen that Ninja Gaiden on PS3 ran at 60 fps @ 720p, and DmC is 60 fps @ 1080p on PS4. But maybe those games use some kind of dark-magic API for the GPU, with optimizations not possible on PC?

PS3? Pretty much any integrated graphics chip paired with a powerful Intel i7 CPU. Weird setup, but that’s the closest you can get to Cell’s ridiculously overpowered, ridiculously parallel CPU architecture. Most integrated graphics chips will exceed the 4.4 Gp/s fillrate for pixel shading, and you will not find a computer nowadays that offers less than 256 MB of RAM. I think some of the higher-end integrated graphics chips are better than Xbox One quality. 25.6 Gp/s fillrate is what you need to surpass the PS4’s pixel shading performance. A GTX 560 Ti will easily outdo it, and most likely the GTX 950 as well. 8 GB of RAM is not hard to find. The PS4’s CPU, however, is a lot different from the Cell: a very slow 1.6 GHz i7 (I don’t think that exists) with boost capabilities comes close to replicating it.
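To put those fillrate numbers in perspective, here is a rough back-of-the-envelope calculation. The overdraw factor is purely an assumption for illustration; real scenes shade many pixels more than once and with far more expensive shaders:

```python
# Pixels per second needed for a given resolution and frame rate,
# multiplied by a guessed overdraw factor (each pixel touched ~4 times).
def fill_required(width, height, fps, overdraw=4):
    return width * height * fps * overdraw

ps3_budget = 4.4e9    # ~4.4 Gp/s pixel fillrate quoted above
ps4_budget = 25.6e9   # ~25.6 Gp/s pixel fillrate quoted above

print(fill_required(1280, 720, 30) / ps3_budget)   # roughly 0.025 of the PS3 budget
print(fill_required(1920, 1080, 60) / ps4_budget)  # roughly 0.019 of the PS4 budget
```

In other words, raw fillrate is rarely the limiting factor on its own; shader complexity and lighting are, which is where the next post comes in.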

Be careful when you say “this game runs 1080p 60 FPS on the PS3, so it should run like that in UE4,” because UE4 uses an entirely different lighting system that’s a lot more difficult to process. It’s based on reflections instead of a basic direction to a colored light source. If you don’t have the power to provide reflections for the map, you won’t be able to light it as quickly. So, just as a comparison: Super Smash Bros. 4 runs on the Wii U at 1080p 60 FPS but with outdated Phong shading, while Mario Kart 8 runs at 720p 60 FPS with UE4-like physically based rendering and absolutely no screen space reflections (I think MK8’s kart/character reflections are pulled from a scene capture that updates once every 65 frames, while the environment uses a generic one-size-fits-all ambient reflection). Both of those games run without anti-aliasing or any of UE4’s advanced post-processing effects.
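To make “entirely different lighting system” a bit more concrete, here is a minimal sketch comparing a classic Phong specular lobe with the GGX normal-distribution term that physically based pipelines typically use. These are the standard textbook formulas, not UE4’s actual shader code, and the roughness/shininess values are just examples:

```python
import math

def phong_specular(n_dot_h, shininess=32.0):
    """Classic Phong/Blinn-style highlight: a single cosine power."""
    return max(n_dot_h, 0.0) ** shininess

def ggx_ndf(n_dot_h, roughness=0.3):
    """GGX (Trowbridge-Reitz) normal distribution used in PBR speculars."""
    a2 = (roughness * roughness) ** 2
    d = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * d * d)

# Compare how the two lobes fall off as the half-vector drifts off the normal.
for angle_deg in (0, 5, 15, 30):
    n_dot_h = math.cos(math.radians(angle_deg))
    print(angle_deg, round(phong_specular(n_dot_h), 4), round(ggx_ndf(n_dot_h), 4))
```

On top of that distribution term, a PBR shader still needs Fresnel and geometry terms plus environment reflections to look right, which is a big part of why it costs more than the single cosine power above.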

UE3’s Phong shading:


UE4’s physically based rendering:

https://docs.unrealengine.com/latest/images/Engine/Rendering/Materials/PhysicallyBased/roughness_metal.png

I was confused and felt like my old PS3 outperformed my test/demo running on my laptop (i7 & GT 630M), which cost twice as much. But now I understand that UE4 uses completely different shading & post effects.

Thanks for the clarification!

Some games definitely were able to get great performance that other games did not, but it depends on a lot of factors - post-processing, lighting accuracy, number of objects, number of lights, number of vertices, vertex deformations, shader instructions, physics, and game code. Ninja Gaiden Sigma was an original Xbox game before it went to PS3, so aside from the extra specular effects, more voluminous particle effects and the increase in resolution, it wasn’t very hard on the PS3. DmC was 30fps last gen, so 60 this gen is to be expected :stuck_out_tongue:

As mariomguy said, previous generations tended toward Phong shading that could “fake” a physical look with the right settings, but most modern engines use PBR now. It looks much more physically accurate without the need for the artist to fake anything, though that comes at a rendering cost. In turn it decreases development costs, since artists no longer have to hand-tune every material, so there’s a greater advantage to using PBR in some areas.

Yeah, if you want to make a game run fast in UE4, you’ll have to make some adjustments: instead of using dynamic or stationary lights, use static lighting for everything. You can have some generic lighting for everything in the game instead of specific specular reflections. If you set the max roughness for screen space reflections all the way down to 0, then nothing will have screen space reflections, and everything will either update from the reflection environment or a generic ambient cubemap. Don’t use too many heavy post-process effects: those cost more. Don’t use too many objects or polygons. And run your game at a sub-HD resolution using the screen percentage in the Misc post-process settings. If you do all this, I guarantee you UE4 will run fast, fast enough to look great on mobile. But you’ll also be missing a lot of the nifty features that make UE4 so prized as an artist’s engine.
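If you want to experiment with a couple of those toggles at runtime before changing the project, most of them are exposed as console variables you can type into the in-game console. A small sketch, assuming a UE4 build from roughly this era; verify the exact names with the console’s auto-complete, since they can change between versions:

```
stat fps
stat unit
r.ScreenPercentage 70
r.SSR.MaxRoughness 0
r.SSR.Quality 0
```

stat fps and stat unit show the frame rate and the game/draw/GPU thread times, so you can tell which change actually helps; r.ScreenPercentage renders at a fraction of the output resolution and upscales; and the two r.SSR variables clamp the max roughness and then switch screen space reflections off.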

The PS4 is able to run games well because it uses GNM and GNMX, with PSSL as the shader language. Basically, it is much lower level than DirectX 11 and even DX12. Additionally, developers only need to target a single set of hardware, so they know that if a game runs at a set level it’s pretty much guaranteed to run the same on every other machine. Unreal Engine 4 uses modern rendering techniques that are more akin to realistic physically based rendering pipelines.

You can’t really find comparable hardware for the PS3, as it has a massively powerful custom IBM CPU that does a lot of the processing for the system, even taking over work from the GPU. Think of it as what GPGPU does for the CPU, except in reverse. Compression algorithms and clever coding allow developers to stretch a lot of potential out of the PS3’s hardware, which is how it can run very good-looking games with only 256 MB of RAM allocated for games and the rest for the OS. The PS4 relies on GPGPU for CPU-style processing, and its GPU is roughly equivalent to an R7 260X; however, with its heterogeneous architecture it is able to dedicate about 5 GB of its unified memory to game assets, with the rest going to the OS. Additionally, the PS4 shares the same sort of compute tech that the R9 290X has (asynchronous compute units), which helps with GPU rendering and GPGPU.

There are some i7 CPUs with more threads and a comparable clock rate, as well as significantly more flexibility between cores and shared cache memory. Remember, each SPE only has access to 256 KiB (about 262 KB) of local store. The Wii U by comparison has 32 MB of eDRAM shared between CPU and GPU: that’s 128 times more memory (32 MiB / 256 KiB) available to the vast majority of CPU processes, and it’s shared. So the PS3 can process very quickly, but not for a lot of data (or very accurate data), while most other consoles and computers have more flexible shared caches between cores. Only the PPE on the Cell is capable of accessing system memory (RAM) directly; the seven other SPEs can only access and process their 256 KiB each, and from my understanding it’s not shared.

The PS3’s Cell was incredibly powerful for the time (I mean, no PC was capable of simulations like Flower, Journey, or Little Big Planet), but nowadays a lot more physics and particle processes are moving to the GPU, which is much faster and much more capable of handling simulations like this, assuming a boost in cache is possible. CPUs in consumer machines have also exceeded the Cell in terms of raw processing power for at least the past three years. Additionally, the limitations in memory and graphics power are the PS3’s most debilitating factors. So, if you want comparable graphics, an integrated card will work; if you want comparable simulations, a newer, lower-end dedicated graphics card will work; and if you want comparable CPU power with something much more flexible, a mid-grade i7 will completely supersede any benefit the PS3 could’ve ever given you, and with more cache to spare. And if you just don’t care about price and want to put the Cell to shame, try a Xeon E5-2699 v3 processor with 18 cores, 36 threads, and 45 MB of cache. Sorry, but Cell is no longer the supercomputer that Sony touted it as 9 years ago. Technology marches on!

I’m only saying that what surprised me is that a game running on a PS3 managed to, at least in my eyes, outshine my untweaked “out of the box” test running on what I think is a somewhat acceptable laptop.

But Unreal 4 scales from phones all the way up to movies; you just need to know what settings will give the right look for a certain hardware configuration.

So my question earlier was perhaps a bit ambiguous, but I wondered what settings would make it look like a PS3 game or a PS4 game. But since UE4 uses a different lighting model than the games on PS3, this might not be easily answered.

  • Don’t use dynamic or stationary lights: switch them all to static. This disables all GGX specular rendering, and it will significantly improve performance. You can use a static skylight and the lightmass environment color to provide a generic GI fill in the shadows.
  • Under Screen Space Reflections in post process, set the max roughness and quality to 0. This should disable screen space reflections entirely.
  • Plug in an ambient cubemap to provide generic lighting. Reflections and lighting won’t be appropriate, but there are many games that do this and still get praised for their graphics.
  • Set Lens Flare intensity and size to 0. Set Ambient Occlusion intensity and radius to 0. Set the anti-aliasing method to FXAA. This should eliminate some of the heavier post processing effects.
  • Set Bloom intensity to 0. Set Auto-Exposure min brightness and max brightness to be the same value (1 is good for default). These effects are not very heavy, but you can disable them if you want.
  • Use a lower game resolution. 1280x720 doesn’t look bad. For lowering the resolution while working in-engine, use the resolution scale in the engine scalability settings, located to the left of the blueprints icon. To lower the resolution in the middle of a particularly demanding portion of the game, use the screen percentage post process setting. (A consolidated config sketch covering this list follows below.)
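If you’d rather apply most of that list in one place than click through post-process volumes, the same switches exist as console variables that can go under [SystemSettings] in the project’s Config\DefaultEngine.ini. A rough sketch; the variable names are from UE4 of around this era, so double-check them against your engine version before relying on them:

```
[SystemSettings]
; Disable screen space reflections, lens flares, SSAO and bloom
r.SSR.Quality=0
r.LensFlareQuality=0
r.AmbientOcclusionLevels=0
r.BloomQuality=0
; Cheap FXAA instead of temporal AA
r.PostProcessAAQuality=2
; Turn off auto-exposure (eye adaptation)
r.EyeAdaptationQuality=0
; Render at 70% of the output resolution and upscale
r.ScreenPercentage=70
```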

If you start a project with these settings, you will have disabled most of the default effects in UE4. If this doesn’t run well above 60 FPS on your computer, then you seriously need to get yourself a real graphics card! From here, you can slowly bring in some of the more advanced effects and start to use better lighting/reflection rendering while increasing the resolution until you find a balance that works well for your intended hardware.

Can you guys tell me if the specs of my graphics card are really crappy or not?
I get pretty good speed in editor play too, but standalone is verrrry sluggish.

Name Intel(R) HD Graphics 3000
PNP Device ID PCI\VEN_8086&DEV_0116&SUBSYS_15821043&REV_09\3&11583659&0&10
Adapter Type Intel(R) HD Graphics Family, Intel Corporation compatible
Adapter Description Intel(R) HD Graphics 3000
Adapter RAM (2,084,569,088) bytes
Installed Drivers igdumd64.dll,igd10umd64.dll,igd10umd64.dll,igdumd32,igd10umd32,igd10umd32
Driver Version 9.17.10.3347
INF File oem3.inf (iSNBM0 section)
Color Planes Not Available
Color Table Entries 4294967296
Resolution 1366 x 768 x 60 hertz
Bits/Pixel 32

Any integrated graphics is going to be closer to mobile hardware than higher-end PC. The chips are very small, they run with almost no energy straight off the motherboard, and are always passively cooled. All serious disadvantages if serious performance is what you’re after.

Intel’s HD Graphics 3000 is not even good enough to handle 8-year-old games at HD resolutions. A GTX 750, even passively cooled, will run circles around it, and only requires 55 watts to do so. Heck, it’ll even render the engine better than my GT 640, which can run medium settings at 720p at 60 FPS! NVIDIA seems averse to releasing lower-powered graphics cards now that they can get amazing performance out of extremely efficient cards like the GTX 750 (and in the mid-to-high end, the 960), so there’s very little reason not to spend the few extra dollars and get something that is many times more powerful and runs on practically any power supply you give it.

Oh, and 2 GB of RAM has been considered a laughable amount for the last 6-8 years. You need at least 8 GB of RAM, and an i7 is the recommended minimum to run Lightmass.

Hi, I have read your article and downloaded an fps booster, which you can see here: http://www.videoconverterfactory.com/tips/improve-video-quality.html. But here is the question: my original frame rate is 29, and I increased it to 60. I did not find any difference between the two videos. I wonder how it can be true for those frame-rate boosters, since the number of pictures flashing by per second never changes. How can it insert an extra 31 pictures into my 29 fps video and make it 60?

Probably the wrong thread (you’re talking about videos?), but if it works anything like SVP, it’s frame interpolation done using some fancy algorithm. The difference between 30 fps and 60 fps should be pretty easy to see, assuming your method works. Here’s an example: Frame interpolation overview - SmoothVideo Project (SVP) - frame doubling interpolation
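To make “frame interpolation” concrete: the crudest possible version just blends each pair of neighbouring frames to synthesize a new in-between frame. Tools like SVP instead do motion-compensated interpolation (estimating motion vectors and warping pixels along them), which is why their results look much smoother. A toy sketch, assuming the frames are already loaded as numpy arrays:

```python
import numpy as np

def naive_double_framerate(frames):
    """Insert a 50/50 blend between every pair of consecutive frames."""
    doubled = []
    for a, b in zip(frames, frames[1:]):
        doubled.append(a)
        # Synthesize the in-between frame as the average of its neighbours.
        blend = ((a.astype(np.uint16) + b.astype(np.uint16)) // 2).astype(np.uint8)
        doubled.append(blend)
    doubled.append(frames[-1])
    return doubled

# Two dummy 2x2 grayscale "frames": all black, then all white.
frames = [np.zeros((2, 2), dtype=np.uint8), np.full((2, 2), 255, dtype=np.uint8)]
print(len(naive_double_framerate(frames)))  # 3 frames: original, blend, original
```

If the booster you downloaded only changes the frame-rate flag without actually synthesizing new in-between frames, the two videos will indeed look identical, which would explain what you’re seeing.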

Awesome thanks for this