How can I determine the hardware requirements for a game built with UE? The documentation only lists the requirements for developing with UE itself, not for packaged projects. What is the general procedure for working out the minimum requirements of a UE game?
If I export the same game from both UE4 and UE5, without any of the new UE5 goodies like Lumen, using only features that are also available in UE4, will the UE5 export have higher hardware requirements than the UE4 one, the same, or lower?
I’m developing a low-fidelity game, and I’d rather stick with UE5 for all the QoL improvements in the Editor, but at the same time I want to be able to reach lower-end hardware, just like this UE4 game and this UE4 game, which even runs on a GT 640. So is it fine to use UE5 and still hit low requirements, given that I will of course tweak the settings towards that goal? Or should I stick to UE4 for this use case?
I already built 2 proofs of concept that use just 8% of my GPU and little to no CPU, but ironically my computer is too high-end for me to test those situations, so I could be getting false positives, and I don’t have a lower-end device to benchmark on.
In general, you will decide what your “minimum spec” is, and you will have at least one piece of hardware that is that minimum spec. You will then regularly test and profile release builds on that piece of hardware, to make sure it meets whatever your bar is. For fancier games, you will have more kinds of hardware, perhaps for each major generation of graphics and CPU hardware. For consoles, there’s also the console tech cert to consider.
It is generally not the case that one develops a game, and then one goes to measure “where does this game run OK?” Almost always, if you do that, it ends up being “my development machine is pretty much the minimum spec.”
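For the “test and profile release builds” part, the in-game stat commands are usually the first stop (open the console with the backtick key by default); a minimal starting set:

```
stat fps
stat unit
stat gpu
```

stat unit breaks the frame into Game / Draw / GPU / RHI thread times, and stat gpu gives per-pass GPU timings. Stats are compiled out of Shipping builds by default, so a Test configuration build is the usual profiling target.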
To develop low-end games on UE5, use the “forward renderer” pipeline, and don’t use Lumen or ray tracing.
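If it helps, the forward renderer is just a project setting (under Project Settings > Rendering), which ends up in Config/DefaultEngine.ini. A minimal sketch of what that writes, with MSAA as the usual anti-aliasing pairing (sample count is up to you):

```ini
[/Script/Engine.RendererSettings]
r.ForwardShading=1
; MSAA pairs well with the forward renderer; 4 samples is a common choice
r.MSAACount=4
```

Flipping it triggers a shader recompile, so expect a wait the first time you turn it on.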
OK, that makes a lot of sense. That’s how I tested in the past and it felt hackish/too homemade (especially when developing mobile games, where I’d run the game on 4 devices manually), in the sense that it did not really test all the requirements.
I’d imagine it’s common to set up a CI pipeline that runs automated builds and tests them against different hardware presets.
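For the packaging step of such a pipeline I’m picturing a RunUAT BuildCookRun call per target, roughly like this (project name and paths are placeholders):

```
Engine\Build\BatchFiles\RunUAT.bat BuildCookRun ^
  -project="C:\Projects\MyGame\MyGame.uproject" ^
  -platform=Win64 -clientconfig=Shipping ^
  -build -cook -stage -pak -archive -archivedirectory="C:\Builds\MyGame"
```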
I’m a bit confused about why some sources say that Deferred Rendering performs better, while others say Forward Rendering is the one that gives better performance.
From the sources, if I understand correctly, Forward Shading is better for lower-end games because it uses less VRAM, but its base pass is slower and takes more GPU resources?
Source 1
forward rendering or forward shading: While quite easy to understand and implement it is also quite heavy on performance as each rendered object has to iterate over each light source for every rendered fragment, which is a lot! Forward rendering also tends to waste a lot of fragment shader runs in scenes with a high depth complexity (multiple objects cover the same screen pixel) as fragment shader outputs are overwritten.
Deferred shading or deferred rendering aims to overcome these issues by drastically changing the way we render objects.
Source 2
Forward Rendering provides a faster baseline, with faster rendering passes, which may lead to better performance
Source 3
Forward: This approach gets rid of the G-Buffer, saving GPU memory and making several things easier (especially anti-aliasing). Don’t be surprised, though, that the cost of the base pass is significantly higher with forward.
If you’re going to use many light sources, then deferred rendering will perform better. (Not counting shadow maps – many shadow maps is always a problem.)
I guess one thing that goes implicitly with the recommendation to “use forward shading” is also “and only use a main scene light, maybe a fill/key light, and occasional effects”, for that simpler game look and lighter rendering.
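To make the light-count point concrete, here’s a back-of-the-envelope cost model. This is purely illustrative C++, not anything from the engine: the resolution, overdraw, and G-Buffer weight are made-up assumptions, and real renderers have many more terms (bandwidth, shadow maps, culling). Forward shades every rasterized fragment once per light; deferred pays a fixed G-Buffer fill and then shades only visible pixels per light.

```cpp
#include <cstdio>

// Back-of-the-envelope cost model for forward vs deferred shading.
// Purely illustrative: every constant here is a made-up assumption.
int main() {
    const double pixels        = 1920.0 * 1080.0; // output resolution
    const double overdraw      = 2.5;             // assumed average depth complexity
    const double gbufferWeight = 3.0;             // assumed relative cost of filling the
                                                  // G-Buffer (multiple render targets per fragment)

    std::printf("%8s %16s %16s\n", "lights", "forward", "deferred");
    for (int lights = 1; lights <= 16; lights *= 2) {
        // Forward: every rasterized fragment is shaded once per light.
        double forwardCost = pixels * overdraw * lights;

        // Deferred: pay the G-Buffer fill once, then shade only visible pixels per light.
        double deferredCost = pixels * overdraw * gbufferWeight + pixels * lights;

        std::printf("%8d %16.0f %16.0f\n", lights, forwardCost, deferredCost);
    }
    return 0;
}
```

With these made-up weights the crossover lands around four or five lights, which is why both of your sources can be right: a key light plus a fill light keeps forward comfortably ahead, while scenes with many dynamic lights are where deferred earns its G-Buffer cost.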