Here are some screenshots of some level I have with the stats.
It looks like I am getting anywhere between 300-500 draws each level. Some levels have up to 700, but those levels are rare.
With those numbers, and the frame gen time you've got there, it's probably overkill to use ISMs - that's actually a very nice render time!
Is there possibly anything else I can do to decrease GPU usage? It is currently at 50%-60% at the moment, but I feel like it can be better. I am using an RTX 3060 12GB GPU as a reference point, btw.
You could look at Screen Space GI rather than Lumen - it may shave some time off. Also, there are a lot of cvars you can tweak - they're dependent on engine version and game type, so it can be painful, but it's worthwhile researching those and testing them one by one.
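As an illustration, switching the GI method and tuning it via cvars can be done in DefaultEngine.ini. This is only a hedged sketch: the exact cvar names and their accepted values vary between engine versions, so confirm each one in your version's console before relying on it.

```ini
[SystemSettings]
; Illustrative values - verify these cvars exist in your engine version.
; Global illumination method (UE5 naming): 0 = none, 1 = Lumen, 2 = Screen Space GI
r.DynamicGlobalIlluminationMethod=2
; SSGI quality level - lower trades quality for GPU time.
; Test one cvar at a time, as suggested above.
r.SSGI.Quality=2
```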
Have you tested in a packaged build, or are these times from PIE? A packaged build will run significantly faster than from the Editor.
I can go ahead and package the game real quick and have some playtesters inform me on the performance. I will send an update on how it runs for them.
I don't think you can actually get an engine like Unreal, which dgaf about performance and graphics, to use less than 50% GPU.
We are lucky it's not just 100% and a BSOD for "existing", like I'm sure the egg heads at Epic decided it should be.
That said:
DX10 vs DX11 is a good starting comparison. You'd expect DX10 to load a little less. Forcing DX11 or DX12 will change things.
If you want to do DX11+, you should just get the engine builds maintained by Nvidia instead of using Epic's stock engine. They generally perform slightly better (and they definitely render better).
On benchmarking:
If you are hoping to benchmark DX12, you need to be aware that it is basically impossible, since FPS isn't reported and Epic made Unreal blatantly lie about actual timings.
Very few programs will work, and generally the ones that do have a cost to run. Maybe set a custom FPS output in a widget using your own mathematical estimate from game time / tick. The caveat is that you have to be sure the tick group is correct and that it isn't CPU bound in some way. Ultimately it will always depend more on the CPU, particularly if you use Blueprint (which you'd have to in order to get a widget running).
Overall, you should always package a final build (without the engine debug stuff) and benchmark that version. Ideally you do this on the worst machine you wish to support.
Once you ensure the worst machine can run above 60fps on the lowest settings, you are kind of done worrying. Then, if players want to play on a 1060 with Epic quality, it's their fault for being stupid, not your fault for failing to optimize.
As a general rule of thumb, I'd also try running the best machine I have on Epic settings to see if it's around 120fps at 4K or not. Probably not, and that's the engine's fault really (see the start of this post as to why).
Generally speaking, developers who care about performance aren't going to: A) use Unreal, B) NOT build their own engine from source, or C) use any of the modern stuff that is known to kill performance...
I only have one computer, sadly. It is slightly above mid-range and it can run my game on cinematic settings perfectly. I am more afraid of how people may see the game taking up all their GPU and try to start some "controversy" with me, making up lies about the game - "The game is a bitcoin miner" or "Classic Unreal Engine unoptimized slop", for example.
Focus on gameplay. If gameplay is good, people aren't really going to care.
Heck, I myself played that s*it called ARK for far longer than anyone on earth rightfully should have - and yes, it may as well have been a bitcoin miner, with the way it bogged a top-of-the-line graphics card down to 20fps like it did.
For the lowest possible system, you could attempt a mobile build. Whatever cheap Android you can get that isn't considered "mid range" would probably do.