For an AAA-style result I would create a custom benchmark and step through different settings until the FPS drops below a desired threshold. Basically, you take a two-minute run of a cinematic sequence that you have benchmarked before, or whose basic triangle count/scene information you already know. With that sequence, you adjust settings one at a time until you reach the lowest acceptable average FPS.
I’m not sure whether the system you are referring to is some built-in tool that does something similar with the preset Epic settings, but either way the end goal would probably still be to read the stat fps value into a variable for comparison.
As for what to toggle when benchmarking, the most drastic difference usually comes from anything that puts fewer triangles and fewer transparent effects on screen. Particles, for instance, can add a lot of overhead, so if you are targeting lower-end systems you would create a way to globally disable some of them. These obviously aren’t “presets” but things you would create yourself and build into the custom benchmark.
It could be useful for running the same game on both mobile and PC, but that’s about it? Usually you just set a minimum hardware requirement for PC and release the game “fully loaded”, without options to disable much of anything, as has been the norm since around 1990. Not that this is good practice…
The Work Scale variable is of particular importance. Could someone please tell me how to run a benchmark check for a specified framerate (i.e. checking that FPS holds consistently at 90)? I have a crude workaround set up now, but the benchmark seems like the most precise method. It would be even more helpful to have simulated machines of different specs, so that the FPS check could be verified for approval processes such as Oculus’s.
I am also looking for a definition or explanation of the variables on the Benchmark node (Work Scale and the multipliers).
What does the work scale influence? I have absolutely no idea, as there is no information given…
I assume that if I leave the multipliers at their default value of 1.00, it will determine the settings based on the full available capacity? So would increasing the value to 2.00 create settings that only use half of the user’s CPU and GPU?
A(n official) clarification would have been very nice…