Apologies for the necro, but I hope this helps, as I was also stumbling around looking for an answer to the same question…
I can’t find any documentation or Epic response on this, but looking at the source code, this is what I can deduce:
The code runs a CPU and GPU benchmark at a level of intensity determined by the ‘WorkScale’. Increasing it makes the benchmark more demanding on the system and more accurate; a value of 1 is a ‘trivial’ benchmark with very inaccurate results.
The numerical result of this is then simply multiplied by the CPU and GPU multipliers. Increasing them artificially inflates the quality score coming out of the benchmark, raising the apparent ‘system power’.
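As a rough illustration of that relationship, here is a simplified standalone sketch of my reading of the source; the names are made up for the example, not taken from the engine:

```cpp
#include <iostream>

// Simplified sketch: the synthetic benchmark produces a raw performance
// index per component, which is then scaled by the user-supplied multiplier
// before the engine maps it to quality levels. (Illustrative names only.)
float ScalePerfIndex(float RawBenchmarkIndex, float Multiplier)
{
    return RawBenchmarkIndex * Multiplier;
}

int main()
{
    const float RawCPUIndex = 100.0f; // hypothetical raw benchmark result
    const float RawGPUIndex = 100.0f;

    // A multiplier below 1.0 makes the machine look weaker (conservative),
    // above 1.0 makes it look stronger (optimistic).
    std::cout << "Conservative CPU index: " << ScalePerfIndex(RawCPUIndex, 0.5f) << "\n";
    std::cout << "Optimistic GPU index:   " << ScalePerfIndex(RawGPUIndex, 1.5f) << "\n";
    return 0;
}
```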
TLDR; if you have a demanding game and want the benchmark to be conservative, lower the multipliers. If you have a lightweight game, increase the multipliers.
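For context, here is roughly how that call might look from C++ via UGameUserSettings. The WorkScale/multiplier parameters on RunHardwareBenchmark match what I see in the source, but treat the exact signature and defaults as an assumption and check GameUserSettings.h in your engine version:

```cpp
// Rough usage sketch (Unreal C++), biasing auto-detect towards conservative
// settings. Verify RunHardwareBenchmark's signature in your engine version.
#include "Engine/Engine.h"
#include "GameFramework/GameUserSettings.h"

void RunConservativeAutoDetect()
{
    if (UGameUserSettings* Settings = GEngine ? GEngine->GetGameUserSettings() : nullptr)
    {
        // Higher WorkScale = longer but more accurate benchmark.
        // Multipliers < 1.0 bias the detected quality downwards (conservative);
        // > 1.0 biases it upwards (for lightweight games).
        Settings->RunHardwareBenchmark(/*WorkScale=*/10, /*CPUMultiplier=*/0.5f, /*GPUMultiplier=*/0.5f);
        Settings->ApplyHardwareBenchmarkResults();
    }
}
```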
I will test this myself and update this post with my findings.