Run / Set Hardware Benchmark Best Practices And Other Questions

So I am finding it hard to find documentation specifically on this blueprint, but how do the elements in Run Hardware Benchmark work (Work Scale, and the GPU and CPU multipliers)?

What are best practices when using this? Is it best to run this the first time the game runs or should I just keep it in the options menu?

Does this create any information that can be logged about the system the game is being run on? How can I access this information in game?

Also what is the best way of setting these things temporarily and asking the user if they want to keep these settings?

I’d also be interested in this. Specifically, targeting the benchmark to a specific framerate and then being able to set the results into a scalability setting.

For an AAA-style result I would create a custom benchmark and step through different settings until the FPS is below a desired threshold.

Basically, you take a 2-minute run of a cinematic sequence that you have benchmarked before, or whose basic triangle count/information you already know.
With that sequence you adjust settings one at a time until you reach the lowest acceptable average FPS.
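The stepping logic described above can be sketched in plain C++. This is an illustrative sketch, not an engine API: `MeasureAverageFps` stands in for running the cinematic sequence at a given quality preset and averaging the framerate, and the preset numbering (0 = Low … 3 = Epic) is an assumption.

```cpp
#include <cassert>
#include <functional>

// Hypothetical sketch: walk quality presets down from 3 (Epic) toward
// 0 (Low) until the measured average FPS of the benchmark sequence
// reaches the target. MeasureAverageFps is a stand-in for actually
// playing the 2-minute sequence and averaging "stat fps".
int FindHighestAcceptableQuality(const std::function<double(int)>& MeasureAverageFps,
                                 double MinAcceptableFps)
{
    for (int Quality = 3; Quality > 0; --Quality)
    {
        if (MeasureAverageFps(Quality) >= MinAcceptableFps)
        {
            return Quality;  // highest preset that still hits the target
        }
    }
    return 0;  // nothing passed, fall back to the lowest preset
}
```

In practice you would replace the single overall preset with the individual settings you are tweaking (shadows, effects, and so on), stepping one at a time as described above.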

I’m not sure if the system you are referring to is some sort of built-in tool that does something similar to this with the preset Epic settings; either way, the end goal would probably still be to read the stat fps value into a variable for comparison.
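Reading the framerate into a variable for comparison boils down to averaging recorded frame delta times. A minimal, engine-independent sketch (in-engine you would feed this from the per-frame delta time rather than a prerecorded list):

```cpp
#include <cassert>
#include <numeric>
#include <vector>

// Minimal sketch: convert a list of recorded frame delta times (in
// seconds) into an average FPS figure you can compare against a target.
double AverageFps(const std::vector<double>& FrameDeltaSeconds)
{
    if (FrameDeltaSeconds.empty()) { return 0.0; }
    const double TotalSeconds = std::accumulate(FrameDeltaSeconds.begin(),
                                                FrameDeltaSeconds.end(), 0.0);
    return TotalSeconds > 0.0 ? FrameDeltaSeconds.size() / TotalSeconds : 0.0;
}
```

Averaging over the whole run smooths out single-frame hitches, which is usually what you want for a settings decision.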

As far as things to tweak on/off when benchmarking, the most drastic difference/benefit usually comes from anything that puts fewer triangles and fewer transparent effects on screen.
Particles can add a lot of overhead, for instance, so if you are targeting lower-end systems you would create a way to globally disable some particles.
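One way to "globally disable some particles" is to tag each cosmetic effect with the minimum quality preset it needs and skip spawning anything below the current preset. This is an illustrative sketch with assumed names and preset numbering (0 = Low … 3 = Epic), not an engine feature:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical gate for cosmetic effects: each effect declares the
// lowest quality preset it is allowed to spawn at.
struct CosmeticEffect
{
    std::string Name;
    int MinQualityLevel;
};

// Return the names of effects that survive the current preset.
std::vector<std::string> EffectsToSpawn(const std::vector<CosmeticEffect>& Effects,
                                        int CurrentQualityLevel)
{
    std::vector<std::string> Result;
    for (const CosmeticEffect& Effect : Effects)
    {
        if (CurrentQualityLevel >= Effect.MinQualityLevel)
        {
            Result.push_back(Effect.Name);  // cheap enough for this preset
        }
    }
    return Result;
}
```

The same gate works for any purely cosmetic system, not just particles.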

These obviously aren’t “presets” but things you would custom create and place into the custom benchmark.
Could be useful to run the same game between mobile and PC, but that’s about it? Usually you just set a minimum hardware prerequisite for PC and release the game “fully loaded”, without options to disable much of anything, as has been common since around 1990. Not that this is good practice…

The Work Scale variable is of particular importance. Could someone please tell me how to do a benchmark check for a specified framerate (i.e. checking to make sure that FPS is consistently 90 FPS)? I have a crude workaround set up now, although the benchmark seems like the most precise method. It would be even more helpful to have simulated computers of different specs, so that the FPS check can be verified to work for approval processes for Oculus, etc.
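A "consistently 90 FPS" check can be expressed as: sample the framerate over a test window and require that a given fraction of samples stay at or above the target, so one hitch does not fail the whole run. A hedged sketch (the pass-fraction idea and all names here are assumptions, not an engine API):

```cpp
#include <cassert>
#include <vector>

// Sketch: pass if at least RequiredPassFraction of the FPS samples
// taken during the test window meet or exceed the target framerate.
bool IsFpsConsistent(const std::vector<double>& FpsSamples,
                     double TargetFps,
                     double RequiredPassFraction)
{
    if (FpsSamples.empty()) { return false; }
    int Passing = 0;
    for (double Fps : FpsSamples)
    {
        if (Fps >= TargetFps) { ++Passing; }
    }
    return static_cast<double>(Passing) / FpsSamples.size() >= RequiredPassFraction;
}
```

For a VR approval process you would likely set the pass fraction high (e.g. 0.95 or above), since sustained framerate matters more there than the average.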

If this question is still valid I might be able to help you out.

I’m certainly looking for this type of thing. I was hoping I could run the node on an Oculus Quest, but the app crashes when I do with the stock numbers, so I’m not sure what to put in here.

I am also looking for a definition or explanation of the variables in the Benchmark node (Work Scale and the multipliers)

  • What does the work scale influence? I have absolutely no idea as there is no information given…
  • I assume that if I leave the multipliers at their default value of 1.00, it will determine the settings based on the full available capacity? So should increasing the value to 2.00 create settings that only use half of the user’s CPU and GPU?

A(n official) clarification would have been very nice…