I’m currently working on a demo that consists almost entirely of static geometry, with no moving parts (aside from the occasional door).
I’m looking for a way to dynamically degrade settings depending on how complex the scene is, and on whether the camera/user is moving (rotating/translating), so that “any” computer can run the demo without judder or FPS problems.
My idea was to compare values every frame to get deltas for the frame rate and for the headset’s rotation/translation, then trigger progressively more aggressive feature reductions to maintain the frame rate while moving, and re-enable the more expensive features when no motion is detected to produce a more beautiful image, similar to what you see in forward rendering engines.
However, the frame rate check alone, run every tick, knocks off about 5 to 10 FPS, so it’s actually making performance worse.
Is there a way to dynamically scale features/frame rates depending on whether there is a need for them?
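For what it’s worth, the per-tick cost can be kept low by smoothing the frame time with a running average and only re-evaluating the quality tier every N frames. A minimal standalone sketch of that idea (all names here are illustrative, not an engine API; in UE you would feed it the tick’s DeltaTime and your own head-motion threshold):

```cpp
#include <algorithm>

// Hypothetical sketch of the adaptive-quality logic described above:
// smooth the frame time with an exponential moving average, and only
// re-evaluate the quality tier every `evalInterval` frames so the
// check itself stays cheap.
class QualityController {
public:
    QualityController(double targetMs, int evalInterval)
        : targetMs(targetMs), evalInterval(evalInterval), avgMs(targetMs) {}

    // Call once per frame with the last frame's duration in milliseconds
    // and whether the HMD moved beyond a small threshold this frame.
    void OnFrame(double frameMs, bool headMoving) {
        // Cheap per-frame work: one multiply-add for the moving average.
        avgMs = 0.9 * avgMs + 0.1 * frameMs;
        if (++frameCount % evalInterval != 0) return;  // amortize the check

        // Hysteresis: degrade when over budget (or moving and at budget),
        // recover slowly and only while the head is still.
        if (avgMs > targetMs * 1.05 || (headMoving && avgMs > targetMs)) {
            tier = std::max(0, tier - 1);        // drop to a cheaper tier
        } else if (!headMoving && avgMs < targetMs * 0.85) {
            tier = std::min(maxTier, tier + 1);  // restore detail when idle
        }
    }

    int Tier() const { return tier; }

private:
    double targetMs;
    int evalInterval;
    double avgMs;
    int frameCount = 0;
    int tier = 3;  // start at full quality
    static constexpr int maxTier = 3;
};
```

Each tier would then map to a bundle of settings (shadow quality, post-processing, and so on); the controller only decides *when* to switch, which is the part that was eating your frame budget.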
Is it the tick-event delta check that’s expensive, or the actual toggling of certain features on/off?
If it’s the feature toggling that’s consuming the time, you might consider adjusting “hmd sp” on the fly instead (or at least as a first step). The change is very cheap, since it’s already part of the Oculus frame buffer distortion, and it’s more progressive and less noticeable: it changes gradually, rather than just on/off.
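To keep the change gradual, you can step the screen percentage a little each frame rather than jumping straight to the new value. A minimal sketch, assuming you apply the result yourself via the console (the struct name, starting value, and step size are all illustrative):

```cpp
#include <algorithm>
#include <string>

// Hypothetical sketch of easing "hmd sp" toward a target instead of
// toggling features on/off. Each call moves the current screen
// percentage by at most `step` percent, so the resolution change is
// hard to notice. Actually issuing the console command (e.g. via
// the player controller's ConsoleCommand in UE4) is omitted here.
struct ScreenPercentage {
    double current = 130.0;  // assumed starting pixel density

    // Move toward `target` by at most `step` percent per call and
    // return the console command you would issue this frame.
    std::string Step(double target, double step = 2.0) {
        double delta = std::clamp(target - current, -step, step);
        current += delta;
        return "hmd sp " + std::to_string(static_cast<int>(current));
    }
};
```

Calling `Step` once per frame (or every few frames) with a lower target while the head is moving, and the full-quality target once it’s still, gives you the smooth up/down behavior described in the talk below.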
See this excellent talk by Tom Forsyth at Oculus Connect, particularly [the segment starting around 50 minutes where he discusses using this technique on the fly](https://www.youtube.com/watch?v=addUnJpjjv4#t=2993).
Hmm, thanks for that!
That’s an interesting video.
I now have a few settings that trigger automatically as soon as an HMD is detected when going fullscreen.