In my VR game I want an in-game screen that displays the action, captured by a camera into a 2D render target. However, this adds quite a lot to the GPU/render time, and that is proving challenging.
So I was wondering: would it somehow be possible to stagger the different parts of the render process over a number of frames, instead of doing all of it in a single frame?
The game, being VR, runs at 90 fps, but I would be fine with the in-game screen updating at only 30 fps. The problem is that even if I render the capture only every third frame, I get the same hitch/peak in GPU usage on that frame, so it is almost as bad as rendering it every frame.
What if it were possible to, for example, do something like this (sorry for the non-technical descriptions):
Frame 1: Store the geometry and issue the draw calls
Frame 2: Do triangle/fragment visibility culling and fill the depth buffer
Frame 3: Do lighting + shading, and produce/display the final pixels
Would something like this be possible?