I am working on an open-source UAV simulation where I am trying to capture and save images from a camera attached to the drone. This picture gives an idea of what the game looks like. The rightmost view at the bottom is the actual view from the camera; the other two are processed versions of the same image.
Right now it's set up like this: there's a camera asset, which is read through the code as a capture component, and the three views in the screenshot are linked to that capture component. The views stream without any problem as the drone is flown around in-game. But when it comes to recording screenshots, the current code gets a TextureRenderTargetResource from the capture component, calls ReadPixels on it, and saves that data as an image. This slows the whole game down dramatically: the frame rate drops from ~120 FPS to under 10 FPS as soon as I start recording.
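For reference, here is a condensed sketch of what the recording path does today. The function and variable names (SaveCameraFrame, CaptureComponent, FilePath) are placeholders I've picked for the example; the real code is more involved, but the blocking ReadPixels call is the important part:

```cpp
// Simplified sketch of the current (blocking) capture path.
// CaptureComponent is the USceneCaptureComponent2D that the three views are fed from.
#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"
#include "ImageUtils.h"
#include "Misc/FileHelper.h"

void SaveCameraFrame(USceneCaptureComponent2D* CaptureComponent, const FString& FilePath)
{
    UTextureRenderTarget2D* RenderTarget = CaptureComponent->TextureTarget;
    FTextureRenderTargetResource* RTResource =
        RenderTarget->GameThread_GetRenderTargetResource();

    // This is the call that stalls: it flushes rendering commands and waits
    // for the render thread before copying the pixels back to the CPU.
    TArray<FColor> Pixels;
    FReadSurfaceDataFlags ReadFlags(RCM_UNorm);
    ReadFlags.SetLinearToGamma(false);
    RTResource->ReadPixels(Pixels, ReadFlags);

    // Compress to PNG and write to disk (also done on the game thread right now).
    TArray<uint8> PngData;
    FImageUtils::CompressImageArray(RenderTarget->SizeX, RenderTarget->SizeY, Pixels, PngData);
    FFileHelper::SaveArrayToFile(PngData, *FilePath);
}
```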
This question and this related article both mention that ReadPixels() “will block the game thread until the rendering thread has caught up”. The article contains sample code for a ‘non-blocking’ method of reading pixels, but I haven't been able to get it working yet. Are there more efficient methods that can achieve what I am trying to do? I am also a little confused about why the code has no trouble streaming those images on-screen, yet saving them to disk is so much more expensive.
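In case it helps, this is roughly what my attempt at the non-blocking idea looks like. It's my own adaptation, so the names and structure are mine; the intent is to enqueue the readback on the render thread and hand the game thread a future it can poll instead of flushing:

```cpp
// Sketch of a non-blocking readback attempt: the pixel copy runs on the render
// thread and the result is delivered through a TFuture, so the game thread
// never calls FlushRenderingCommands itself.
#include "Engine/TextureRenderTarget2D.h"
#include "TextureResource.h"
#include "RenderingThread.h"
#include "RHICommandList.h"
#include "Async/Future.h"

TFuture<TArray<FColor>> ReadPixelsAsync(UTextureRenderTarget2D* RenderTarget)
{
    FTextureRenderTargetResource* RTResource =
        RenderTarget->GameThread_GetRenderTargetResource();

    // Shared promise so the render-thread lambda can fulfil it later.
    TSharedRef<TPromise<TArray<FColor>>, ESPMode::ThreadSafe> Promise =
        MakeShared<TPromise<TArray<FColor>>, ESPMode::ThreadSafe>();
    TFuture<TArray<FColor>> Future = Promise->GetFuture();

    ENQUEUE_RENDER_COMMAND(ReadPixelsAsyncCmd)(
        [RTResource, Promise](FRHICommandListImmediate& RHICmdList)
        {
            // Runs on the render thread, so the game thread is not stalled
            // waiting for rendering to catch up.
            const FIntPoint Size = RTResource->GetSizeXY();
            FReadSurfaceDataFlags Flags(RCM_UNorm);
            Flags.SetLinearToGamma(false);

            TArray<FColor> OutPixels;
            RHICmdList.ReadSurfaceData(
                RTResource->GetRenderTargetTexture(),
                FIntRect(0, 0, Size.X, Size.Y),
                OutPixels,
                Flags);

            Promise->SetValue(MoveTemp(OutPixels));
        });

    return Future;
}
```

The plan would be to call this once per captured frame, poll Future.IsReady() on later ticks, and only then compress and save the pixels on a background task, so the game thread never has to wait on the readback.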