I recently visualized a volumetric simulation in a CAVE (a room with 3D screens as walls) using nDisplay and, due to a bug, the clouds were rendered in mono on all the screens. It took us some time to notice, because the clouds are so large. See below for the scale of a hurricane.
Volumetric clouds are very large and inherently blurry. Beyond roughly 100 m, the fixed stereo separation of an HMD produces almost no binocular disparity between the two eyes, and even the human visual system struggles to judge depth at that range. Since clouds sit much further away than that, it may be worth rendering them only once and presenting the same image to both eyes. I believe the user would not notice the difference, but it needs testing.
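To put a rough number on why stereo stops mattering at cloud distances, here is a small back-of-the-envelope calculation (the 64 mm IPD is an assumed average, not a measured value):

```python
import math

IPD_M = 0.064  # assumed average interpupillary distance (~64 mm)

def disparity_arcmin(distance_m, ipd_m=IPD_M):
    """Binocular disparity (vergence angle) for a point at `distance_m`,
    in arcminutes. It shrinks with distance, which is why a far cloud
    looks nearly identical from both eyes."""
    angle_rad = 2.0 * math.atan((ipd_m / 2.0) / distance_m)
    return math.degrees(angle_rad) * 60.0

for d in (1, 10, 100, 1000, 10000):
    print(f"{d:>6} m -> {disparity_arcmin(d):8.3f} arcmin")
```

At 100 m the disparity is only about 2.2 arcminutes, and for clouds kilometers away it drops well below what a current HMD panel can resolve per pixel, so a shared mono render plausibly looks the same.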
Currently, enabling volumetric clouds in VR costs more than a 2.5x drop in performance. Rendering them monoscopically could recover much of that cost. The same approach should work even better for the SkyAtmosphere, since it sits even further away, effectively at infinity.
How would one go about rendering the Volumetric Clouds and the SkyAtmosphere only once and sharing the result between both eyes on VR headsets?
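Roughly what I have in mind, as a minimal NumPy sketch: ray-march the cloud layer once from a single centered view, then "over"-composite that one shared layer onto each eye's opaque scene color instead of marching it twice. All buffer names and values here are made up for illustration; in-engine this would happen wherever the volumetric render target is composited over the scene.

```python
import numpy as np

H, W = 4, 4  # tiny stand-in for a per-eye resolution

# Hypothetical inputs: opaque scene color per eye (still stereo), and a
# single cloud layer with premultiplied alpha rendered once from a
# centered (mono) view.
left_eye  = np.full((H, W, 3), 0.20)
right_eye = np.full((H, W, 3), 0.25)
cloud_rgb   = np.full((H, W, 3), 0.70)  # premultiplied cloud color
cloud_alpha = np.full((H, W, 1), 0.50)  # cloud coverage / opacity

def composite_cloud(eye_color, rgb, alpha):
    """'Over' composite of the shared mono cloud layer onto one eye."""
    return rgb + eye_color * (1.0 - alpha)

# The single cloud render is applied to both eyes; only this cheap
# composite runs per eye, not the expensive ray march.
left_out  = composite_cloud(left_eye,  cloud_rgb, cloud_alpha)
right_out = composite_cloud(right_eye, cloud_rgb, cloud_alpha)
```

The open question for me is where to hook this into the engine's stereo pipeline so the cloud/atmosphere pass runs once per frame rather than once per view.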