I haven’t gotten beyond theoretical ideas for this project yet.
But I have one question about a setup that is a bit different, and the examples I’ve looked at don’t seem to cover it.
By default when working with the Rift, you render the game world once per eye, then transform the resulting texture data to fit the format the Rift expects.
In the examples, most of this is done for you: you place one camera, set it up for Rift usage, and you get much of this automagically.
My problem is that, for the experiment I am doing, I need one camera per eye that I can modify individually: applying post-processing and changing render settings for each eye on its own.
Is it possible to set this up in an easy way using the current systems, or do I have to handle the whole setup, through to the distortion transform, on my own?