I’m currently working on a VR game, and we would like to make it possible to adjust the viewport of the left and right eye separately, so that people who are cross-eyed can adjust each eye and play the game clearly.
I’ve tried to work out from the Oculus plugin code how each eye is rendered and where I could change it so that I can expose an offset for each eye to Blueprint, but with no success so far.
I found the FOculusHMD::CalculateStereoViewOffset method in OculusHMD.cpp, but I’m not sure if that is the right way to go.
Can anyone point me in the right direction, or tell me whether this is even possible?
A potential solution revolves around using stereoscopic materials to address the projection of each eye separately, though I’m not sure whether that would be helpful in your case.
CalculateStereoViewOffset assumes a vision model where both eyes look in the same direction. For someone cross-eyed, you would have to change the direction one of the eyes looks in, and hence use a different projection. I think this would be tricky for their brain in any case, since it is used to compensating for cross-vision and now you would be simulating a “normal” condition. I’m not sure what the reaction would be (including potential simulator sickness).
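If you do want to experiment with it anyway, that method is the natural place. A minimal sketch of the idea, assuming you build the plugin from source (LeftEyeYawDegrees / RightEyeYawDegrees are hypothetical settings you would have to add yourself, not part of the plugin):

```cpp
// Hypothetical modification inside FOculusHMD::CalculateStereoViewOffset
// (OculusHMD.cpp) -- a sketch of the idea, not the actual plugin code.
void FOculusHMD::CalculateStereoViewOffset(const enum EStereoscopicPass StereoPassType,
	FRotator& ViewRotation, const float WorldToMeters, FVector& ViewLocation)
{
	// ... the existing plugin code computes the per-eye ViewLocation /
	// ViewRotation from the HMD pose and IPD here ...

	// Then toe each eye in or out by a user-adjustable amount.
	if (StereoPassType == eSSP_LEFT_EYE)
	{
		ViewRotation.Yaw += LeftEyeYawDegrees;   // assumed setting, added by you
	}
	else if (StereoPassType == eSSP_RIGHT_EYE)
	{
		ViewRotation.Yaw += RightEyeYawDegrees;  // assumed setting, added by you
	}
}
```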
Anyway, you can try looking into the code for Instanced Stereo Rendering; that’s where some of the VR “magic” happens when that mode is active, and where the two view instances, one for each eye, are created.
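As a starting point, the per-eye views are created roughly along this path in UE4 (this is from memory, so verify it against your engine version):

```cpp
// Rough call flow for stereo view setup in UE4 -- a sketch, not exact code:
//
// UGameViewportClient::Draw()
//   -> for each eye pass (eSSP_LEFT_EYE, then eSSP_RIGHT_EYE):
//        ULocalPlayer::CalcSceneView(..., StereoPass)        // LocalPlayer.cpp
//          -> IStereoRendering::CalculateStereoViewOffset(    // per-eye view transform
//                 StereoPass, ViewRotation, WorldToMeters, ViewLocation)
//          -> IStereoRendering::GetStereoProjectionMatrix(    // per-eye projection
//                 StereoPass)
```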
Very interesting project though. Would love to learn more about it. Let’s stay in touch.
Thanks for the quick reply, Marco! I checked the linked thread, but I don’t think that’s what I’m looking for. I already use something similar with a post-process material to show objects in only one of the eyes (which is also part of the project).
I’ll check the C++ code to see if I can find out where the VR instances are rendered.
Looking at IStereoRendering, I see that it is implemented in the Oculus plugin, which is where I get to the implementation of CalculateStereoViewOffset. Looking at that method, I’m not sure whether it returns values to the cameras for rendering or whether it is used to offset the position and rotation of the projection matrix.
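For reference, the declarations in StereoRendering.h look roughly like this (exact signatures vary a bit between engine versions), which suggests the view transform and the projection matrix are handled by two separate calls:

```cpp
// From Engine/Source/Runtime/Engine/Public/StereoRendering.h (UE4):

// Modifies ViewRotation and ViewLocation in place -- this is the per-eye
// camera transform, not the projection matrix.
virtual void CalculateStereoViewOffset(const enum EStereoscopicPass StereoPassType,
	FRotator& ViewRotation, const float WorldToMeters, FVector& ViewLocation) = 0;

// The per-eye projection matrix comes from a separate call.
virtual FMatrix GetStereoProjectionMatrix(const enum EStereoscopicPass StereoPassType) const = 0;
```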
Correct me if I’m wrong, but shouldn’t I be finding a way to set the position and rotation of the two “cameras” (if they are even used) separately before the frame is rendered? And if so, where would I find the code for that, or the virtual function to override to solve this?
If that is the case, that is fine with me. Then the question is: where can I find the code that renders the views, so I can add the offset for each eye and possibly add Blueprint variables to influence the offset at runtime?
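For the runtime part, I’m thinking of something like this (just a sketch; the vr.LeftEyeYawOffset / vr.RightEyeYawOffset console variables don’t exist in stock UE4, I would add them next to the modified eye code):

```cpp
#include "HAL/IConsoleManager.h"

// Hypothetical console variables to live next to the modified eye code;
// not part of stock UE4 or the Oculus plugin.
static TAutoConsoleVariable<float> CVarLeftEyeYawOffset(
	TEXT("vr.LeftEyeYawOffset"), 0.0f,
	TEXT("Extra yaw in degrees applied to the left eye view."));

static TAutoConsoleVariable<float> CVarRightEyeYawOffset(
	TEXT("vr.RightEyeYawOffset"), 0.0f,
	TEXT("Extra yaw in degrees applied to the right eye view."));

// Read inside the modified CalculateStereoViewOffset:
// ViewRotation.Yaw += (StereoPassType == eSSP_LEFT_EYE)
//     ? CVarLeftEyeYawOffset.GetValueOnGameThread()
//     : CVarRightEyeYawOffset.GetValueOnGameThread();
```

From Blueprint these could then be changed at runtime with an Execute Console Command node, e.g. `vr.LeftEyeYawOffset 1.5`.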
Thanks, @mordentral. I experimented with that function before; I figured it had to be that, but I got strange results. I guess I need a better understanding of the projection matrix. I also wasn’t really sure whether that function was called before or after the views are rendered.
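If I understand it correctly so far, the idea would be to nudge the off-center terms of the matrix that comes out of GetStereoProjectionMatrix (assuming that’s the function in question). A sketch of my current understanding, with made-up offset variables, untested:

```cpp
// Made-up, user-adjustable values (clip-space units), e.g. fed from the
// console variables above.
static float LeftEyeImageOffsetX = 0.0f;
static float RightEyeImageOffsetX = 0.0f;

// Imagine applying this to the matrix the plugin computes, just before it
// is returned from GetStereoProjectionMatrix.
FMatrix AdjustStereoProjectionMatrix(const enum EStereoscopicPass StereoPassType, FMatrix Proj)
{
	// In UE4's projection matrix layout, M[2][0] and M[2][1] hold the
	// off-center (asymmetric frustum) terms; adding to M[2][0] shifts the
	// rendered image horizontally in clip space.
	Proj.M[2][0] += (StereoPassType == eSSP_LEFT_EYE)
		? LeftEyeImageOffsetX
		: RightEyeImageOffsetX;
	return Proj;
}
```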
I’ll dig deeper into the projection matrix then and see if I can make it work.