First things first: I'm fairly new to Unreal Engine, so forgive me if I'm not 100% correct.
I am trying to build a kind of stereoscopic 3D camera rig that supports depth of field. The idea is to have two cameras that are slightly offset from each other and to make them always look at one special point.
This point sits on an infinite line running between the cameras. I want to move this point away from the cameras until it hits solid geometry.
I would then render the two images side by side with the correct amount of lens distortion. Once this setup is complete, I could simply import it into any of my levels and use the third-party software TrinusVR.
Trinus streams the image data from the game to my smartphone, mounted in a Google Cardboard HMD, and the phone sends its head-tracking data back to the PC, which converts it into in-game camera movement.
The idea behind the dual camera setup is that there is one camera for each eye, so the two images are slightly different, except for the part you are looking at, i.e. the point your "eyes" converge on.
I think this approach would give you the most realistic depth of field, because it is not calculated by the computer but produced by your own brain.
I have already used this method, which I call DCSF (dual camera stereoscopic focus), in other 3D applications, but I wanted things to render in real time, so I decided to use UE4.
If you have any ideas how I could realize this, let me know!