Hi guys,
I’m implementing OmniStereo (Omni-directional Stereographic Panoramas) rendering in Unreal using the per-vertex displacement approach.
Here’s my fork:
Remember to check out the -omnistereo branch.
It’s written on top of the newly released 4.10; you should be able to rebase it onto older versions.
It adds a set of OmniStereo properties to the SceneCapture/SceneCaptureCube objects as well as the Camera object: just check OmniStereoEnabled and set the eye separation, sphere radius, and up vector. Reasonable values are 6.5, 500, and (0,0,1) respectively. Currently the code only captures one eye (positive eye separation = left eye, negative = right eye), so you have to capture twice, or use two SceneCaptureCube objects, to get both eyes.
My implementation uses a function in BasePassVertexShader to transform the WorldPosition of each vertex for stereo disparity. Doing only this results in a lot of shadow and lighting problems, because Unreal uses deferred shading and transforms screen positions back to world positions with a ScreenToWorldMatrix — which no longer matches the displaced geometry. To solve this, I added an additional GBuffer that stores the original world position, and changed a few shaders to read the world position from that GBuffer instead of going through the ScreenToWorldMatrix. It’s not complete yet, so you’ll still see some shadow and lighting artifacts.
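For anyone curious what the per-vertex displacement boils down to mathematically: each vertex is shifted opposite a direction-dependent eye offset, so rendering from the fixed capture centre reproduces the view from an eye offset along the horizontal tangent. Here’s a minimal engine-free C++ sketch of that idea — the names (`displaceVertex`, `Vec3`) are mine, not the fork’s, and sphere-radius handling plus the actual shader integration are omitted:

```cpp
#include <cmath>
#include <cassert>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 v) {
    double n = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / n, v.y / n, v.z / n};
}

// Shift a world-space vertex opposite the per-direction eye offset, so a
// camera fixed at `center` sees what an eye displaced by eyeSep/2 along the
// horizontal tangent would see. Positive eyeSep = left eye, matching the
// sign convention above. (Hypothetical helper, not code from the fork.)
Vec3 displaceVertex(Vec3 vertex, Vec3 center, Vec3 up, double eyeSep) {
    Vec3 dir = normalize(sub(vertex, center));   // viewing direction to vertex
    Vec3 tangent = normalize(cross(up, dir));    // horizontal, perpendicular to dir
    double s = 0.5 * eyeSep;
    return {vertex.x - tangent.x * s,
            vertex.y - tangent.y * s,
            vertex.z - tangent.z * s};
}
```

Note that this is degenerate when the view direction is parallel to the up vector (the poles), and — as described above — the original, undisplaced world position still has to be written to the extra GBuffer so the deferred lighting passes see consistent positions.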
I’m new to the engine — is there a way to record a video frame by frame from the SceneCaptureCube?