Three cameras are aligned horizontally and each renders into its own region of a single shared set of G-buffers, so fullscreen post-processing runs only once and the result is seamless.
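For anyone curious about the setup, here is a rough sketch of how three views can share one view family (and therefore one set of G-buffers), loosely modelled on what ULocalPlayer::CalcSceneView does for split screen. RenderTarget, Scene, EngineShowFlags, TargetSize, CameraLocation and the Pane* matrices are placeholders for values set up elsewhere, not a definitive implementation:

```cpp
// Rough sketch only: three views added to one FSceneViewFamily, each with its
// own ViewRect covering a third of the shared render target.
FSceneViewFamilyContext ViewFamily(
    FSceneViewFamily::ConstructionValues(RenderTarget, Scene, EngineShowFlags)
        .SetRealtimeUpdate(true));

const int32 PaneWidth = TargetSize.X / 3;
for (int32 Pane = 0; Pane < 3; ++Pane)
{
    FSceneViewInitOptions InitOptions;
    InitOptions.ViewFamily = &ViewFamily;
    // Each view renders into its own third of the shared G-buffer.
    InitOptions.SetViewRectangle(
        FIntRect(Pane * PaneWidth, 0, (Pane + 1) * PaneWidth, TargetSize.Y));
    InitOptions.ViewOrigin = CameraLocation;                    // common origin
    InitOptions.ViewRotationMatrix = PaneRotationMatrix[Pane];  // yawed left / centre / right
    InitOptions.ProjectionMatrix = PaneProjectionMatrix;        // per-pane FOV
    ViewFamily.Views.Add(new FSceneView(InitOptions));
}
```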
Applying a Panini projection that takes into account the screen being divided into three regions then gives a natural wide-angle image.
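For reference, the mapping itself is the standard cylindrical Panini (vedutismo) projection; a generic, engine-independent sketch of it (function and parameter names are mine) looks like this:

```cpp
// Generic sketch of the Panini mapping, not engine code.
// Phi   = horizontal field angle of the ray, Theta = vertical field angle,
// D     = Panini distance parameter (0 = rectilinear, 1 = classic Panini).
#include <cmath>

struct FPoint2D { float X, Y; };

FPoint2D PaniniProject(float Phi, float Theta, float D)
{
    const float Scale = (D + 1.0f) / (D + std::cos(Phi));
    return { Scale * std::sin(Phi), Scale * std::tan(Theta) };
}
```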
At first glance it seems to be working well, but there are some problems that I haven't been able to solve on my own for a long time.
Temporal AA is not functioning properly. Jaggies can be seen in the side view, and the screen flickers.
Some of the lighting is not working correctly in the side views. It looks as if the lights' influence is computed based only on the front view.
Frustum culling is not working properly. When the viewpoint changes, there are frequent moments when nothing is rendered near the camera boundaries.
I would especially appreciate advice on Temporal Anti-Aliasing; I haven't been able to get the result I want, mainly because I have been mechanically replacing the ViewRect without fully understanding what it does.
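My rough (and possibly wrong) understanding is that the TAA jitter is a sub-pixel offset folded into the projection matrix in clip space, so it presumably has to be scaled by the size of the ViewRect actually being rendered rather than by the full G-buffer size. Purely as an illustration (not engine code, sign conventions omitted):

```cpp
// Illustration only: converting a sub-pixel jitter (in pixels) into the
// clip-space offset applied to the projection matrix.  The divisor should be
// the rendered ViewRect size, not the full G-buffer size.
struct FJitter2D { float X, Y; };

FJitter2D ClipSpaceJitter(float JitterPixelsX, float JitterPixelsY,
                          float ViewRectWidth, float ViewRectHeight)
{
    return { 2.0f * JitterPixelsX / ViewRectWidth,
             2.0f * JitterPixelsY / ViewRectHeight };
}
```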
I was able to fix the issue where the clouds were rendered stretched.
However, I haven’t been able to fix the abnormal shadows and the black side view.
Working from version 5.3, I was able to resolve most of the issues.
However, I'm wondering whether it is really necessary to stick to the approach of rendering these three cameras into the same G-Buffer and doing the panoramic rendering in a single post-process. It might be enough to render the side cameras off-screen and use alpha blending to make the seams less noticeable.
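If I went that route, I would probably give each side camera a SceneCaptureComponent2D rendering into its own render target and blend the result over the main view in a material. A rough sketch, where AMyPanoramaActor, the yaw angle and the target size are just placeholders:

```cpp
// Rough sketch only: one off-screen side view via SceneCaptureComponent2D.
// AMyPanoramaActor is a placeholder actor class; blending SideRT over the
// main view would happen in a post-process or UMG material.
#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"

void AMyPanoramaActor::SetupSideCapture()
{
    UTextureRenderTarget2D* SideRT = NewObject<UTextureRenderTarget2D>(this);
    SideRT->InitAutoFormat(1024, 1024);

    USceneCaptureComponent2D* SideCapture = NewObject<USceneCaptureComponent2D>(this);
    SideCapture->RegisterComponent();
    SideCapture->AttachToComponent(RootComponent, FAttachmentTransformRules::KeepRelativeTransform);
    SideCapture->SetRelativeRotation(FRotator(0.f, 60.f, 0.f)); // yaw toward the right-hand view
    SideCapture->FOVAngle = 60.f;
    SideCapture->TextureTarget = SideRT;
    SideCapture->bCaptureEveryFrame = true;
}
```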
Great work. I'm curious whether Unreal Engine supports native real-time cubemap rendering. I would imagine the outstanding issues might already be addressed in a native solution?