
360 Panoramic Captures and Sequencer

Oct 27, 2021

Overview:

Unreal supports 360 image and video captures using the Panoramic Capture plugin. For video media, sequences are the preferred method for animation within the engine, and the Panoramic Capture tool can work with sequences, but there are limitations on the supported rendering features. This intro video and its follow-up video cover the basics of rendering a sequence with the Panoramic Capture tool.

Advanced Features:

The Movie Render Queue (MRQ) has features like tiled rendering, path tracing support, and high quality anti-aliasing, which would be useful to have with panoramic captures. Unfortunately, the Panoramic Capture plugin is based on the Cubemap Capture component while MRQ is based on rendering to an off-screen render target in a traditional 2D rasterization system, and combining these two systems would be non-trivial. As a result they do not currently work with each other.

This article suggests code changes that you could make to the Movie Render Queue to try creating panoramic images while still retaining the advanced features of the MRQ. Note: This is an advanced use case and will require a fair amount of C++ knowledge and general rendering knowledge. Also while we go into a fair amount of detail, this is not a complete guide. Some R&D on your part is required to implement this fully.

Panoramic captures are a non-trivial problem to solve in traditional rasterization pipelines. For a perfect panorama/projection, all of the data that hits the ‘sensor’ would have to be focused to a single point in space. In the physical world this isn’t possible because the sensor and its surrounding optics take up physical space. In the digital world it is possible, but it is not something that Unreal currently supports, even with the Path Tracer.

The traditional rasterization pipeline is heavily centered around linear perspective based on a view frustum, and all of Unreal’s post-processing effects (such as Depth of Field, Motion Blur, etc.) are built with this in mind. The Path Tracer also shares post-processing effects with the traditional rasterization pipeline.

When you see a panoramic image taken from a real location, it’s typically produced from a series of images that are distorted and blended together. Due to the physical constraints of real cameras, it isn’t possible to capture all of the light from a single point in space. This typically results in visible artifacts for objects near the camera, but is generally workable for things farther away.

The Movie Render Queue can be coerced into mimicking this real-world camera behavior, which allows a similar stitching process to work. This article discusses code changes you could make to allow the MRQ to capture many images (from different perspectives) at the same time, which ensures that the world stays in sync across all views.

To utilize the Movie Render Queue for panoramic captures, you would need to:

1. Capture images from the various orientations needed. The more orientations you capture from, the less distortion in your final image, but the longer the render times. This article discusses how to do this step.
2. Take the undistorted images, blend them together into a sphere, and then project that onto a texture map (either a cube or an equirectangular projection depending on your needs/target software). This is not covered in depth by this article as the math is quite nuanced, but there are third-party software packages (typically meant for use with traditional cameras) that may help with the blending.

Capturing the Images:

There are a variety of concerns to look out for, chiefly the use of screen-space effects: screen space reflections, vignetting, lens flares, etc. If these are applied to each of the images you render, you will see artifacts in the final projection (e.g. a repeating vignette). Depth of Field may also be an issue, especially for near objects. These settings can be turned off in the Post Process Volume to avoid these issues. This may also be automatable in code (via the ShowFlags data structure and console commands in your MRQ configuration) to reduce the number of issues that arise from users forgetting to disable these effects in their Post Process Volumes.
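These effects can also be forced off per view in code. The helper below is a minimal sketch (the function name is hypothetical); each call is one of the per-flag setters that FEngineShowFlags provides:

#include "ShowFlags.h"

// Hypothetical helper: force off the screen-space and lens effects that break panoramic stitching.
static void DisablePanoramaUnfriendlyEffects(FEngineShowFlags& ShowFlags)
{
    ShowFlags.SetScreenSpaceReflections(false); // reflections differ per face and will not line up
    ShowFlags.SetLensFlares(false);             // screen-space flares repeat in every face
    ShowFlags.SetVignette(false);               // produces a repeating darkened border per face
    ShowFlags.SetDepthOfField(false);           // focus falloff is view dependent, especially up close
    ShowFlags.SetBloom(false);                  // optional: bloom can bleed differently per face
}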

Assuming these issues are addressed, when implementing the actual capture, multiple viewports will need to be sampled at a single slice of time. Unfortunately, to make this work with the Movie Render Queue’s advanced features (spatial/temporal sampling), we cannot use any of the built-in Scene Capture Component types.

The MRQ needs to very precisely control the motion blur amount on a frame-by-frame basis, as well as control an internal “Frame Index” so that the sub-pixel jitter used by anti-aliasing is correct. Additionally, the MRQ accumulates these samples on the CPU, so it needs to schedule a readback from the GPU. There are a lot of moving parts with very specific sub-pixel values, so getting the benefits of the MRQ means forgoing the built-in Scene Capture Component types.

The most straightforward approach might be to put six cameras into the Sequence, change which one is bound to by the Camera Cut Track, and then render the sequence six times. Keep in mind: the fewer orientations you capture from, the more distortion there will be in the final image.

From an MRQ perspective this should work (the MRQ is deterministic in its choices about frame indexes, etc.), and anything controlled by Sequencer should produce very similar results each run. However, anything that is not controlled by Sequencer may produce different results each run, so you may have issues with particles, physics, foliage movement, anything driven by map time, etc. In practice there is a high chance that each render will produce a different result and this approach will not work, unless you can ensure some form of determinism in these systems.

Blending Images:

After the images are rendered, you will need to map every pixel of your target projection (equirectangular) to your source images (six or more images arranged in a cube). The Panoramic Capture plugin may be a good starting point for code that does this projection remapping, but it is not a full solution.

The math in the Panoramic Capture plugin assumes that you have one image arranged in a pre-existing cubemap layout, while the MRQ will be generating six (or more) faces as separate images. You may be able to adapt this code for your needs, or consider using third-party software for this step.
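If you write the remap yourself, the core of it is converting each output pixel’s longitude/latitude into a direction and then working out which face that direction lands on. A minimal CPU-side sketch is below (struct and function names are hypothetical, and the axis/sign conventions must be matched to however you oriented the six MRQ cameras):

#include "CoreMinimal.h"

// Which source face a direction falls in, and where within that face to sample.
struct FCubeFaceSample
{
    int32 FaceIndex; // 0..5, in whatever order you rendered the faces
    float U;         // 0..1 within that face
    float V;
};

// Convert an equirectangular output pixel to a direction. Longitude sweeps -PI..PI across
// the image, latitude +PI/2..-PI/2 from top to bottom.
static FVector EquirectPixelToDirection(int32 X, int32 Y, int32 OutWidth, int32 OutHeight)
{
    const float Lon = ((X + 0.5f) / OutWidth) * 2.0f * PI - PI;
    const float Lat = PI * 0.5f - ((Y + 0.5f) / OutHeight) * PI;
    return FVector(FMath::Cos(Lat) * FMath::Cos(Lon),
                   FMath::Cos(Lat) * FMath::Sin(Lon),
                   FMath::Sin(Lat));
}

// Pick the dominant axis; that determines which of the six renders contains this direction.
// NOTE: which component maps to U vs. V (and their signs) depends entirely on how the six
// cameras were oriented -- expect to tune this against test renders.
static FCubeFaceSample DirectionToCubeFace(const FVector& Dir)
{
    const FVector A = Dir.GetAbs();
    FCubeFaceSample Out;
    if (A.X >= A.Y && A.X >= A.Z)
    {
        Out.FaceIndex = Dir.X > 0.0f ? 0 : 1;
        Out.U = 0.5f * (Dir.Y / A.X + 1.0f);
        Out.V = 0.5f * (Dir.Z / A.X + 1.0f);
    }
    else if (A.Y >= A.Z)
    {
        Out.FaceIndex = Dir.Y > 0.0f ? 2 : 3;
        Out.U = 0.5f * (Dir.X / A.Y + 1.0f);
        Out.V = 0.5f * (Dir.Z / A.Y + 1.0f);
    }
    else
    {
        Out.FaceIndex = Dir.Z > 0.0f ? 4 : 5;
        Out.U = 0.5f * (Dir.X / A.Z + 1.0f);
        Out.V = 0.5f * (Dir.Y / A.Z + 1.0f);
    }
    return Out;
}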

Modifying the MRQ:

(This will require C++ experience and a fair amount of R&D. We recommend familiarizing yourself with how these functions work in the “standard” case of the MRQ before attempting to modify them to suit the panoramic case.)

This article contains suggestions and lists specific functions that our engineers suspect will need to be modified or adapted for this use case. It is not a complete guide on what code changes are needed.

It’s possible to implement your own render passes in the MRQ without modifying engine code, and you can use this to automatically handle switching your cameras for producing the initial images. Each render pass can capture multiple images all taken at the exact same time in-game. Additionally, with the correct setup, you can maintain support for TAA and other effects that rely on history.

This assumes that you have created your own Render Pass class based on UMoviePipelineImagePassBase as a plugin in your game, set up the dependencies on the Movie Render Pipeline plugin correctly, and linked against the correct modules (MovieRenderPipelineCore, MovieRenderPipelineRenderPasses).
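As a starting point, the header for such a pass might look roughly like the sketch below. The class and file names are hypothetical, the exact virtual signatures should be checked against the MRQ source in your engine version, and your plugin’s Build.cs must list MovieRenderPipelineCore and MovieRenderPipelineRenderPasses as dependency modules. It derives from UMoviePipelineDeferredPassBase, following the suggestion later in this article:

#pragma once

#include "CoreMinimal.h"
#include "MoviePipelineDeferredPasses.h"   // UMoviePipelineDeferredPassBase (MovieRenderPipelineRenderPasses module)
#include "MyPanoramicPass.generated.h"

// Hypothetical panoramic pass that renders one view per cube face per sample.
UCLASS()
class UMyPanoramicDeferredPass : public UMoviePipelineDeferredPassBase
{
    GENERATED_BODY()

protected:
    // Declares one uniquely named output image per cube face.
    virtual void GatherOutputPassesImpl(TArray<FMoviePipelinePassIdentifier>& ExpectedRenderPasses) override;

    // Builds and submits one view per cube face at the same point in game time.
    virtual void RenderSample_GameThreadImpl(const FMoviePipelineRenderPassMetrics& InSampleState) override;
};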

(The code discussed below assumes Unreal 4.26 or later, which refactored some of the render pass code to make it easier to extend.)

Creating a frame in the Movie Render Queue is generally a two-step process. In the first step, a render pass must notify the system which images it will produce; in the second step, it fulfills that contract and actually provides those images. This allows for asynchronous processes that are spread across multiple frames: for example, you can declare that you will produce six images for Frame 7 and then, many frames later, actually provide those images (tagged with metadata that associates them with Frame 7), and the output system will handle this correctly, preserving the correct order.

Your custom render pass will need to override GatherOutputPassesImpl to provide a unique name for each render (e.g. “FinalImage_Top”, “FinalImage_Left”, etc.). This function is automatically called at the beginning of each frame, so you just need to add the names of the images that you will produce.

In the panoramic case you would likely declare your intent to produce one image per side of the cubemap. All produced images must have a unique name. Failure to correctly implement this declare/fulfill process will result in no frames being written to disk, because the system will keep waiting for part of a frame to finish before it can pass the frame on to the output systems. Frame ordering is very important for video codecs, so the system waits until all image data is available for a given frame before moving on.
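A minimal sketch of that declaration step for the hypothetical pass above (the face names are arbitrary, and whether you also call the Super implementation depends on how much of the base pass you reuse):

void UMyPanoramicDeferredPass::GatherOutputPassesImpl(TArray<FMoviePipelinePassIdentifier>& ExpectedRenderPasses)
{
    // Intentionally not calling Super here so the base class does not also declare its
    // default "FinalImage" output; adjust to suit how much of the base pass you keep.
    static const TCHAR* FaceNames[] = { TEXT("Front"), TEXT("Back"), TEXT("Left"), TEXT("Right"), TEXT("Top"), TEXT("Bottom") };
    for (const TCHAR* FaceName : FaceNames)
    {
        // Every image produced for a frame must have a unique identifier, e.g. "FinalImage_Front".
        ExpectedRenderPasses.Add(FMoviePipelinePassIdentifier(FString::Printf(TEXT("FinalImage_%s"), FaceName)));
    }
}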

When the MRQ renders the player’s view, the function below is called to calculate the view matrix used by the renderer. It already contains example code demonstrating how to rotate the view matrix by 90°:

FSceneView* UMoviePipelineImagePassBase::GetSceneViewForSampleState(FSceneViewFamily* ViewFamily, FMoviePipelineRenderPassMetrics& InOutSampleState)

You can re-implement this function and add a parameter for passing in an index that determines the rotation to apply to the camera. Since this function is not virtual, you would also need to re-implement its caller:

TSharedPtr<FSceneViewFamilyContext> UMoviePipelineImagePassBase::CalculateViewFamily(FMoviePipelineRenderPassMetrics& InOutSampleState)
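For example, that index could select a fixed rotation offset per cube face, which your re-implemented view calculation composes with the sequence camera’s rotation. A minimal sketch (the helper name is hypothetical; each face would also be rendered with a 90° horizontal and vertical FOV so the six views cover the full sphere):

// Rotation offsets for the six cube faces, applied on top of the camera's rotation
// when building each view's matrices. FRotator takes (Pitch, Yaw, Roll).
static FRotator GetCubeFaceRotationOffset(int32 FaceIndex)
{
    switch (FaceIndex)
    {
        default:
        case 0: return FRotator(  0.f,   0.f, 0.f); // Front
        case 1: return FRotator(  0.f,  90.f, 0.f); // Right
        case 2: return FRotator(  0.f, 180.f, 0.f); // Back
        case 3: return FRotator(  0.f, -90.f, 0.f); // Left
        case 4: return FRotator( 90.f,   0.f, 0.f); // Up
        case 5: return FRotator(-90.f,   0.f, 0.f); // Down
    }
}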

When an image is rendered, a view family (FSceneViewFamilyContext) is calculated and then passed to the renderer. You can see an example of this in:

void UMoviePipelineDeferredPassBase::RenderSample_GameThreadImpl(const FMoviePipelineRenderPassMetrics& InSampleState)

TSharedPtr<FSceneViewFamilyContext> ViewFamily = CalculateViewFamily(InOutSampleState);

FRenderTarget* RenderTarget = GetViewRenderTarget()->GameThread_GetRenderTargetResource();

FCanvas Canvas = FCanvas(RenderTarget, nullptr, GetPipeline()->GetWorld(), ERHIFeatureLevel::SM5, FCanvas::CDM_DeferDrawing, 1.0f);

GetRendererModule().BeginRenderingViewFamily(&Canvas, ViewFamily.Get());

// Readback + Accumulate.

PostRendererSubmission(InOutSampleState, PassIdentifier, GetOutputFileSortingOrder(), Canvas);

You may want to make a new image pass which derives from UMoviePipelineDeferredPassBase and overrides RenderSample_GameThreadImpl. Within this overridden function, enter a loop that calculates the view six (or more) times and submits those renders to the GPU, scheduling a readback after each one. You will need to provide a different FSceneViewStateInterface and UTextureRenderTarget2D for each view (so that their histories stay separate).
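A heavily simplified outline of that loop is shown below. FaceViewStates, FaceRenderTargets, FacePassIdentifiers, and CalculateViewFamilyForFace are hypothetical members of the pass sketched earlier; the real implementation needs to mirror the rest of what UMoviePipelineDeferredPassBase::RenderSample_GameThreadImpl does in your engine version:

void UMyPanoramicDeferredPass::RenderSample_GameThreadImpl(const FMoviePipelineRenderPassMetrics& InSampleState)
{
    for (int32 FaceIndex = 0; FaceIndex < 6; ++FaceIndex)
    {
        FMoviePipelineRenderPassMetrics InOutSampleState = InSampleState;

        // A re-implemented CalculateViewFamily that takes the face index, applies the per-face
        // rotation offset, and uses FaceViewStates[FaceIndex] so TAA/exposure histories stay separate.
        TSharedPtr<FSceneViewFamilyContext> ViewFamily = CalculateViewFamilyForFace(InOutSampleState, FaceIndex);

        // One UTextureRenderTarget2D per face.
        FRenderTarget* RenderTarget = FaceRenderTargets[FaceIndex]->GameThread_GetRenderTargetResource();
        FCanvas Canvas = FCanvas(RenderTarget, nullptr, GetPipeline()->GetWorld(), ERHIFeatureLevel::SM5, FCanvas::CDM_DeferDrawing, 1.0f);

        // Submit this face's render to the GPU.
        GetRendererModule().BeginRenderingViewFamily(&Canvas, ViewFamily.Get());

        // Schedule the readback + accumulation. The identifier must match the name declared
        // in GatherOutputPassesImpl (e.g. "FinalImage_Front").
        PostRendererSubmission(InOutSampleState, FacePassIdentifiers[FaceIndex], GetOutputFileSortingOrder(), Canvas);
    }
}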

If you examine the MoviePipelineDeferredPasses.cpp implementation in 4.26, you can see the newly added “Stencil Layer” feature. It renders the world once per user-defined layer (and keeps their histories separate), so you can follow that implementation fairly closely. You will just need to re-implement the functions mentioned above, as they cannot be inherited as the code currently stands, or you can modify the engine code directly.

This should allow you to get six or more images for each frame while supporting post-processing effects. As an added benefit, the high-resolution tiling feature should work (though it doesn’t support some post-processing effects), which would allow you to capture very high resolution cubemap faces.
