I’m looking into generating high-quality 360° stereo VR video and need more granular control over the rendering process. Specifically, I’d like to programmatically control primary rays so that I can render only the angular slices (or “slivers”) I need, rather than rendering the entire frame or full 360° sweep.
The current Stereo Panoramic Capture tool and similar plugins allow adjusting the number of steps and sweep angle, but they still rely on rendering full sub-frames for each slice. What I’m looking for is something closer to programmable primary ray dispatch, where I can define which rays are generated and which are skipped, ideally at the shader or render pass level.
- Does Unreal Engine (or the Path Tracer) expose any API or hooks for controlling primary ray generation?
- Is there an existing approach or plugin that supports partial ray dispatch for 360 stereo rendering?
- If not, would modifying the ray generation shader in the Path Tracer be the recommended route?
Any guidance or best practices for implementing this would be greatly appreciated.
Unfortunately, programmatic control over primary ray generation is not currently exposed through any API or setting. This has now been logged as a feature request for future consideration, but there are no current roadmap plans to support it.
If you want to patch the engine itself specifically for the Path Tracer, you can modify the routine CreatePrimaryRay in RayTracingCommon.ush, but note that this is a private shader file. The equivalent change for the deferred rendering path is more involved and has not been properly scoped.
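For illustration only, a patch of that sort might look roughly like the sketch below. The function name, the slice uniforms, and the camera origin are all placeholders (the real CreatePrimaryRay signature varies by engine version), and the stereo offset for the second eye is omitted for brevity:

```hlsl
// Sketch of the kind of change described above -- not verbatim engine code.
// All names below (function, uniforms) are placeholders you would have to
// plumb through yourself.

#define SKETCH_PI 3.14159265f

// Placeholder per-pass constants (not existing engine symbols):
float  SliceStartAzimuth;   // radians, start of the angular sliver
float  SliceEndAzimuth;     // radians, end of the angular sliver
float3 SketchViewOrigin;    // camera / eye position for this pass

RayDesc CreateSlicedEquirectRay(float2 UV) // UV in [0,1] across the output
{
    RayDesc Ray = (RayDesc)0;

    // Reinterpret the horizontal axis as azimuth and the vertical axis as
    // elevation, i.e. an equirectangular 360 mapping instead of a pinhole.
    const float Azimuth   = UV.x * 2.0f * SKETCH_PI;
    const float Elevation = (0.5f - UV.y) * SKETCH_PI;

    // Pixels outside the angular slice for this pass get a degenerate ray
    // (TMax = 0) that the surrounding code can treat as "skip this pixel".
    if (Azimuth < SliceStartAzimuth || Azimuth >= SliceEndAzimuth)
    {
        return Ray;
    }

    // Simple spherical direction (Y-up convention for the sketch).
    Ray.Origin    = SketchViewOrigin;
    Ray.Direction = float3(cos(Elevation) * sin(Azimuth),
                           sin(Elevation),
                           cos(Elevation) * cos(Azimuth));
    Ray.TMin = 0.0f;
    Ray.TMax = 1.0e27f;
    return Ray;
}
```

Note that returning a degenerate ray only lets the caller skip tracing; the threads are still dispatched, so to actually save work you would also want to narrow the dispatch dimensions on the C++ side to cover just the slice.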
Please note, however, that the native Stereo Panoramic Capture tool does not support the Path Tracer, only deferred rendering. The third-party plugins that support stereo 360 rendering with the Path Tracer mostly do it through stitched multi-view rendering, so I'm not sure whether the shader modifications mentioned above would play cleanly with their solutions.
It’s also worth noting that other parts of the rendering pipeline will not be aware of the shader modifications, so things like texture footprint math, shader derivatives, and other camera-projection-related operations will likely be wrong and could produce artifacts. The same goes for Nanite streaming, etc., so it is a bit of a can of worms. For example, it’s more subtle than just changing the primary ray generation because of how derivative computation works in shaders: the derivatives implicitly need to account for the camera projection.
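To make that last point concrete, here is a rough sketch (assumed names, not engine code) of what "the derivatives need to account for the camera projection" means in practice: anything that derives a texture footprint from per-pixel ray differentials has to go through the same custom mapping that generates the primary rays, or mip selection and footprint math will be wrong.

```hlsl
// Sketch only: illustrates why derivatives are tied to the projection.

// Same custom pixel->direction mapping used for the primary rays
// (the equirect mapping from the earlier sketch).
float3 SlicedEquirectDirection(float2 UV)
{
    const float Azimuth   = UV.x * 2.0f * 3.14159265f;
    const float Elevation = (0.5f - UV.y) * 3.14159265f;
    return float3(cos(Elevation) * sin(Azimuth),
                  sin(Elevation),
                  cos(Elevation) * cos(Azimuth));
}

void ComputeSlicedRayDifferentials(
    float2 UV,              // pixel centre in [0,1]^2
    float2 InvResolution,   // one-pixel step in UV space
    out float3 dDdx,
    out float3 dDdy)
{
    const float3 D = SlicedEquirectDirection(UV);

    // Finite differences one pixel apart, taken through the custom mapping
    // rather than the stock pinhole projection the engine assumes.
    dDdx = SlicedEquirectDirection(UV + float2(InvResolution.x, 0.0f)) - D;
    dDdy = SlicedEquirectDirection(UV + float2(0.0f, InvResolution.y)) - D;
}
```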
Thank you for the reply, [mention removed]. I appreciate this being logged as a feature request. Please keep us updated should this feature make it onto a roadmap.
Thank you!
Yes, definitely will do! Closing this ticket for now, but will touch base when the conversations progress internally.