Engine Version:
· Unreal Engine 5.x (observed in multiple 5.x versions)
Platform:
· Windows
Output Method:
· Movie Render Queue (MRQ)
Issue Description:
When rendering cinematics through Movie Render Queue, Unreal Engine does not capture spatial audio correctly: the spatialized mix heard during real-time playback in-editor or in PIE is not reflected in the rendered output. For example, if a project relies entirely on spatial audio, the .wav file generated by MRQ is silent.
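A quick way to confirm the "silent .wav" symptom outside the editor is to scan the rendered file for any non-zero samples. This is a minimal diagnostic sketch, not part of MRQ; it assumes the output is 16-bit PCM (MRQ's default .wav output), and the file path in the comment is a hypothetical example:

```python
import array
import wave

def is_silent(path, threshold=10):
    """Return True if no 16-bit sample in the .wav file exceeds the threshold.

    A small non-zero threshold tolerates dither/noise-floor samples.
    """
    with wave.open(path, "rb") as wav:
        if wav.getsampwidth() != 2:
            raise ValueError("expected 16-bit PCM")
        frames = wav.readframes(wav.getnframes())
    samples = array.array("h", frames)  # interpret bytes as signed 16-bit
    return all(abs(s) <= threshold for s in samples)

# Example (hypothetical MRQ output path):
# print(is_silent("Saved/MovieRenders/MyCinematic.wav"))
```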
This affects workflows that rely on:
· Spatialized audio sources
· Binaural / HRTF audio
· Ambisonics
· 3D positional audio intended for cinematic output
Expected Behavior:
Audio rendered via Movie Render Queue should match in-editor playback, preserving spatial positioning and spatialization settings as configured in the project (including listener position, camera cuts, and spatial audio plugins).
Actual Behavior:
· Spatial audio is lost in the final render
· Audio sounds centered / non-positional
· Behavior differs from Sequencer playback or PIE
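When the render is not silent but sounds centered, comparing the left and right channels can confirm that positional panning was lost (bit-identical channels imply the mix collapsed to mono). A minimal sketch under the assumption of a 16-bit stereo PCM .wav:

```python
import array
import wave

def channels_identical(path):
    """Return True if the left and right channels of a 16-bit stereo .wav match exactly."""
    with wave.open(path, "rb") as wav:
        if wav.getnchannels() != 2:
            raise ValueError("expected a stereo file")
        if wav.getsampwidth() != 2:
            raise ValueError("expected 16-bit PCM")
        frames = wav.readframes(wav.getnframes())
    samples = array.array("h", frames)   # interleaved L/R samples
    left, right = samples[0::2], samples[1::2]
    return left == right
```

An exact match is a strong signal of a mono collapse; a spatialized source moving across the field should produce clearly diverging channels.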
Notes / Observations:
· This appears to be a limitation or missing feature in MRQ rather than a configuration error.
· Sequencer playback and real-time output behave correctly, suggesting the issue is specific to MRQ’s audio rendering pipeline.
· This is a major blocker for cinematic and virtual production workflows that require accurate spatial sound.
Question:
Is spatial audio currently unsupported in Movie Render Queue, and if so, is support planned for a future update? In the meantime, are there recommended workflows or workarounds for rendering spatial audio intended for final cinematic output?