Movie Render Queue does not capture Spatial Audio

Engine Version:

· Unreal Engine 5.x (observed in multiple 5.x versions)

Platform:

· Windows

Output Method:

· Movie Render Queue (MRQ)

Issue Description:

When rendering cinematics with Movie Render Queue, Unreal Engine does not capture spatial audio. Spatial audio works as expected during real-time playback in-editor and in PIE, but it is missing from MRQ renders. For instance, if a project relies entirely on spatial audio, the resulting .wav file generated by the MRQ is silent.

This affects workflows that rely on:

· Spatialized audio sources

· Binaural / HRTF audio

· Ambisonics

· 3D positional audio intended for cinematic output

Expected Behavior:

Audio rendered via Movie Render Queue should match in-editor playback, preserving spatial positioning and spatialization settings as configured in the project (including listener position, camera cuts, and spatial audio plugins).

Actual Behavior:

· Spatial audio is lost in the final render

· Audio sounds centered / non-positional

· Behavior differs from Sequencer playback or PIE

Notes / Observations:

· This appears to be a limitation or missing feature in MRQ rather than a configuration error.

· Sequencer playback and real-time output behave correctly, suggesting the issue is specific to MRQ’s audio rendering pipeline.

· This is a major blocker for cinematic and virtual production workflows that require accurate spatial sound.

Question:

Is spatial audio support in Movie Render Queue currently unsupported or planned for a future update? If unsupported, are there recommended workflows or workarounds for rendering spatial audio intended for final cinematic output?

[Attachment Removed]

Steps to Reproduce
1. Create a level with spatialized audio sources (e.g., sounds moving around the camera or pawn and placed at different world positions).

2. Verify spatial audio works correctly in-editor or during Sequencer playback.

3. Render the same sequence using Movie Render Queue with audio enabled.

4. Review the rendered audio output.
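For step 4, a quick way to check the result objectively is to inspect the exported .wav programmatically rather than by ear. The sketch below (file names are hypothetical) flags a 16-bit PCM file as effectively silent when its peak sample stays below roughly -72 dBFS:

```python
import struct
import wave

def peak_amplitude(path):
    """Return the peak absolute sample value of a 16-bit PCM WAV file."""
    with wave.open(path, "rb") as wav:
        assert wav.getsampwidth() == 2, "expects 16-bit PCM"
        frames = wav.readframes(wav.getnframes())
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    return max((abs(s) for s in samples), default=0)

def is_effectively_silent(path, threshold=8):
    """Treat a peak below ~-72 dBFS (8 / 32768) as silence."""
    return peak_amplitude(path) <= threshold

# Demo: write a 0.1 s silent mono file and verify it reads back as silent.
with wave.open("mrq_output_test.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)
    wav.setframerate(48000)
    wav.writeframes(b"\x00\x00" * 4800)

print(is_effectively_silent("mrq_output_test.wav"))  # True for the silent demo file
```

Pointing `is_effectively_silent` at the MRQ-exported .wav gives a pass/fail answer for the "silent output" symptom.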

[Attachment Removed]

Hi Amy, for step 1, can you detail your exact workflow for enabling 3D audio in PIE? Are you using any plugins for spatialization (e.g., Atmos)?

If you’re using a spatial audio plugin that does object-based spatialization, it routes the audio outside of UE, which means MRQ won’t render it.

MRQ performs a non-real-time render of the audio engine during the export, so it should render the audio identically to the way it renders in-engine. The only difference is that the audio renderer is asked to render specific frames that match the same time delta as the game-thread tick. So it should be *exactly* the same, assuming it’s getting all the data from Sequencer in the same way.
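That frame-matched rendering can be pictured with a toy model (purely illustrative, not engine API): each video frame asks the audio renderer for exactly the samples covering that frame's time delta, so the rendered audio lines up deterministically with the sequence length:

```python
def render_sequence_audio(frame_count, fps=24, sample_rate=48000):
    """Toy model of a non-real-time audio render: each video frame requests
    exactly the samples covering that frame's time delta."""
    samples_per_frame = sample_rate // fps  # 2000 samples at 24 fps / 48 kHz
    rendered = []
    for frame in range(frame_count):
        t0 = frame * samples_per_frame / sample_rate
        # In the engine this would be "render this exact time slice";
        # here we only record the slice boundaries.
        rendered.append((t0, samples_per_frame))
    total_samples = sum(n for _, n in rendered)
    return rendered, total_samples

slices, total = render_sequence_audio(frame_count=48)
print(total, total / 48000)  # 96000 samples == exactly 2.0 seconds of audio
```

The point of the model: because every slice is requested by frame index rather than wall-clock time, the output cannot drift from real-time playback; any audible difference has to come from the data fed into the renderer, not from timing.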

I’ll investigate here to see if we can reproduce the issue, but it would be great if you could give me a very specific workflow, e.g., which attenuation settings you’re using. It might also help if you can provide a video of the audio playback in PIE and the corresponding MRQ output.

[Attachment Removed]

[mention removed] Gwen has an updated video.

[Attachment Removed]

[Content removed] Gwendolyn would you like to reply and provide the exact workflow? Thanks!

[Attachment Removed]

Gwen’s reply.

Hi Aaron,

My name is Gwendolyn Clark, I’m one of Amy’s colleagues, and I’ve encountered the spatial audio and MRQ issue here at Imagineering as well.

Regarding step 1, my workflow for enabling 3D audio in the Unreal editor thus far has relied on the native configuration in Unreal 5.4 – 5.7.

I haven’t used any spatialization plugins, such as Dolby Atmos, and I don’t believe my colleagues have either.

My process for setting up 3D audio in my UE5.6 projects is the following:

  1. I import a WAV file into the Content Browser, then drag it into a level to create an AmbientSound actor.
  2. Once the AmbientSound actor exists in the level, I create a Sound Attenuation asset for it and save that asset in its own folder in the Content Browser.
  3. After the Sound Attenuation asset has been created, I return to the AmbientSound actor, assign the asset, adjust its attenuation settings, and reposition the actor if necessary.
  4. I then test the spatial audio in the editor (i.e., I confirm I can hear it).

In my tests, the attenuation settings don’t seem to affect Movie Render Queue exports at all, with one exception: the “Enable Spatialization” flag. I’ve tried various attenuation shapes, attenuation functions, and so on, with no change. If Enable Spatialization is unchecked, the audio can still be heard in the output .wav file, but its spatial properties are lost, which defeats the purpose of using spatial audio and attenuation settings.
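For context, the distance falloff those attenuation settings control can be sketched with the usual linear attenuation curve; this is a simplified stand-in for illustration, not UE's actual Sound Attenuation implementation (the radii and units below are made up):

```python
def linear_attenuation(distance, inner_radius=400.0, falloff_distance=3600.0):
    """Simplified linear distance attenuation: full volume (1.0) inside the
    inner radius, fading to 0.0 at inner_radius + falloff_distance."""
    if distance <= inner_radius:
        return 1.0
    t = (distance - inner_radius) / falloff_distance
    return max(0.0, 1.0 - t)

for d in (0, 400, 2200, 4000, 5000):
    print(d, linear_attenuation(d))  # 1.0, 1.0, 0.5, 0.0, 0.0
```

If MRQ exports were applying this curve, a source far outside the falloff radius would render near-silent and a nearby one near full volume; the report above is that varying these parameters changes nothing in the export.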

Conversely, when Enable Spatialization is checked and I render with audio from the MRQ, any spatial audio on the MRQ’s output .wav track is silent.

I’ve passed a screen recording of a test case I created along to Amy Kong for your review. Thanks for your time.

Best,

Gwendolyn Clark

[Attachment Removed]

Thank you for the detailed steps and videos; we’re looking into it.

[Attachment Removed]