Audio sources will only be included in the output .wav if spatialization is NOT enabled in their attenuation settings (regardless of where that attenuation is applied: sound, cue, or map level). You can have every other form of attenuation turned on, but for some reason enabling spatialization means the sound won't render.
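For reference, the flag I'm talking about is the "Enable Spatialization" toggle (bSpatialize) on the attenuation settings. Here's a rough C++ sketch of flipping it off on an attenuation asset before a render; the function name and the editor-utility workflow are my own assumptions, not an official feature:

```cpp
// Sketch: disable spatialization on a USoundAttenuation asset so the sounds
// that reference it get included in the rendered .wav. Intended to be called
// from an editor utility before kicking off the render; adapt to your assets.
#include "Sound/SoundAttenuation.h"

void DisableSpatializationForRender(USoundAttenuation* AttenuationAsset)
{
    if (!AttenuationAsset)
    {
        return;
    }

    // bSpatialize lives on FSoundAttenuationSettings. With it off, the sound
    // still obeys distance/volume attenuation but is no longer panned, which
    // (per the behaviour above) lets it reach the output .wav.
    AttenuationAsset->Attenuation.bSpatialize = false;

    // Mark the asset dirty so the change can be saved (editor-only workflow).
    AttenuationAsset->MarkPackageDirty();
}
```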
If I try to render using the Movie Scene Capture instead:
The audio and visuals start rendering at the same time, but the audio plays back in real time while the video renders each frame as fast as it can. The problem is that the audio responds to changes happening in the video render, so if my camera turns to the right in the visual render, all spatialized sounds shift to the left at that moment, even though it's completely out of sync with when that should happen in the scene.
I don't mind which one I use, I just need a reliable way of doing this. I'm aware I could use internal audio-routing software on my computer and record Unreal's output while playing the sequence back in the Sequencer window, but I can't believe we're at UE5 and people still have to do that.
Ended up speaking to a UE dev directly, and you literally have to record the output via internal routing software like Voicemeeter into a DAW like Reaper, then sync the footage and audio up separately. Very old school, ridiculous I know. I wrote a little Blueprint that created a 'virtual clapperboard', so there were sounds and colours flashing in sync at the start of my footage and I could easily match the frames to the waveform peaks (rough sketch below).
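For anyone who wants to copy the idea, here's a rough C++ equivalent of that clapperboard Blueprint (mine was done in visual scripting; the beep asset, light intensities, and timings below are illustrative placeholders, so treat this as a sketch rather than a drop-in actor):

```cpp
// Sketch of a "virtual clapperboard" actor: at BeginPlay it flashes a light and
// plays a short non-spatialized 2D beep a few times, giving you visual frames
// and waveform peaks to line up when syncing footage and audio in the DAW.
#include "GameFramework/Actor.h"
#include "Components/PointLightComponent.h"
#include "Kismet/GameplayStatics.h"
#include "Sound/SoundBase.h"
#include "ClapperboardActor.generated.h"

UCLASS()
class AClapperboardActor : public AActor
{
    GENERATED_BODY()

public:
    AClapperboardActor()
    {
        FlashLight = CreateDefaultSubobject<UPointLightComponent>(TEXT("FlashLight"));
        RootComponent = FlashLight;
        FlashLight->SetMobility(EComponentMobility::Movable);
        FlashLight->SetIntensity(0.f); // start dark
    }

    // Short beep asset to play on each flash (assign in the editor).
    UPROPERTY(EditAnywhere, Category = "Clapperboard")
    USoundBase* BeepSound = nullptr;

    UPROPERTY(EditAnywhere, Category = "Clapperboard")
    int32 NumFlashes = 3;

    UPROPERTY(EditAnywhere, Category = "Clapperboard")
    float FlashInterval = 1.f;

protected:
    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        // Fire the first flash immediately, then repeat at FlashInterval.
        GetWorldTimerManager().SetTimer(
            FlashTimer, this, &AClapperboardActor::Flash,
            FlashInterval, /*bLoop=*/true, /*FirstDelay=*/0.f);
    }

    void Flash()
    {
        // Visual spike: drive the light to full intensity for a single frame.
        FlashLight->SetIntensity(50000.f);
        GetWorldTimerManager().SetTimerForNextTick([this]()
        {
            FlashLight->SetIntensity(0.f);
        });

        // Audio spike: 2D (non-spatialized) beep so it lands in the recording.
        if (BeepSound)
        {
            UGameplayStatics::PlaySound2D(this, BeepSound);
        }

        if (++FlashCount >= NumFlashes)
        {
            GetWorldTimerManager().ClearTimer(FlashTimer);
        }
    }

private:
    UPROPERTY(VisibleAnywhere)
    UPointLightComponent* FlashLight = nullptr;

    FTimerHandle FlashTimer;
    int32 FlashCount = 0;
};
```

Drop it at the start of the sequence, and the light flashes give you frame markers in the footage while the beeps give you matching peaks in the recorded waveform.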