How to sync audio with video while rendering?

Hi, this is my first topic, so apologies if I am doing it wrong…

I have a Niagara effect that reacts to an ambient sound placed in the level via a Niagara Module Script (screenshot attached). I want to render the scene in Sequencer or Movie Render Queue, but unfortunately the audio plays in real time while the effect naturally takes a little longer to render, meaning the reactive effect ends up out of sync with the sound.

I am wondering/hoping that there is a way to bake (?) the audio so that it will render in sync with the effect… Or perhaps there is a way to extract the audio spectrum and feed it to the Audio Spectrum node in the module script over the same duration as the song… Would that work, and if so, how might I do it? I am new to Niagara and Blueprints, so any and all help would make me eternally grateful!

No one can help? I really don't want to have to screen record to get it to sync; the quality will be abysmal in comparison.

Either:

  1. Render only the video, and add the audio in a video editing app (like Blender).
  2. Delay the audio track until it syncs up with the effect.

Hi Midgunner66,

Thanks for the reply! The problem with solution 1 is that the effect is directly driven by the audio (the in-game audio makes it move, reacting in real time), so when the render goes out of sync the effect is no longer reacting to the audio and won't render properly.

As for solution 2, do you mean a way to capture the spectrum data and play it back over the duration of the song, in effect letting the Niagara system react to the music without requiring real time, so the render can take however long it takes and remain synced? If so, do you have any idea how? Even some buzzwords I could research would help. Thanks!

Not what I meant, but what you suggested is a great idea. I found some nodes that are similar to the one you’re using in your graph: https://docs.unrealengine.com/en-US/BlueprintAPI/SoundVisualization/index.html. They let you sample audio at a specific time, so you can use that to keep it in sync.
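The idea of sampling the audio at a specific time (rather than capturing it live) can be sketched outside the engine. The snippet below is a minimal, illustrative Python version of the "bake, then look up" approach: precompute one value per render frame, then have the effect read the baked value for the current frame time. All names, rates, and the use of RMS amplitude here are assumptions for illustration, not Unreal API.

```python
import math

SAMPLE_RATE = 48000                      # assumed audio sample rate
FPS = 60                                 # assumed render frame rate
SAMPLES_PER_FRAME = SAMPLE_RATE // FPS   # 800 samples per frame at 60 fps

# Stand-in for a decoded mono track: one second of a 440 Hz tone.
track = [math.sin(2 * math.pi * 440 * n / SAMPLE_RATE)
         for n in range(SAMPLE_RATE)]

def bake_amplitude(samples):
    """Precompute one RMS amplitude per render frame (the 'bake' step)."""
    baked = []
    for start in range(0, len(samples) - SAMPLES_PER_FRAME + 1, SAMPLES_PER_FRAME):
        window = samples[start:start + SAMPLES_PER_FRAME]
        baked.append(math.sqrt(sum(s * s for s in window) / len(window)))
    return baked

baked = bake_amplitude(track)

def amplitude_at(time_seconds):
    """Deterministic lookup: same time in, same value out, however slow the render."""
    index = min(int(time_seconds * FPS), len(baked) - 1)
    return baked[index]
```

In Unreal, the baked values could live in a float curve or array fed to a Niagara user parameter keyed by Sequencer time, so the effect no longer depends on real-time playback.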

Any better solutions to this? I've tried many variations on this from many forum posts. I'm currently working in UE5, but I imagine the issue exists in UE4 too. I'm using NVIDIA ShadowPlay to screen capture, since the delay is even worse in Movie Render Queue… There is literally no audio delay frame by frame; does anyone know why? It's pretty standard in After Effects. We have used a “catch-up” workflow before in real time to make sure our audio doesn't fall behind during packaged builds, but when the reactivity relies on a per-frame basis, that doesn't suffice.

Is there a specific bitrate people use to better align the audio to 60 fps? Or is there a way to update the sync inside the Sequencer? I don't think the solution involves Blueprints, since that would do the same thing the Sequencer is already doing.

I was very optimistic about cutting the 5-minute track into smaller pieces, but overlap happens from the delay that accumulates at the end of each cut.

I'm very sad that this issue has persisted since the release of the audio analysis components introduced in 4.26. There is a lot of real-time potential, but capturing that reactivity is pretty much impossible in this state.

A solution that I'd prefer not to use, but that has proven successful in other real-time applications, is to analyze the audio in a separate application and use OSC to bring the values in. But this would require more time than is worth investing in this project.

Please community! Let there be light in such dark times!

Got it to sync finally: you need to set the Sequencer fps to 60 and then also set 60 in Movie Render Queue. Weird, but required.
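A bit of arithmetic may show why matching the two rates matters (this is an illustrative sketch with an assumed 48 kHz track, not anything from the engine): at 60 fps each frame covers a whole number of audio samples, and a given frame index always maps to the same spot in the song. If the two tools evaluated at different rates, the same frame index would map to a different song time.

```python
SAMPLE_RATE = 48000  # assumed audio sample rate

def samples_per_frame(fps):
    """Audio samples covered by one render frame."""
    return SAMPLE_RATE / fps

def frame_to_song_time(frame_index, fps):
    """Song timestamp that a given frame index corresponds to."""
    return frame_index / fps

# At 60 fps each frame covers exactly 800 samples, and frame 3600 is 60 s in.
even = samples_per_frame(60)              # 800.0
t_matched = frame_to_song_time(3600, 60)  # 60.0 seconds
# If one side evaluated at 30 fps instead, the same frame index would land
# a full minute later in the track.
t_mismatch = frame_to_song_time(3600, 30) # 120.0 seconds
```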

Hey Ruigol!

I was noticing that the audio plays back while using Movie Render Queue… It's definitely not following the frame capture the way it does during playback in the editor. My mind wants to think that's broken, but I didn't wait for the end result to check its alignment in Premiere.

Are you saying i should wait?

Thanks for the reply!

P.S. I assume you're referring to this tutorial?

Or what are your thoughts on cutting up the track? I was having issues there too with overlapping.


Yeah, that's the one; wait and check. It always syncs for me now after that tip, but there is no sound during capture via Movie Render Queue (for me at least), only when capturing via Sequencer. Make sure to use MRQ.


Cool. Since we're here, and I'm working in particles… I need to add a delay to get them into the proper state before the render begins. Are you familiar with setting a ticking buffer in the render queue? I'm really bad with my terminology right now.

I can't remember if something like this already exists on the Niagara system; I think it's in UE4, but I'm not seeing it in 5. Something like “start rendering 100 frames after the simulation begins”…

OK, I just put the buffer in the sequence. Let's see how it goes :crossed_fingers:

Unfortunately this did not work; the audio is not accurately affecting the system during the render.
:cry:

Although the recorded output looks very smooth.

Can anyone from support speak to this please?

A full week, and nothing from support… What's happening here? Is this a dead end? Do we write off audio as not a real-time consideration moving forward?

Hey folks,

We have many audio-focused courses freely available on Unreal Online Learning.
This month, the Quartz Music System course was released. Other audio courses include Audio-Driven Gameplay, Ambient and Procedural Sound Design, Sound and Space, Dynamic Audio, and Understanding Audio Mixing and Effects. There are also courses such as Blueprints and Gameplay for Game Designers that have elements which could be of assistance.

Some other content that you might find of interest: MetaSounds and Quartz | Inside Unreal

It might also be of value to check out Dan Reynolds Audio; they have tonnes of audio-related content on their YouTube channel.

As for Movie Render Queue / Sequencer resources, the Inside Unreal broadcast on the Movie Render Queue enhancements in 4.26, back in January 2021, has some great insight. From the wider community, William Faucher has a YouTube playlist of Movie Render Queue tutorials that I recommend.

As for the issue you may be facing with particle effects and MRQ, it isn't unheard of to add warm-up frames to your render to allow world loading for foliage and even FX. A community member commented on a VFX-freezing issue when using MRQ and suggests adjusting your particle emitter settings.

I hope some of these resources help and point you in the right direction for a solution to the issues you may be facing and give some extra insight into the features. Happy developing!


Thank you SkyeEden!!!

I'll be busy going through this during the weekend.

I appreciate your time compiling these resources; it means a great deal to the community.


For anyone curious,

The only reliable option I found is to externalize the audio analysis and bring the data into Unreal via your favorite communication protocol… I like OSC… This way the systems stay reactive on-frame the entire time. I am keyframing a custom event that sends out a message to start the audio track once my particles are in my preferred rest state. We can then hit play and let the magic happen. The screen capture works as expected: using ShadowPlay, I'm able to capture 2K at 60 fps no problem on the same machine. For 4K capture, we use another machine with a capture card.

This is not an uncommon method in real-time video content; I was only hoping there was a more internal solution. But until one is found, for those looking to avoid long render times, this is a solid approach.
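For anyone trying the external-analysis route, the wire format is simple enough that the sending side can be sketched in pure Python with no OSC library at all. This is a minimal, illustrative sketch: the `/audio/level` address and port 8000 are placeholders you would match to whatever your Unreal OSC server is configured with, and the encoder only handles the single-float case.

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_float_message(address: str, value: float) -> bytes:
    """Encode a one-float OSC message: address, ',f' type tag, big-endian float32."""
    return osc_pad(address.encode()) + osc_pad(b",f") + struct.pack(">f", value)

def send_level(level: float, host: str = "127.0.0.1", port: int = 8000) -> None:
    """Fire one analysis value at the engine over UDP (address/port are assumptions)."""
    msg = osc_float_message("/audio/level", level)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(msg, (host, port))
    sock.close()
```

The external analyzer would call `send_level(...)` once per frame; on the Unreal side, an OSC server dispatch bound to the same address would write the value into a Niagara user parameter.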
