Hi, this is my first topic, so apologies if I'm doing it wrong…
I have a Niagara effect that reacts to an ambient sound placed in the level via a Niagara Module Script (screenshot attached). I want to render the scene in Sequencer or the Movie Render Queue, but unfortunately when doing so the audio plays in real time while the effect naturally takes a little longer to render, meaning the reactive effect ends up out of sync with the sound.
I am wondering/hoping there is a way to bake (?) the audio so that it renders in sync with the effect… Or perhaps there is a way to extract the audio spectrum and feed it to the Audio Spectrum node in the module script over the same duration as the song. Would that work, and if so, how might I do it? I am new to Niagara and Blueprints, so any and all help would make me eternally grateful!
Thanks for the reply! The problem with solution 1 is that the visual effect is directly driven by the audio (the in-game audio makes it move, reacting in real time), which is why, once it goes out of sync, the effect is no longer reacting to the audio and won't render properly.
As for solution 2, do you mean a way to capture the spectrum data and play it back over the duration of the song, in effect letting the Niagara system react to the music without requiring real time, so the render can take however long it takes and still stay synced? If so, do you have any idea how? Even some buzzwords I could research would help, thanks!
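In case it helps anyone who lands here later, this is roughly what I am imagining for the "bake" step: a rough offline sketch (my own guess, using librosa and numpy; the file name, 60 fps, and the 4-band CSV output are all placeholders) that samples the spectrum once per render frame so the values no longer depend on realtime playback:

```python
# pip install librosa numpy
# Sketch: precompute one spectrum snapshot per render frame, offline,
# so the values no longer depend on realtime playback.
import csv
import librosa
import numpy as np

FPS = 60                                   # render frame rate (placeholder)
audio, sr = librosa.load("song.wav", sr=None, mono=True)

hop = sr // FPS                            # one analysis hop per render frame
spectrum = np.abs(librosa.stft(audio, n_fft=2048, hop_length=hop))

# Collapse the bins into a few bands, similar to what a spectrum node exposes
bands = np.array_split(spectrum, 4, axis=0)
levels = np.stack([b.mean(axis=0) for b in bands], axis=1)

# One row per render frame; this could be imported as a curve table later
with open("baked_spectrum.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["frame", "band0", "band1", "band2", "band3"])
    for i, row in enumerate(levels):
        writer.writerow([i, *np.round(row, 5)])
```

The part I still don't know is the cleanest way to read that data back in, e.g. as a curve or a keyed Niagara user parameter in Sequencer, instead of the live Audio Spectrum node.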
Any better solutions to this? I've tried many different variations on this from many forum posts. I'm currently working in UE5, but I imagine the problem exists in UE4 too. I'm using NVIDIA ShadowPlay for screen capture, since the delay is even worse in the Movie Render Queue… there is literally no audio delay frame by frame; does anyone know why? It's pretty standard in After Effects… We have used a "catch-up" workflow before in real time to make sure our audio doesn't fall behind in packaged builds, but when the reactivity relies on per-frame data, that doesn't suffice.
Is there a specific bitrate people use to better align the audio to 60 fps? Or is there a way to update the sync inside the Sequencer? I don't think the solution involves Blueprints, since they would do the same thing the Sequencer is already doing.
I was very optimistic about cutting the 5-minute track into smaller pieces, but overlap creeps in from the delay that accumulates on the piece that is finishing.
I'm very sad that this issue has persisted since the audio analysis components were introduced in 4.26. There is a lot of real-time potential, but capturing the reactivity is pretty much impossible in this state.
A solution I'd rather not use, but which has proven successful in other RT applications, is to analyze the audio in a separate application and use OSC to bring the values into Unreal. But that would require more time than is worth investing in this project.
Please community! Let there be light in such dark times!
I noticed that the audio plays back while using the Movie Render Queue… it's 100% not following the frame capture the way it does during playback in the editor. My mind wants to think that's broken, but I didn't wait for the end result and check its alignment in Premiere.
Are you saying I should wait?
Thanks for the reply!
P.S. I assume you're referring to this tutorial?
Or, what are your thoughts on cutting up the track? I was having issues there too, with overlapping.
Yeah, that's the one: wait and check. It always syncs for me now after that tip, but there is no sound during capture via the Movie Render Queue (for me at least), only when capturing via Sequencer. Make sure to use MRQ.
Cool. Since we're here, and I'm working with particles… I need to add a delay to get them into the proper state before the render begins. Are you familiar with setting a ticking buffer in the render queue? I'm really struggling with terminology right now.
I can't remember if something like this already exists on the Niagara system; I think it's in UE4, but I'm not seeing it in 5. Something like "start capturing 100 frames after the simulation begins"…
A full week, and nothing from support… What's happening here? Is this a dead end? Do we treat audio as not a real-time consideration moving forward?
With the issue you may be facing with particle effects and MRQ, it isn't unheard of to add warm-up frames to your render to allow world loading for foliage and even FX. A community member commented on a VFX freezing issue when using MRQ and suggested adjusting your particle emitter settings.
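If you prefer to set the warm-up from script rather than the MRQ UI, here is a rough Python editor sketch; the class and property names below are from memory, so treat them as assumptions and verify against your engine version:

```python
import unreal

# Grab the Movie Render Queue and its first job (assumes a job already exists)
subsystem = unreal.get_editor_subsystem(unreal.MoviePipelineQueueSubsystem)
job = subsystem.get_queue().get_jobs()[0]

# The warm-up counts live on the anti-aliasing setting block of the job config
config = job.get_configuration()
aa = config.find_or_add_setting_by_class(unreal.MoviePipelineAntiAliasingSetting)

aa.set_editor_property("engine_warm_up_count", 100)  # tick the world ~100 frames first
aa.set_editor_property("render_warm_up_count", 32)   # rendered-but-discarded frames
aa.set_editor_property("use_camera_cut_for_warm_up", True)
```

Separately, the Niagara System asset itself has Warmup settings (Warmup Time / Warmup Tick Count in its System Properties), which may be the "start N frames in" option you were remembering.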
I hope some of these resources help and point you in the right direction for a solution to the issues you may be facing and give some extra insight into the features. Happy developing!
The only reliable option I found is to externalize the audio analysis and bring the data into Unreal via your favorite communication protocol… I like OSC… This way the systems stay reactive on-frame the entire time. I am keyframing a custom event that sends out a message to start the audio track once my particles are in my favorite rest state. We can then hit play and let the magic happen. The screen capture works as expected: using ShadowPlay, I'm able to capture 2K at 60 fps no problem on the same machine. For 4K capture, we are using another machine with a capture card.
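For the curious, the sender side doesn't need to be fancy. Here is a minimal sketch of the kind of external analyzer I mean (assuming the python-osc and soundfile packages; the /audio/level address, port 7000, and the per-frame RMS value are all my own choices, so match them on the Unreal side):

```python
# pip install python-osc soundfile numpy
# Sketch: stream one loudness value per render frame to Unreal over OSC.
import time
import numpy as np
import soundfile as sf
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7000)   # UE OSC server address/port (assumed)

audio, sr = sf.read("track.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)                # downmix to mono

FPS = 60
hop = sr // FPS
start = time.perf_counter()
for n, i in enumerate(range(0, len(audio) - hop, hop)):
    frame = audio[i:i + hop]
    rms = float(np.sqrt(np.mean(frame ** 2)))  # simple per-frame loudness
    client.send_message("/audio/level", rms)   # arbitrary address; match it in UE
    # sleep until the next frame boundary so values arrive at ~60 Hz
    time.sleep(max(0.0, start + (n + 1) / FPS - time.perf_counter()))
```

On the Unreal side, a server created with the OSC plugin can bind that address and push the incoming float into a Niagara user parameter each tick.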
This is not an uncommon method in RT video content; I was only hoping there was a more internal solution. But until one is known, for those looking to stay away from render times, this is a solid approach.