State of Audio in 4.25 - April 2


  • replied
    Hi John, Yeah that particular case wasn't the use-case for Synesthesia. As I said, Synesthesia is intentionally decoupled from SoundWaves. The motivating use case is primarily music visualization, which doesn't have to contend with tons of variations.

    But as I said, we DO have a baked solution that associates analysis data with sound waves. In that case, you can get delegates from an audio component that tells you what sound wave played and what the analysis data was. Check out the Analysis category for sound waves. It has both FFT and Envelope baking options.

    In that case you can call HasCookedAmplitudeEnvelopeData() and HasCookedFFTData() from audio components to determine whether there is baked data.

    Then you can call GetCookedFFTData() and GetCookedEnvelopeData() to retrieve the data in BP directly as the sound plays.

    In your case, based on the other posts you make, you are interested in amplitude envelope data. You can do this without cooked data by just using the OnAudioMultiEnvelopeValue and OnAudioSingleEnvelopeValue audio component BP delegates. These will give you the realtime amplitude envelope.





  • replied
    Thanks for your reply, Minus_Kelvin. That's a ton of great info! I hope you don't mind but I'm going to reiterate my first question because I feel like I maybe wasn't being clear enough:

    In a situation where I'd want to use a soundcue with a random node pointing to many different wav files, at runtime, when the wav is randomly selected by the soundcue, what would be the best way to determine what wav was selected so I can then select the correct associated NRT file to use to drive other blueprint data?

    For instance, let's say I have three explosions being selected by a Random node and they are named thus:

    BOOM_01
    BOOM_02
    BOOM_03

    Let's also say I have these associated NRT files:

    BOOM_01_NRT
    BOOM_02_NRT
    BOOM_03_NRT

    The sound cue is triggered, and BOOM_02 is played. Where can I grab that info so that I can also play BOOM_02_NRT with Synesthesia at the same time?

    I hope this question makes my conundrum more clear... I love the decoupling of the analysis files from the wav files--it's a good choice.

    -jt



  • replied
    Jimmy pointed this comment out to me -- I didn't see it in the forum post here, apologies for the late reply!

    If I wanted to setup a random playback from a list of wavs, what would be the best way to pair the selected wav file with its NRT Loudness file? A Custom struct maybe? What would setup for something like this look like?
    Yeah, we intentionally decoupled the analysis product of Synesthesia from the USoundWave asset.

    The idea is that you may want to load the analysis file independent of the associated audio file. Or, conversely, you may want to *hear* a single file but use many audio assets purely for analysis and never load or play them ever. For example, you might break up a piece of music into different stems or filter or process the audio file differently depending on what you want to analyze. Then at runtime you play a single music file that is the mix you want to hear, but use any number of synesthesia analysis files you want in whatever way.

    Incidentally, Synesthesia isn't the first analysis feature we added. We DO have a built-in analysis thing in our USoundWave asset -- we've had it for over a year! Check out USoundWave properties and look for the "Analysis|FFT" and "Analysis|Envelope" categories. You have been able to bake FFT and envelope data with sound waves for a while. We use this on Fortnite for driving our "audio reactive" emotes. I talked about it at my GDC talk in 2019. I implemented this baked analysis thing in about three days before a hard deadline, when it was determined by important people that the emotes for a particular release/promotion needed reactivity from audio.

    We learned from hard experience that baking the analysis with the audio file was a BAD idea. Having the baked product IN the audio file is bad news and we've regretted it since. So when we set about making a more serious audio analysis plugin (and after we had hired some serious DSP people who know this domain very well), we intentionally decoupled it as I described.

    As for the best way to organize the data, it's up to you. We treat BP as a real scripting language with native UE4, unlike WWise where things are more "blackbox" and "turnkey". It does mean there's a bit of onus on learning and utilizing BP to build the thing you want to build. Basically there's a kind of intermediate API layer where we make the thing do the thing that is really hard (like all the work of DSP analysis) and present it in a way where the typical BP scripter can go to town building really cool stuff. To do otherwise would be to fundamentally restrict the application of the tech and limit use-cases.

    We *could* make some turn-key example content or plugins, but usually we're pressed for time as we are developing these tools for use with a very fast-moving (production-wise) game. We've used Synesthesia in exactly the use case I described for the Fortnite live event with Travis Scott, for example. We're using it for a bunch of stuff for Party Royale, etc. We have some very capable technical sound designers and a dedicated gameplay audio programmer who are working on Fortnite-specific tools and integrations.

    To give you some more direct and concrete advice: if you want to make something like a straightforward thing that drives stuff based on analysis files, I'd say that what you suggested is definitely a way to go. Make a BP struct (or BP class) that has the sound you want to play and the analysis files you want to associate, and write some methods or functions to play the sound and then trigger visuals or do whatever delegates/callbacks you want to interact with other systems (gameplay, physics, vfx, etc).
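An engine-free sketch of that struct approach, using plain C++ as a stand-in for a BP struct (the names here are made up for illustration, not engine types). Because the random pick happens in your own code rather than inside a SoundCue's Random node, you always know which wave was chosen and can kick off its paired NRT file at the same time:

```cpp
#include <cstdlib>
#include <string>
#include <vector>

// Stand-in for a BP struct pairing a sound wave with its analysis asset,
// e.g. "BOOM_01" with "BOOM_01_NRT".
struct SoundAnalysisPair {
    std::string SoundWaveName;
    std::string AnalysisName;
};

// Pick a random entry, mimicking a Random node, and return both the
// sound to play and the analysis file to drive visuals with.
const SoundAnalysisPair& PickRandomPair(const std::vector<SoundAnalysisPair>& Bank,
                                        unsigned Seed) {
    std::srand(Seed);
    return Bank[static_cast<std::size_t>(std::rand()) % Bank.size()];
}
```

In BP terms: build an array of these structs once, pick an index with a random-integer node, then play the sound and start the NRT analysis playback from the same selected entry.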

    Finally, on this topic, you may be more interested in real-time analysis vs baked analysis in general. There is yet another method for analysis you can try out with both AudioComponents and with Submixes. Both allow you to do realtime analysis of audio amplitude envelopes, which it sounds like you are interested in. They don't do the perceptual loudness stuff that Synesthesia does, but they do use the same envelope follower DSP algorithm that would be used in a dynamics processor (compressor, etc). Those are all realtime and very, very easy to hook into things like driving visuals (lightning strikes, etc).
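For reference, the classic attack/release envelope follower mentioned above (the dynamics-processor building block) can be sketched in a few lines. This is a generic textbook version, not the engine's actual implementation; the class name and coefficient math are illustrative:

```cpp
#include <cmath>

// One-pole attack/release envelope follower: rises quickly on
// transients (attack) and decays slowly afterwards (release).
class EnvelopeFollower {
public:
    EnvelopeFollower(float SampleRate, float AttackMs, float ReleaseMs)
        : AttackCoef(std::exp(-1.0f / (SampleRate * AttackMs * 0.001f))),
          ReleaseCoef(std::exp(-1.0f / (SampleRate * ReleaseMs * 0.001f))),
          Envelope(0.0f) {}

    // Process one audio sample and return the current envelope value.
    float Process(float Sample) {
        const float Rectified = std::fabs(Sample);
        const float Coef = (Rectified > Envelope) ? AttackCoef : ReleaseCoef;
        Envelope = Coef * Envelope + (1.0f - Coef) * Rectified;
        return Envelope;
    }

private:
    float AttackCoef;
    float ReleaseCoef;
    float Envelope;
};
```

The per-buffer output is exactly the kind of smooth 0-to-peak signal you'd route into a light intensity or Niagara parameter for lightning-style effects.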

    What is the purpose of the AudioAnalysisBank, NDIAudioAssetBank, and NDIAudioAssetInfo files?
    I'm looking in the code and I don't see these files. NDI stands for "Niagara Data Interface", so my assumption is these are related to the Niagara work. I knew this was going to be confusing, but the realtime visualization work we did for the Niagara Data Interface wasn't done in the Synesthesia plugin. It's technically not part of Synesthesia, though it uses some of the same DSP analysis algorithms, albeit in realtime. The NDI work was done very late before 4.25 released and we didn't have time to package it within Synesthesia, which is where we *intend* to put analysis features, real-time and non-realtime. I want to refactor it at some point but it's been deprioritized.

    I'll ask the people who worked on the NDI stuff if they know about these asset types you are referring to.





  • replied
    sadness...



  • replied
    bump...

    Dannthr any chance you could give me a hint here? Thanks!



  • replied
    Hello! Thanks for the great stream and the project files. I'm very excited about the implied possibilities. A couple of questions:

    What is the purpose of the AudioAnalysisBank, NDIAudioAssetBank, and NDIAudioAssetInfo files?
    If I wanted to setup a random playback from a list of wavs, what would be the best way to pair the selected wav file with its NRT Loudness file? A Custom struct maybe? What would setup for something like this look like?

    I messed around today and got a cool effect going where the NRT Loudness from some thunder sfx is driving the intensity of a directional light to create a lightning effect--humble, but a cool first go. Having to pair the NRT file with the correct wav is a stumbling block for sure though... Thanks for any insight or tips you can give!

    -jt
    Last edited by JohnTennant; 06-22-2020, 06:11 PM.



  • replied
    Originally posted by VictorLerp View Post
    You'll find an updated version of the project files here. The link can also be found in the announcement post.
    Thanks Victor.



  • replied
    You'll find an updated version of the project files here. The link can also be found in the announcement post.



  • replied
    Originally posted by Markus68er View Post
    Hi,

    Really enjoyed the stream. Thanks a lot for all the given information. I am wondering when the examples you showed will be available.

    Best,

    Markus
    Same, also wondering when the examples will be available?

    Thanks



  • replied
    Hi,

    Really enjoyed the stream. Thanks a lot for all the given information. I am wondering when the examples you showed will be available.

    Best,

    Markus



  • replied
    Originally posted by Derjyn View Post

    Looks like you have a virus, potentially, and that's on your end most likely:



    https://www.microsoft.com/en-us/wdsi...32/Beastdoor.S
    https://www.microsoft.com/en-us/wdsi...32/Beastdoor.L
    Time to make games for Linux :P



  • replied
    Originally posted by DevJMD View Post
    I do have an issue, though, and I'm sure it's just a false-positive in my AV.
    Looks like you have a virus, potentially, and that's on your end most likely:



    https://www.microsoft.com/en-us/wdsi...32/Beastdoor.S
    https://www.microsoft.com/en-us/wdsi...32/Beastdoor.L



  • replied
    Originally posted by VictorLerp View Post
    Hey all, just want to let you know that this stream will happen at a later date. We will update the title and thread once a new date has been set.
    Thanks, was just looking for it on YT and was wondering whether the stream got postponed or canceled.



  • replied
    Hey all, just want to let you know that this stream will happen at a later date. We will update the title and thread once a new date has been set.



  • replied
    Originally posted by g1i7chp37s View Post

    depends on replication settings yeah?
    No, there's only one listener locally. Sound playback is processed locally; the server only sends an order for what should be played.

