Audio Synesthesia


    #16
    Originally posted by renderman.pro View Post
    Haha, all audio guys in one topic. Did you try it with Niagara? @cannabis.cod3r
    Not yet, holding out for realtime analysis.

    saunassa Thanks for the tip, but re-routing virtual sound cards seems like a burden to put on end-users. I still think an audio capture component with WASAPI loopback mode would be ideal (for Windows obviously).
    Let me know if you want some user testing and/or Niagara visualizations for a demo (although I'd have to run it by my employer first).
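
    For what it's worth, "WASAPI loopback mode" boils down to opening the default render endpoint (the speakers) as a shared-mode capture stream with the AUDCLNT_STREAMFLAGS_LOOPBACK flag. Here's a bare-bones, Unreal-agnostic sketch of what such a capture component would have to do internally; error handling is omitted and the buffer size and polling interval are arbitrary placeholders, so treat it as an illustration rather than production code.

    Code:
// Minimal WASAPI loopback capture sketch (Windows only, link with ole32.lib).
// Not plugin code - just what it takes to hand "whatever the speakers are
// playing" to an analyzer.
#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>

int main()
{
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);

    IMMDeviceEnumerator* Enumerator = nullptr;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void**)&Enumerator);

    // Loopback opens the default *render* endpoint, not a microphone.
    IMMDevice* Device = nullptr;
    Enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &Device);

    IAudioClient* Client = nullptr;
    Device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr, (void**)&Client);

    WAVEFORMATEX* MixFormat = nullptr;
    Client->GetMixFormat(&MixFormat);

    // AUDCLNT_STREAMFLAGS_LOOPBACK is the whole trick: the render mix becomes a capture stream.
    Client->Initialize(AUDCLNT_SHAREMODE_SHARED, AUDCLNT_STREAMFLAGS_LOOPBACK,
                       10000000 /* 1 second buffer, in 100 ns units */, 0, MixFormat, nullptr);

    IAudioCaptureClient* Capture = nullptr;
    Client->GetService(__uuidof(IAudioCaptureClient), (void**)&Capture);
    Client->Start();

    for (;;)  // Runs until killed; a real component would expose start/stop.
    {
        UINT32 PacketFrames = 0;
        Capture->GetNextPacketSize(&PacketFrames);
        while (PacketFrames > 0)
        {
            BYTE* Data = nullptr;
            UINT32 Frames = 0;
            DWORD Flags = 0;
            Capture->GetBuffer(&Data, &Frames, &Flags, nullptr, nullptr);
            // Data is interleaved audio in the shared mix format (usually float32);
            // this is where samples would be pushed into a real-time analyzer.
            Capture->ReleaseBuffer(Frames);
            Capture->GetNextPacketSize(&PacketFrames);
        }
        Sleep(5);
    }
}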



      #17
      Hey guys - I'm following the documentation here: https://docs.unrealengine.com/en-US/...sia/index.html

      I can't for the life of me get the example to work - I can't seem to find a version of the "Get Normalized Channel Loudness at Time" node that's blue with Exec pins

      I'm new to blueprints so I'm sure it's something obvious in retrospect hahaha - could anyone help me out?



        #18
        JohnnyHalcyon It's likely you haven't set the variable type. Click on the "Loudness Analyser" variable and check the Details panel. Be sure the type is set to LoudnessNRT.
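
        In case a C++ reference point helps: the fix really is just the property type. Below is a rough equivalent of the documented Blueprint setup, assuming the 4.24 AudioSynesthesia plugin; the header path and the exact C++ signature of GetNormalizedChannelLoudnessAtTime are inferred from the Blueprint node name, so double-check them against the plugin source.

        Code:
// Hypothetical actor showing the typed ULoudnessNRT property and the analysis call.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "LoudnessNRT.h"              // AudioSynesthesia plugin (include path assumed)
#include "LoudnessProbe.generated.h"

UCLASS()
class ALoudnessProbe : public AActor
{
    GENERATED_BODY()

public:
    // This is the point of post #18: the variable must be typed as LoudnessNRT,
    // not a generic Object, or the loudness nodes never show up in the context menu.
    UPROPERTY(EditAnywhere, Category = "Audio")
    ULoudnessNRT* LoudnessAnalyser = nullptr;

    float QueryLoudness(float PlaybackSeconds)
    {
        float Loudness = 0.0f;
        if (LoudnessAnalyser)
        {
            // Same call the "Get Normalized Channel Loudness at Time" node makes.
            LoudnessAnalyser->GetNormalizedChannelLoudnessAtTime(PlaybackSeconds, /*Channel*/ 0, Loudness);
        }
        return Loudness; // Normalized to [0, 1] across the analyzed sound.
    }
};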



          #19
          Originally posted by saunassa View Post
          Hey Everyone, I've been the main man working on Audio Synesthesia. Super glad to see you all interested! I want to give you some quick notes on what's available in 4.24 and what we have planned for the near future.

          Audio Synesthesia In 4.24

          Audio Synesthesia is in a pretty early beta stage. As it stands in 4.24 there will be support for baked analysis (aka Non Real-time) and it will generate:
          • perceptual loudness over time
          • constant Q over time (Constant Q is like a spectrogram except that the frequencies are spaced more like notes in a scale; there's a small illustration of this after the list)
          • audio onsets over time
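
          To make the Constant Q note above concrete: the analysis bands are spaced geometrically (equal frequency ratios), so with 12 bands per octave each band lands on a semitone, unlike the linearly spaced bins of an ordinary FFT spectrogram. A tiny standalone illustration follows; the starting frequency and band counts here are made up, not the plugin's defaults.

          Code:
// Why "constant Q" looks like notes on a scale: band centers form a geometric series.
#include <cmath>
#include <cstdio>

int main()
{
    const float MinFrequency   = 40.0f; // Hz, lowest analyzed band (assumed)
    const int   BandsPerOctave = 12;    // 12 bands per octave == one band per semitone
    const int   NumBands       = 48;    // four octaves

    for (int Band = 0; Band < NumBands; ++Band)
    {
        // Each band is 2^(1/12) times the previous one, exactly like musical semitones.
        const float CenterHz = MinFrequency * std::pow(2.0f, static_cast<float>(Band) / BandsPerOctave);
        std::printf("band %2d: %.1f Hz\n", Band, CenterHz);
    }
    return 0;
}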
          The analysis assets keep references to the sounds that created the underlying analysis data, but there's nothing that forces them to be used together. In blueprints you can pick whichever asset you'd like to use as a data source and sync it to whatever sound you'd like, or not sync it to any sound at all.
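
          And a quick sketch of that "sync it to whatever sound you like" point: the analysis asset only needs a playback time, so a visualizer typically accumulates its own clock while an audio component plays. The names here (AMyVisualizer, AudioComponent, LoudnessAnalyser, PlaybackTime) are assumed members of a hypothetical actor, and GetNormalizedLoudnessAtTime is the 4.24 call as I understand it, so verify against the plugin.

          Code:
// Inside a hypothetical AMyVisualizer actor. LoudnessAnalyser (ULoudnessNRT*),
// AudioComponent (UAudioComponent*) and PlaybackTime (float) are assumed members.
void AMyVisualizer::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    if (!LoudnessAnalyser || !AudioComponent || !AudioComponent->IsPlaying())
    {
        return;
    }

    // The engine doesn't hand you a playback cursor for this, so keep your own
    // clock and reset it to 0 whenever playback (re)starts.
    PlaybackTime += DeltaSeconds;

    float Loudness = 0.0f;
    LoudnessAnalyser->GetNormalizedLoudnessAtTime(PlaybackTime, Loudness);

    // Drive anything with the value; uniform scale is just the simplest visible effect.
    SetActorScale3D(FVector(1.0f + Loudness));
}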

          Audio Synesthesia In The Future

          Audio Synesthesia in the future will absolutely support real-time analysis for any algorithms that can be done in real time. That means loudness and constant Q will for sure be in there. Onsets will possibly make it in too, with some modifications from the baked version.

          It will also have some usability updates and new analysis algorithms.

          On my short list of analyzers to add are
          • Beat Grid - This is only a non-real-time analyzer, but it's great for syncing animations to musical clips. It will fire off regularly spaced blueprint events just like you were tapping your foot to the music. (A quick sketch of the idea follows this list.)
          • Peak partial tracking - This will break down a sound into a set of 100 or so tones with amplitudes and trajectories
          • A regular old spectrogram
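
          Since the Beat Grid analyzer doesn't exist yet, here is the sketch promised above of just the idea: a tempo turned into regularly spaced event times, with an event fired whenever playback crosses the next beat. This is plain math, not the plugin's API.

          Code:
// Beat-grid illustration (not plugin code): fire one event per beat as time advances.
#include <cmath>
#include <cstdint>

struct FBeatGrid
{
    float   Bpm      = 120.0f; // Tempo of the analyzed clip (assumed known or baked).
    int32_t LastBeat = -1;     // Index of the last beat we fired an event for.

    // Call every frame with the current playback time; returns true once per beat.
    bool Tick(float PlaybackSeconds)
    {
        const float SecondsPerBeat = 60.0f / Bpm;
        const int32_t Beat = static_cast<int32_t>(std::floor(PlaybackSeconds / SecondsPerBeat));
        if (Beat != LastBeat)
        {
            LastBeat = Beat;
            return true; // The "tap your foot" moment; kick off an animation here.
        }
        return false;
    }
};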

          If there's anything you'd like to see in future versions, let me know!

          Hi,
          This plugin would be very useful, but unfortunately it cannot analyze a USoundWave at runtime.

          I would really like to use this plugin, since it is very well designed and made. Do you know when it will be possible to use it with USoundWave assets that were not imported in the editor but are created during the game (in real time)?

