Audio Synesthesia


  • replied
Originally posted by renderman.pro
    Haha, all audio guys in one topic. Did you try it with Niagara? @cannabis.cod3r
    Not yet, holding out for realtime analysis.

    saunassa Thanks for the tip, but re-routing virtual sound cards seems like a burden to put on end-users. I still think an audio capture component with WASAPI loopback mode would be ideal (for Windows obviously).
    Let me know if you want some user testing and/or Niagara visualizations for a demo (although I'd have to run it by my employer first).



  • replied
cannabis.cod3r Just a heads up, real-time analysis is in the works. As for sampling audio off the sound card, it's definitely doable, but it requires you to do the routing yourself. I think the setup is:

    1. Route your system audio output in your OS to a virtual sound card.
    2. Then have that virtual sound card send the audio to a mic input in UE4.
    3. Mute output of UE4 audio submix to avoid digital feedback.

I haven't tried that solution myself, but I've heard it works for others.



  • replied
ArthurBarthur I'm glad the ConstantQNRTSettings worked out for you. My guess is that it was either a blip that occurred when calculating the ConstantQNRT or some edge case in the settings. For future reference, you can run into a situation where the ConstantQ is looking for frequencies that don't exist in the audio file.

For instance, at a sample rate of 48 kHz, the maximum frequency your audio file can contain (the Nyquist frequency) is 24 kHz. If your settings then look at, say, 100 bands, the upper bands may extend beyond 24 kHz and will never have any energy to report.
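That ceiling is easy to sanity-check by hand. Here's a minimal sketch in plain Python (not the plugin's API; the starting frequency, bands-per-octave, and band count below are made-up stand-ins for ConstantQNRTSettings values):

```python
import math

def constant_q_centers(f_min, bands_per_octave, num_bands):
    """Center frequencies spaced like notes: each band sits 2**(1/bands_per_octave) above the last."""
    ratio = 2.0 ** (1.0 / bands_per_octave)
    return [f_min * ratio ** k for k in range(num_bands)]

def bands_above_nyquist(centers, sample_rate):
    """Bands whose center frequency exceeds sample_rate / 2 can never contain energy."""
    nyquist = sample_rate / 2.0
    return [f for f in centers if f > nyquist]

centers = constant_q_centers(f_min=40.0, bands_per_octave=12, num_bands=120)
dead = bands_above_nyquist(centers, sample_rate=48_000)
print(len(dead))  # -> 9 bands that will always read 0 for 48 kHz audio
```

So a settings combination that looks harmless can quietly put the top handful of bands past Nyquist, which matches the "stuck at 0" symptom described earlier in the thread.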



  • replied
Originally posted by ArthurBarthur
Question about the constantQ bit! Tried to make it have more than 48 bands (added octaves, so tried 60, 72...), but there don't seem to be any updated values coming out of the extra bands. It self-reports having more bands, but each band seems to be stuck at a 0 value.
    I also tried changing starting frequency, but couldn't really see a difference then either.
    Doing changes works again if I create a new ConstantQNRTSettings asset and hook that up to the existing related assets. So maybe just that one old settings asset I was using for a while had been corrupted or something.



  • replied
    Haha, all audio guys in one topic. Did you try it with Niagara? @cannabis.cod3r



  • replied
Question about the constantQ bit! Tried to make it have more than 48 bands (added octaves, so tried 60, 72...), but there don't seem to be any updated values coming out of the extra bands. It self-reports having more bands, but each band seems to be stuck at a 0 value.
    I also tried changing starting frequency, but couldn't really see a difference then either.



  • replied
I think the Synesthesia plugin is 'just' non-realtime (NRT) for now?
Here's a quick preview of what I'm doing with it; everything is baked and available! Some of the analysis tools have "get [analysis type] at time" nodes as access points in Blueprints. To bake that for myself, I made a little machine in the construction script that incrementally reads the data every few milliseconds and puts the values and times into a dataset. https://youtu.be/Vi6MfUjyRpc
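That incremental-read trick can be sketched outside UE4 like this (plain Python; get_loudness_at_time is a hypothetical stand-in for a "Get Loudness At Time" style node, not the plugin's actual API):

```python
# Fake analysis curve standing in for a "Get ... At Time" node: a loudness
# envelope that peaks at t = 1 s and falls off linearly on either side.
def get_loudness_at_time(t):
    return max(0.0, 1.0 - abs(t - 1.0))

def bake(get_value, duration, step):
    """Incrementally read the analysis every `step` seconds and store (time, value) pairs."""
    samples = []
    t = 0.0
    while t <= duration:
        samples.append((t, get_value(t)))
        t += step
    return samples

baked = bake(get_loudness_at_time, duration=2.0, step=0.25)
print(len(baked))  # -> 9 samples at t = 0.0, 0.25, ..., 2.0
```

The construction-script version is the same loop in Blueprint form: step a time cursor, read the node, append time and value to an array.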


[Attached image: ExtraBaked.png]
    Last edited by ArthurBarthur; 12-10-2019, 03:21 AM.



  • replied
Originally posted by saunassa
If there's anything you'd like to see in future versions, let me know!
Something not real-time as well: for example, I'd like to sample an existing audio file at a specific time (loudness at specific frequencies, or the spectrum). Currently we have baked data, but there's no more or less direct way to access it. If I could sample the audio file, I could 'predict' an event and do something in advance before it happens.
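Once baked data is exposed, the "predict in advance" idea could look roughly like this (plain Python sketch; the baked list, its timestamps, and the 0.5 s lookahead are all illustrative, not plugin API):

```python
import bisect

def sample_baked(baked, t):
    """Return the baked value at or just before time t (baked = sorted (time, value) pairs)."""
    times = [p[0] for p in baked]
    i = bisect.bisect_right(times, t) - 1
    return baked[max(i, 0)][1]

# Baked loudness: quiet, quiet, loud spike at t = 1.0 s, quiet again.
baked = [(0.0, 0.1), (0.5, 0.2), (1.0, 0.9), (1.5, 0.3)]

# Look ahead 0.5 s from "now" so a reaction can start before the loud moment arrives.
now = 0.5
upcoming = sample_baked(baked, now + 0.5)
print(upcoming)  # -> 0.9, the loudness at t = 1.0
```

With random access like this, an animation can ramp up ahead of a hit instead of reacting after it.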



  • replied
Hello saunassa, nice road map. Is there a doc somewhere that explains how to use this? I couldn't find a way to test it. And since it's baked, how would you sync it with a cue? Thanks.



  • replied
    I'm in the business of music visualizations (for live DJs) and there are 2 features that I could really use:
    1. Real-time analysis with events dispatched to Blueprints so I'm able to respond to amplitudes in a range of frequencies that I'm interested in.
    2. Being able to analyze whatever is playing on the user's sound card. That means *anything* that is currently playing, from any application. This can be accomplished using loopback mode in WASAPI, a UE4 demo of which can be found here. (If you want to build the demo project, you'll need to wrap any calls to Windows headers in #include "PreWindowsApi.h" and #include "PostWindowsApi.h"). I was thinking this could be implemented in a similar manner to the AudioCaptureComponent but instead of using a microphone input, it would use WASAPI loopback.
    Those are two critical features for me at the moment. saunassa Thanks for your response, much appreciated.



  • replied
    Hey Everyone, I've been the main man working on Audio Synesthesia. Super glad to see you all interested! I want to give you some quick notes on what's available in 4.24 and what we have planned for the near future.

    Audio Synesthesia In 4.24

    Audio Synesthesia is in a pretty early beta stage. As it stands in 4.24 there will be support for baked analysis (aka Non Real-time) and it will generate:
    • perceptual loudness over time
    • constant Q over time (Constant Q is like a spectrogram except that the frequencies are spaced more like notes in a scale)
    • audio onsets over time
    The analysis assets keep references to the sounds that created the underlying analysis data, but there's nothing that forces them to be used together. In blueprints you can pick whichever asset you'd like to use as a data source and sync it to whatever sound you'd like, or not sync it to any sound at all.

    Audio Synesthesia In The Future

Audio Synesthesia in the future will absolutely support real-time analysis for any algorithms that can be done in real time. That means loudness and constant Q will for sure be in there. Onsets may make it in too, with some modifications from the baked version.

    It will also have some usability updates and new analysis algorithms.

On my short list of analyzers to add are:
• Beat Grid - This one is non-real-time only, but it's great for syncing animations to musical clips. It will fire off regularly spaced Blueprint events, just as if you were tapping your foot to the music.
• Peak partial tracking - This will break down a sound into a hundred or so tones with amplitudes and trajectories
    • A regular old spectrogram
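Just to illustrate the Beat Grid timing idea, here is a minimal sketch (plain Python; the BPM and duration are made-up numbers, and a real analyzer would first have to detect the tempo and downbeat from the clip):

```python
def beat_grid(bpm, duration):
    """Regularly spaced beat times in seconds, like tapping your foot to the music."""
    interval = 60.0 / bpm  # seconds per beat
    times = []
    t = 0.0
    while t < duration:
        times.append(round(t, 6))
        t += interval
    return times

beats = beat_grid(bpm=120, duration=2.0)
print(beats)  # -> [0.0, 0.5, 1.0, 1.5]
```

Each entry would correspond to one Blueprint event fired during playback, which is what makes it handy for syncing animations.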

If there's anything you'd like to see in future versions, let me know!



  • replied
Originally posted by cannabis.cod3r
    It looks like you add a new asset type and associate it with a sound, so it's likely baked. Not entirely sure what the setup is supposed to be though...
    I noticed this too and thought the same, but I'm hoping there might be a way to swap that audio asset out in game. If anyone has any ideas how to make use of this feature I would be really interested to know.



  • replied
    It looks like you add a new asset type and associate it with a sound, so it's likely baked. Not entirely sure what the setup is supposed to be though...



  • replied
    edit: I have no clue, didn't even know about this plugin before this
    Last edited by ArthurBarthur; 11-07-2019, 04:09 PM.



  • replied
I'm assuming it's real-time, and it's safe to assume that events are fired from the audio thread to the game thread at the nearest or next frame.

