New Audio Engine: Early Access Quick-Start Guide


  • replied
    Originally posted by ConcreteGames View Post


    I just started working with the synth modules again (both modular and granular), and I still get a Mac editor freeze when using a Blueprint actor in a scene with a synth activated and started in it. When playing in editor in the viewport and stopping with the Esc key, the editor freezes. BUT when playing from a Blueprint window, it's fine. Which is very strange!

    Thanks for the heads-up, ConcreteGames; I'll see if I can repro this issue. Did you make sure you're running the New Audio Engine?

    Leave a comment:


  • replied

    Originally posted by dan.reynolds View Post

    Hi Tomavatars! Thanks for the report; I'll talk to Ethan and see if he has an idea about this!
    I just started working with the synth modules again (both modular and granular), and I still get a Mac editor freeze when using a Blueprint actor in a scene with a synth activated and started in it. When playing in editor in the viewport and stopping with the Esc key, the editor freezes. BUT when playing from a Blueprint window, it's fine. Which is very strange!


    Leave a comment:


  • replied
    Originally posted by Elvince View Post
    Hi,

    I may have missed some information, but is the new Audio engine still in Beta? If yes, any final date in mind?

    Thanks,
    Hi Elvince, it is still in Early Access. We are currently working toward shipping it with one of our major titles--it's important to us to have road-tested it on an internally shipped title before officially switching over.

    With that said, a few titles out in the wild are already using it, including the soon-to-be-released game, A Way Out.

    Leave a comment:


  • replied
    Originally posted by Shempii View Post

    Hey Dan, thank you for the reply!

    As a quick hack I was able to do just as you suggested, using two different synths to make a kick.

    When you say it gets a bit weedy, do you mean that the patch I was trying to achieve isn't possible due to one bug or another? Or are there further advanced steps required to patch individual oscillator gain/freq/etc.? If it's a bug, no big deal. But if you have a solution, I would really appreciate it if you would share some details on how to make that work.

    I'm nitpicking, but really, great work on this! I'm having fun building out wacky sound contraptions. Thanks a lot!
    The advanced patches may have bugs; they're not well tested because of the sheer number of possible combinations.
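    If it helps, here's a rough C++ sketch of how the two-synth kick can look. The class name, member names, and envelope values are made up for illustration, and the UModularSynthComponent calls come from the Synthesis plugin, so double-check the signatures against your engine version:

        // Rough sketch: a kick built from two UModularSynthComponents.
        // BodySynth and ClickSynth are assumed UModularSynthComponent*
        // members created elsewhere; envelope times are in milliseconds.
        #include "SynthComponents/EpicSynth1Component.h"

        void AKickDrum::SetupSynths()
        {
            // Body: a pure sine with a fast decay and no sustain.
            BodySynth->SetOscType(0, ESynth1OscType::Sine);
            BodySynth->SetAttackTime(1.0f);
            BodySynth->SetDecayTime(150.0f);
            BodySynth->SetSustainGain(0.0f);

            // Click: a very short noise burst layered on top.
            ClickSynth->SetOscType(0, ESynth1OscType::Noise);
            ClickSynth->SetAttackTime(1.0f);
            ClickSynth->SetDecayTime(20.0f);
            ClickSynth->SetSustainGain(0.0f);
        }

        void AKickDrum::PlayKick()
        {
            // Note is a MIDI note number; duration is in seconds.
            BodySynth->NoteOn(36.0f, 127, 0.25f);
            ClickSynth->NoteOn(60.0f, 100, 0.05f);
        }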

    Leave a comment:


  • replied
    Originally posted by dan.reynolds View Post

    Hi Arj!

    Yeah, the patch system can get a bit weedy. When I made my drum kit for our GDC floor demo, I conceded to having two synthesizers per kit piece. A bit pricier, but it was way easier to program.
    Hey Dan, thank you for the reply!

    As a quick hack I was able to do just as you suggested, using two different synths to make a kick.

    When you say it gets a bit weedy, do you mean that the patch I was trying to achieve isn't possible due to one bug or another? Or are there further advanced steps required to patch individual oscillator gain/freq/etc.? If it's a bug, no big deal. But if you have a solution, I would really appreciate it if you would share some details on how to make that work.

    I'm nitpicking, but really, great work on this! I'm having fun building out wacky sound contraptions. Thanks a lot!

    Leave a comment:


  • replied
    Hi,

    I may have missed some information, but is the new Audio engine still in Beta? If yes, any final date in mind?

    Thanks,

    Leave a comment:


  • replied
    Sweet! It'll be from a mic or from the output of a DAW. I'll check out VoiceMeeter.

    Leave a comment:


  • replied
    No need for C++, really. I saw local mic capture with an envelope (amplitude, not different frequencies) in the 4.19 changelog. You can use the older visualization plugin to get different frequency values, or set up your own little machine that does it with the tools and effects in the new audio engine.
    What kind of audio are you going for to drive it? If it's OS audio and the mic is working, you can always virtually route PC audio through to the mic "input" with programs like VoiceMeeter. Beware of conversion to mono and other mic eccentricities.
    There's probably already a better way to do all this, I forget...
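    On the visuals side, the "machine" can be tiny once you have an amplitude value from somewhere. A hypothetical sketch, where GetEnvelopeValue() stands in for your envelope source (Envelope Follower, visualization plugin, whatever) and DynamicMaterial is a dynamic material instance with a "Color" vector parameter:

        // Sketch: drive an actor's color from an audio amplitude envelope.
        void AJellyfish::Tick(float DeltaSeconds)
        {
            Super::Tick(DeltaSeconds);

            const float Envelope = FMath::Clamp(GetEnvelopeValue(), 0.0f, 1.0f);

            // Fade from dim blue to bright cyan as the signal gets louder.
            const FLinearColor Color = FMath::Lerp(
                FLinearColor(0.0f, 0.0f, 0.2f),
                FLinearColor(0.0f, 1.0f, 1.0f),
                Envelope);

            DynamicMaterial->SetVectorParameterValue(TEXT("Color"), Color);
        }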

    Leave a comment:


  • replied
    Originally posted by ArthurBarthur View Post
    Not Dan here. Do you have the spoken dialogue ready as an audio file, or do you need it to react to the user's voice, live? If it's audio files, you can do it in Blueprints: set up the 'Envelope Follower' source effect. Instructions are in the first or second post of this thread.
    Live voice is trickier (for now... dun-dun-duuun), but if you are cool with C++ you can do it.

    Have fun!
    What kind of C++ magic would it take to make this work? I know enough to cobble things together, and I'm planning out a visual installation using projection mapping in a VR cave: jellyfish swim around a tank, and I want to drive the colors of the jellies from live audio (smaller jellies are mapped to higher frequencies, medium jellies respond to mid-range, and large jellies respond to low frequencies). I have 4.19 set up now to work with Omnidome for projection mapping. Thanks!

    Leave a comment:


  • replied
    Hi,

    Excited to get into the stuff in the new audio engine. I have a couple questions involving the best way to build a music system in BP that I think tie into that.

    Currently we are on UE4.17 and planning to jump to 4.19 when it’s out. I note that timing stuff was covered in this thread back around post #73 from @drfzjd.

    Probably the most critical timing thing for me is tracking playback time of a music file, and stopping it at designated “exit points” where we then play/stitch an “ending stinger” Cue.

    To track timing for the currently playing music cue, we are multiplying % of Cue’s progress by its duration. So for instance 43% complete * 1:12.434. We have a binding from the audio component’s OnAudioPlaybackPercent event to multiply the Percent float that it outputs by the duration of the sound cue (https://docs.unrealengine.com/latest...aybackPercent/).
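    In C++ terms the binding amounts to something like this (our real setup is in Blueprints; class and member names here are placeholders):

        // Sketch: derive elapsed music time from OnAudioPlaybackPercent.
        // MusicComponent is a UAudioComponent*; bind once, e.g. in BeginPlay.
        MusicComponent->OnAudioPlaybackPercent.AddDynamic(
            this, &AMusicManager::HandlePlaybackPercent);

        void AMusicManager::HandlePlaybackPercent(
            const USoundWave* PlayingSoundWave, const float PlaybackPercent)
        {
            // Elapsed time = percent complete * total duration (seconds).
            CurrentMusicTime = PlaybackPercent * PlayingSoundWave->Duration;
        }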

    This brings me to my first question: Is this the most accurate way to monitor a music Cue’s time?


    Also, I just watched the “Procedural Audio in the new Unreal Audio Engine” video from May of last year. At about 43 minutes in, Aaron mentions that he addressed some stuff where the old audio engine was not queueing up and executing events at the same time.

    Next question: he mentions this was done for 4.16, but is it part of the new audio engine that you have to enable, or part of the default one at this point?


    Ultimately I’m hoping to be able to stop a track and play an ending stinger with <20ms of latency, so not exactly “sample accuracy”. Still testing, but we may already be there. One thing that appeared to cause the end stinger cues to play late is the game requesting to stop the current Cue when the next exit point is too close. After some experimentation, it looks like it’s best to skip an exit point and go to the next one if it’s <0.5 seconds after the request.
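    The exit-point rule we ended up with looks roughly like this (names are placeholders; ExitPoints is a sorted array of times in seconds within the track):

        // Sketch: pick the first exit point far enough away to react to.
        float AMusicManager::ChooseExitPoint(const float RequestTime) const
        {
            const float MinLeadTime = 0.5f; // skip exits closer than this

            for (const float ExitTime : ExitPoints)
            {
                if (ExitTime >= RequestTime + MinLeadTime)
                {
                    return ExitTime;
                }
            }

            // No usable exit point left; fall back to the end of the track.
            return TrackDuration;
        }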


    Final question(s):

    If we switched to the new audio engine now with 4.17:
    • Are things pretty much the same, stability-wise if we aren’t using any of the new plugins?
    • Will existing audio related BP or Sound Cue nodes change in functionality at all?
    Thanks

    Leave a comment:


  • replied
    Any way of getting a SynthComponent to output its audio through an ASIO audio device?

    Leave a comment:


  • replied
    Originally posted by mountainking View Post
    Hey Dan! First of all, awesome work. It's amazing to see Epic putting more and more resources into audio development. I'm currently working on some kind of audio visualization, for which I need to get the frequencies of the played audio. I'm basically trying to map my sound frequencies to color values. However, when I'm using the "Compute Frequency Spectrum" node, which I think was developed on an Epic Friday and isn't documented at all, I get weird values I can't really wrap my head around. So my question: is there a way, with either the new Audio Engine or older built-in stuff like the mentioned node, to get the frequency data of my sounds?
    We do have an implementation of KissFFT in the engine (which allows frequency-domain analysis), but a proper spectral analyzer hasn't been implemented yet. It's definitely something we want to get around to doing, though!

    I don't remember the old visualizer, but I believe it's spitting out non-normalized audio values. So you'll probably want to take the absolute value of the output and scale it from integer to float (0.0f to 1.0f) ranges.
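    Something along these lines, assuming the plugin outputs 16-bit signed integer values (worth verifying against what it actually returns):

        // Sketch: normalize a raw integer audio value into the 0.0-1.0 range.
        float NormalizeSample(const int32 RawValue)
        {
            const float Scaled = FMath::Abs(static_cast<float>(RawValue)) / 32768.0f;
            return FMath::Clamp(Scaled, 0.0f, 1.0f);
        }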

    Leave a comment:


  • replied
    Originally posted by Doublezer0 View Post
    This is an inspirational addition to the engine. My mind is a raging torrent of imagination with what I could do with this.

    Leave a comment:


  • replied
    This is an inspirational addition to the engine. My mind is a raging torrent of imagination with what I could do with this.

    Leave a comment:


  • replied
    Hey Dan! First of all, awesome work. It's amazing to see Epic putting more and more resources into audio development. I'm currently working on some kind of audio visualization, for which I need to get the frequencies of the played audio. I'm basically trying to map my sound frequencies to color values. However, when I'm using the "Compute Frequency Spectrum" node, which I think was developed on an Epic Friday and isn't documented at all, I get weird values I can't really wrap my head around. So my question: is there a way, with either the new Audio Engine or older built-in stuff like the mentioned node, to get the frequency data of my sounds?

    Leave a comment:
