New Audio Engine: Quick-Start Guide


  • replied
    Originally posted by Th120 View Post
    Alright, so the -audiomixer parameter has no effect anymore?
    I'm working with a small team, and for some of them even setting the right ini value in the launcher engine might be a challenge.
    -audiomixer still works. But if you want it on by default for YOUR project, you need to set up the config override. You can check ini files into your version control so that everyone gets them--I can't imagine how you're collaborating otherwise.
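    For anyone who wants a concrete picture, the override described at the top of the thread is just a small per-platform ini file in your project's Config folder. On Windows it looks roughly like this (see the first post for the exact module names on other platforms):

        ; MyProject/Config/Windows/WindowsEngine.ini  (project name is a placeholder)
        [Audio]
        AudioDeviceModuleName=AudioMixerXAudio2

    Check that file into version control and the new engine is on for everyone who syncs the project.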



  • replied
    Alright, so the -audiomixer parameter has no effect anymore?
    I'm working with a small team, and for some of them even setting the right ini value in the launcher engine might be a challenge.



  • replied
    Originally posted by Th120 View Post
    Is the new audio engine already the default in 4.19?
    Is it possible to run it by default? The last time I tried it, I had to start the editor with -audiomixer and then open my project; I usually open it just by clicking the project file (I tried adding it to the project file shortcut, but it did not work back then).
    It is not on by default; we want to road-test it on Fortnite before we consider it properly released--that's just our due diligence. Follow the instructions at the top of the thread for adding the config file; this will make sure it's always on for your project.



  • replied
    Is the new audio engine already the default in 4.19?
    Is it possible to run it by default? The last time I tried it, I had to start the editor with -audiomixer and then open my project; I usually open it just by clicking the project file (I tried adding it to the project file shortcut, but it did not work back then).



  • replied
    Originally posted by ConcreteGames View Post


    I just started working with the synth modules again (both modular and granular), and I still get a Mac editor freeze when using a Blueprint actor in a scene with a synth activated and started in it. When playing in the editor viewport and stopping with the Esc key, the editor freezes. BUT when playing from a Blueprint window, it's OK. Which is very strange!

    Thanks for the heads-up, ConcreteGames; I'll see if I can repro this issue. Did you make sure you're running the New Audio Engine?



  • replied

    Originally posted by dan.reynolds View Post

    Hi Tomavatars! Thanks for the report; I'll check with Ethan to see if he has an idea about this!
    I just started working with the synth modules again (both modular and granular), and I still get a Mac editor freeze when using a Blueprint actor in a scene with a synth activated and started in it. When playing in the editor viewport and stopping with the Esc key, the editor freezes. BUT when playing from a Blueprint window, it's OK. Which is very strange!




  • replied
    Originally posted by Elvince View Post
    Hi,

    I may have missed some information, but is the new Audio engine still in Beta? If so, is there a final release date in mind?

    Thanks,
    Hi Elvince, it is still in Early Access. We are currently working toward shipping it with one of our major titles--it's important to us to have road-tested it on an internally shipped title before officially switching over.

    With that said, a few titles out in the wild are already using it, including the soon-to-be-released game, A Way Out.



  • replied
    Originally posted by Shempii View Post

    Hey Dan, thank you for the reply!

    As a quick hack I was able to do just as you suggested, using 2 different synths to make a kick.

    When you say it gets a bit weedy, do you mean that the patch that I was trying to achieve isn't possible due to one bug or another? Or are there further advanced steps required to patch individual oscillator gain/freq/etc? If it's a bug, no big deal. But if you have a solution I would really appreciate it if you would share some details on how to make that work.

    I'm nitpicking but really great work on this! I'm having fun building out wacky sound contraptions. Thanks a lot!
    The advanced patches may have bugs; they're not well tested because of the sheer number of possible combinations.



  • replied
    Originally posted by dan.reynolds View Post

    Hi Arj!

    Yeah, the patch system can get a bit weedy. When I made my drum kit for our GDC floor demo, I conceded to having two synthesizers per kit piece. A bit pricier, but it was way easier to program.
    Hey Dan, thank you for the reply!

    As a quick hack I was able to do just as you suggested, using 2 different synths to make a kick.

    When you say it gets a bit weedy, do you mean that the patch that I was trying to achieve isn't possible due to one bug or another? Or are there further advanced steps required to patch individual oscillator gain/freq/etc? If it's a bug, no big deal. But if you have a solution I would really appreciate it if you would share some details on how to make that work.

    I'm nitpicking but really great work on this! I'm having fun building out wacky sound contraptions. Thanks a lot!



  • replied
    Hi,

    I may have missed some information, but is the new Audio engine still in Beta? If so, is there a final release date in mind?

    Thanks,



  • replied
    Sweet! It'll be from a mic or from the output of a DAW. I'll check out VoiceMeeter.



  • replied
    No need for C++, really. I saw local mic capture with an envelope (amplitude, not different frequencies) in the 4.19 changelog. You can use the older visualization plugin to get the different frequency values, or set up your own little machine that does it with the tools and effects in the new audio engine.
    What kind of audio are you going for to drive it? If it's OS audio and the mic is working, you can always virtually route PC audio through to the mic "input" with programs like VoiceMeeter. Beware of conversion to mono and other mic eccentricities.
    There's probably already a better way to do all this, but I forget...
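    If you do end up building your own little machine, the amplitude part is tiny--basically a one-pole smoother over the rectified signal. A standalone sketch, not tied to any UE API, with made-up names:

        #include <cmath>

        // One-pole amplitude envelope follower: smooths the absolute value of the
        // incoming signal with separate attack and release times.
        struct FSimpleEnvelopeFollower
        {
            float AttackCoeff;
            float ReleaseCoeff;
            float Envelope = 0.0f;

            FSimpleEnvelopeFollower(float SampleRate, float AttackMs, float ReleaseMs)
                : AttackCoeff(std::exp(-1.0f / (0.001f * AttackMs * SampleRate)))
                , ReleaseCoeff(std::exp(-1.0f / (0.001f * ReleaseMs * SampleRate)))
            {
            }

            // Call once per sample; returns the current envelope value.
            float Process(float Sample)
            {
                const float Rectified = std::fabs(Sample);
                const float Coeff = (Rectified > Envelope) ? AttackCoeff : ReleaseCoeff;
                Envelope = Coeff * Envelope + (1.0f - Coeff) * Rectified;
                return Envelope;
            }
        };

    Feed it samples from wherever you capture audio and use the output to drive whatever visuals you like.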



  • replied
    Originally posted by ArthurBarthur View Post
    Not Dan here. Do you have the spoken dialogue ready as an audio file, or do you need it to react to the user's voice, live? If it's audio files, you can do it in Blueprints: set up the 'Envelope Follower' source effect. Instructions are in the first or second post of this thread.
    Live voice is trickier (for now... dun-dun-duuun), but if you are cool with C++ you can do it.

    Have fun!
    What kind of C++ magic would it take to make this work? I know enough to cobble things together, and I'm planning a visual installation using projection mapping in a VR cave: jellyfish swim around a tank, and I want to drive the colors of the jellies from live audio (smaller jellies are mapped to higher frequencies, medium jellies respond to the mid-range, and large jellies respond to low frequencies). I have 4.19 set up now to work with Omnidome for projection mapping. Thanks!
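    To make the mapping concrete, this is roughly what I have in mind once I can get per-band magnitudes from somewhere (just a standalone sketch; names and thresholds are made up):

        #include <algorithm>
        #include <cstddef>
        #include <vector>

        // Brightness per jelly group, 0..1.
        struct FJellyBrightness
        {
            float Large = 0.0f;  // low frequencies
            float Medium = 0.0f; // mid-range
            float Small = 0.0f;  // high frequencies
        };

        // Sum spectrum magnitudes into low/mid/high bands and squash to 0..1.
        FJellyBrightness MapSpectrumToJellies(const std::vector<float>& Magnitudes, float SampleRate)
        {
            FJellyBrightness Out;
            if (Magnitudes.empty())
            {
                return Out;
            }
            const float BinHz = (SampleRate * 0.5f) / static_cast<float>(Magnitudes.size());
            for (std::size_t i = 0; i < Magnitudes.size(); ++i)
            {
                const float Hz = BinHz * static_cast<float>(i);
                if (Hz < 250.0f)       { Out.Large  += Magnitudes[i]; }
                else if (Hz < 2000.0f) { Out.Medium += Magnitudes[i]; }
                else                   { Out.Small  += Magnitudes[i]; }
            }
            // Crude normalization; the real scaling would be tuned by eye.
            Out.Large  = std::min(1.0f, Out.Large * 0.01f);
            Out.Medium = std::min(1.0f, Out.Medium * 0.01f);
            Out.Small  = std::min(1.0f, Out.Small * 0.01f);
            return Out;
        }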



  • replied
    Hi,

    Excited to get into the stuff in the new audio engine. I have a couple questions involving the best way to build a music system in BP that I think tie into that.

    Currently we are on UE4.17 and planning to jump to 4.19 when it’s out. I note that timing stuff was covered in this thread back around post #73 from @drfzjd.

    Probably the most critical timing thing for me is tracking playback time of a music file, and stopping it at designated “exit points” where we then play/stitch an “ending stinger” Cue.

    To track timing for the currently playing music cue, we are multiplying % of Cue’s progress by its duration. So for instance 43% complete * 1:12.434. We have a binding from the audio component’s OnAudioPlaybackPercent event to multiply the Percent float that it outputs by the duration of the sound cue (https://docs.unrealengine.com/latest...aybackPercent/).
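    In code terms, the binding amounts to something like this--a sketch assuming the native OnAudioPlaybackPercentNative delegate is available in our engine version; the helper name is our own:

        #include "Components/AudioComponent.h"
        #include "Sound/SoundWave.h"

        // Track elapsed music time as (playback percent * wave duration).
        void BindMusicTimeTracking(UAudioComponent* MusicComponent)
        {
            if (!MusicComponent)
            {
                return;
            }

            MusicComponent->OnAudioPlaybackPercentNative.AddLambda(
                [](const UAudioComponent* /*Component*/, const USoundWave* PlayingWave, const float PlaybackPercent)
                {
                    if (PlayingWave)
                    {
                        // e.g. 0.43 * 72.434 s for "43% of a 1:12.434 track"
                        const float ElapsedSeconds = PlaybackPercent * PlayingWave->Duration;
                        UE_LOG(LogTemp, Verbose, TEXT("Music time: %.3f s"), ElapsedSeconds);
                    }
                });
        }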

    This brings me to my first question: Is this the most accurate way to monitor a music Cue’s time?


    Also, I just watched the “Procedural Audio in the new Unreal Audio Engine” video from May of last year. At about 43 minutes in, Aaron mentions that he addressed some stuff where the old audio engine was not queueing up and executing events at the same time.

    Next question: he mentions this was done for 4.16, but is that change part of the new audio engine that you have to enable, or is it in the default one at this point?


    Ultimately I’m hoping to be able to stop a track and play an ending stinger with <20ms of latency, so not exactly “sample accuracy”. Still testing, but it may already be there. One thing that appeared to cause the end stinger cues to play late is when the game requests to stop the current Cue and the next exit point is not far enough away. After some experimentation, it looks like it’s best to skip an exit point and go to the next one if it’s <0.5 seconds after the request.
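    The skip rule itself is simple; as a helper it's basically this (illustrative only, our own names):

        #include "CoreMinimal.h"

        // Return the first exit point that leaves at least MinLeadSeconds of headroom
        // after the stop request, or -1 if none remains. ExitPoints is sorted, in seconds.
        float PickNextExitPoint(const TArray<float>& ExitPoints, float RequestTimeSeconds, float MinLeadSeconds = 0.5f)
        {
            for (float ExitPoint : ExitPoints)
            {
                if (ExitPoint - RequestTimeSeconds >= MinLeadSeconds)
                {
                    return ExitPoint;
                }
            }
            return -1.0f;
        }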


    Final question(s):

    If we switched to the new audio engine now, with 4.17:
    • Are things pretty much the same, stability-wise, if we aren’t using any of the new plugins?
    • Will existing audio-related BP or Sound Cue nodes change in functionality at all?
    Thanks



  • replied
    Any way of getting a SynthComponent to output its audio through an ASIO audio device?

