New Audio Engine: Early Access Quick-Start Guide


    Hi there, quick question about patches:

    I've wired a very simple patch with Source->envelope and Destination->gain, and it works great: the envelope correctly affects the patched destination (in this case, overall gain). However, when I change the destination to Destination->osc 1 gain, it seems to have no effect. Is there something I'm missing? I'm seeing similar behavior with any of the individual osc parameters (gain, freq, etc.).

    I'm trying to use osc 1 and osc 2 to make a sort of 808-sounding bass kick, where there's the clicky sound (noise in osc 1) and the resonating bass sound (sine in osc 2). To do that, I'm trying to apply a different ADSR envelope to each oscillator for the two parts of the sound. I assume that changing the gain on osc 1 independently of osc 2 is possible; otherwise I don't see why the patch destination dropdown would distinguish between osc 1 and osc 2.

    I'm very new to synthesis but have been doing lots of outside reading to learn the basics. Is there one simple step I'm missing, or perhaps a parameter that is obviously set incorrectly without my knowing?

    Thanks,

    - Arj



      Hey Dan!
      I just upgraded to 4.18 on Mac. When I play a working synth patch in the editor and then stop the game, the editor freezes. Am I doing something wrong, or is it only a Mac problem?
      Thanks!



        After investigating, it turns out to be due to the start/stop behavior: you need to use the Stop node if you want to avoid the editor freeze, which is tricky.



          Originally posted by Tomavatars View Post
          After investigating, it turns out to be due to the start/stop behavior: you need to use the Stop node if you want to avoid the editor freeze, which is tricky.
          Hi Tomavatars! Thanks for the report; I'll ask Ethan if he has an idea about this!
          Dan Reynolds
          Technical Sound Designer || Unreal Audio Engine Dev Team
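
          A minimal C++ sketch of the Stop-before-teardown workaround described above, assuming the synth lives on an actor as a USynthComponent member; AMySynthActor and SynthComponent are illustrative names:

            #include "Components/SynthComponent.h"

            // Explicitly stop the synth before play ends, mirroring the
            // "use the Stop node" workaround from the posts above.
            void AMySynthActor::EndPlay(const EEndPlayReason::Type EndPlayReason)
            {
                if (SynthComponent)
                {
                    SynthComponent->Stop(); // don't tear down a still-running synth
                }
                Super::EndPlay(EndPlayReason);
            }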



            Originally posted by ArjunTheMiella View Post
            Hi there, quick question about patches:

            I've wired a very simple patch with Source->envelope and Destination->gain, and it works great: the envelope correctly affects the patched destination (in this case, overall gain). However, when I change the destination to Destination->osc 1 gain, it seems to have no effect. Is there something I'm missing? I'm seeing similar behavior with any of the individual osc parameters (gain, freq, etc.).

            I'm trying to use osc 1 and osc 2 to make a sort of 808-sounding bass kick, where there's the clicky sound (noise in osc 1) and the resonating bass sound (sine in osc 2). To do that, I'm trying to apply a different ADSR envelope to each oscillator for the two parts of the sound. I assume that changing the gain on osc 1 independently of osc 2 is possible; otherwise I don't see why the patch destination dropdown would distinguish between osc 1 and osc 2.

            I'm very new to synthesis but have been doing lots of outside reading to learn the basics. Is there one simple step I'm missing, or perhaps a parameter that is obviously set incorrectly without my knowing?

            Thanks,

            - Arj
            Hi Arj!

            Yeah, the patch system can get a bit weedy. When I made my drum kit for our GDC floor demo, I conceded to using two synthesizers per kit piece. A bit pricier, but it was way easier to program.
            Dan Reynolds
            Technical Sound Designer || Unreal Audio Engine Dev Team
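
            A sketch of that two-synth approach, assuming the Synthesis plugin's UModularSynthComponent API; AKickDrum, ClickSynth, and BodySynth are illustrative names, and the envelope values are placeholders to tune by ear:

              #include "SynthComponents/EpicSynth1Component.h"

              // One modular synth per layer, each with its own ADSR envelope.
              void AKickDrum::TriggerKick()
              {
                  // Click layer: noise oscillator, very fast envelope.
                  ClickSynth->SetOscType(0, ESynth1OscType::Noise);
                  ClickSynth->SetEnvelopeAttackTime(1.0f); // msec
                  ClickSynth->SetEnvelopeDecayTime(30.0f);
                  ClickSynth->SetEnvelopeSustainGain(0.0f);
                  ClickSynth->NoteOn(60.0f, 127, 0.05f);

                  // Body layer: sine oscillator, long resonant decay.
                  BodySynth->SetOscType(0, ESynth1OscType::Sine);
                  BodySynth->SetEnvelopeAttackTime(2.0f);
                  BodySynth->SetEnvelopeDecayTime(400.0f);
                  BodySynth->SetEnvelopeSustainGain(0.0f);
                  BodySynth->NoteOn(36.0f, 127, 0.5f);
              }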



              Originally posted by rasamaya View Post
              For the Android and iOS .ini files, could I force mute if no headphones are detected? I saw a workaround for Unity, but can't get this to work with Unreal. I basically don't want audio to play, or just want zero volume, if there are no headphones being used. Any help would be super.
              Hi Rasamaya!

              You will need to take advantage of some kind of device-notification message, and you will probably need to look into the APIs for the various devices, as they will differ.

              You can create a mute button, though, and use the SoundMix system to set 0.0f volume on the Master SoundClass.
              Dan Reynolds
              Technical Sound Designer || Unreal Audio Engine Dev Team
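
              A minimal sketch of that SoundMix mute in C++, assuming a USoundMix asset (MuteMix) and the Master USoundClass (MasterSoundClass) are assigned somewhere; the headphone-detection callback itself is per-platform code. UMyAudioManager is an illustrative name:

                #include "Kismet/GameplayStatics.h"
                #include "Sound/SoundMix.h"
                #include "Sound/SoundClass.h"

                void UMyAudioManager::SetMuted(bool bMuted)
                {
                    if (bMuted)
                    {
                        // Override the Master SoundClass to 0.0f volume while the mix is pushed.
                        UGameplayStatics::SetSoundMixClassOverride(
                            this, MuteMix, MasterSoundClass,
                            /*Volume=*/0.0f, /*Pitch=*/1.0f,
                            /*FadeInTime=*/0.1f, /*bApplyToChildren=*/true);
                        UGameplayStatics::PushSoundMixModifier(this, MuteMix);
                    }
                    else
                    {
                        UGameplayStatics::PopSoundMixModifier(this, MuteMix);
                    }
                }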



                Hey Dan! First of all, awesome work. It's amazing to see Epic putting more and more resources into audio development. I'm currently working on a kind of audio visualization, for which I need to get the frequencies of the audio being played; I'm basically trying to map my sound frequencies to color values. However, when I'm using the "Compute Frequency Spectrum" node, which I think was developed on an Epic Friday and isn't documented at all, I get weird values I can't really wrap my head around. So my question: is there a way, with either the new audio engine or older built-in stuff like the mentioned node, to get the frequency data of my sounds?



                  This is an inspirational addition to the engine. My mind is a raging torrent of imagination with what I could do with this.
                  http://unrealdeveloper.uk



                    Originally posted by Doublezer0 View Post
                    This is an inspirational addition to the engine. My mind is a raging torrent of imagination with what I could do with this.

                    Dan Reynolds
                    Technical Sound Designer || Unreal Audio Engine Dev Team



                      Originally posted by mountainking View Post
                      Hey Dan! First of all, awesome work. It's amazing to see Epic putting more and more resources into audio development. I'm currently working on a kind of audio visualization, for which I need to get the frequencies of the audio being played; I'm basically trying to map my sound frequencies to color values. However, when I'm using the "Compute Frequency Spectrum" node, which I think was developed on an Epic Friday and isn't documented at all, I get weird values I can't really wrap my head around. So my question: is there a way, with either the new audio engine or older built-in stuff like the mentioned node, to get the frequency data of my sounds?
                      We do have an implementation of KissFFT in the engine (which allows frequency-domain analysis), but a proper spectral analyzer hasn't been implemented yet; it's definitely something we want to get around to doing, though!

                      I don't remember the old visualizer well, but I believe it's spitting out non-normalized audio values, so you'll probably want to take the absolute value of the output and scale it from integer to float (0.0f to 1.0f) ranges.
                      Dan Reynolds
                      Technical Sound Designer || Unreal Audio Engine Dev Team
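
                      An illustrative normalization along those lines, assuming the node's output sits on a signed 16-bit integer scale (an assumption; verify against your actual values):

                        #include "Math/UnrealMathUtility.h"

                        // Absolute value, then scale into the 0.0f-1.0f range.
                        float NormalizeSpectrumValue(float RawValue)
                        {
                            const float MaxMagnitude = 32768.0f; // |int16| range (assumed)
                            return FMath::Clamp(FMath::Abs(RawValue) / MaxMagnitude, 0.0f, 1.0f);
                        }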



                        Any way of getting a SynthComponent to output its audio through an ASIO audio device?



                          Hi,

                          I'm excited to get into the stuff in the new audio engine. I have a couple of questions about the best way to build a music system in BP that I think tie into that.

                          Currently we are on UE4.17 and planning to jump to 4.19 when it’s out. I note that timing stuff was covered in this thread back around post #73 from @drfzjd.

                          Probably the most critical timing thing for me is tracking playback time of a music file, and stopping it at designated “exit points” where we then play/stitch an “ending stinger” Cue.

                          To track timing for the currently playing music Cue, we multiply the Cue's progress percentage by its duration: so, for instance, 43% complete * 1:12.434. We have a binding from the audio component's OnAudioPlaybackPercent event to multiply the Percent float it outputs by the duration of the Sound Cue (https://docs.unrealengine.com/latest...aybackPercent/).

                          This brings me to my first question: Is this the most accurate way to monitor a music Cue’s time?


                          Also, I just watched the “Procedural Audio in the new Unreal Audio Engine” video from May of last year. At about 43 minutes in, Aaron mentions that he addressed some stuff where the old audio engine was not queueing up and executing events at the same time.

                          Next question: he mentions this was done for 4.16, but is it part of the new audio engine that you have to enable, or part of the default one at this point?


                          Ultimately I'm hoping to be able to stop a track and play an ending stinger with <20 ms of latency, so not exactly "sample accuracy". Still testing, but we may already be there. One thing that appeared to cause the ending-stinger Cues to play late is when the game requests to stop the current Cue and the next exit point is not far enough away. After some experimentation, it looks best to skip an exit point and go to the next one if it's <0.5 seconds after the request.


                          Final question(s):

                          If we switched to the new audio engine now, with 4.17:
                          • Are things pretty much the same, stability-wise if we aren’t using any of the new plugins?
                          • Will existing audio related BP or Sound Cue nodes change in functionality at all?
                          Thanks
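
                          For reference, a sketch of the percent-times-duration tracking described above, assuming OnAudioPlaybackPercent's (sound wave, percent) payload. UMusicManager, FindNextExitPoint, bStopRequested, and PendingExitTime are illustrative; the handler must be declared as a UFUNCTION in the header so AddDynamic can bind it:

                            // Bind once, e.g. at startup:
                            // MusicComponent->OnAudioPlaybackPercent.AddDynamic(
                            //     this, &UMusicManager::HandlePlaybackPercent);

                            void UMusicManager::HandlePlaybackPercent(
                                const USoundWave* PlayingSoundWave, const float PlaybackPercent)
                            {
                                const float ElapsedSeconds =
                                    PlaybackPercent * PlayingSoundWave->Duration;

                                // Per the experimentation above: only take an exit point
                                // that is at least 0.5s after the stop request.
                                const float NextExit = FindNextExitPoint(ElapsedSeconds); // hypothetical helper
                                if (bStopRequested && NextExit - ElapsedSeconds >= 0.5f)
                                {
                                    PendingExitTime = NextExit;
                                }
                            }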



                            Originally posted by ArthurBarthur View Post
                            Not Dan here. Do you have the spoken dialogue ready as an audio file, or do you need it to react to the user's voice live? If it's audio files, you can do it in Blueprints: set up the 'Envelope Follower' source effect. Instructions are in the first or second post of this thread.
                            Live voice is trickier (for now... dun-dun-duuun), but if you're comfortable with C++ you can do it.

                            Have fun!
                            What kind of C++ magic would it take to make this work? I know enough to cobble things together, and I'm planning a visual installation using projection mapping in a VR cave, with jellyfish swimming around a tank; I want to drive the colors of the jellies from live audio (smaller jellies are mapped to higher frequencies, medium jellies respond to the mid-range, and large jellies respond to low frequencies). I have 4.19 set up now to work with Omnidome for projection mapping. Thanks!



                              No need for C++, really. I saw local mic capture with an envelope (amplitude, not separate frequencies) in the 4.19 changelog. You can use the older visualization plugin to get per-frequency values, or set up your own little machine that does it with the tools and effects in the new audio engine.
                              What kind of audio are you going to drive it with? If it's OS audio, and the mic is working, you can always virtually route PC audio through to a mic "input" with programs like VoiceMeeter. Beware of conversion to mono and other mic eccentricities.
                              There's probably already a better way to do all this, I forget...
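
                              Whichever route provides the per-band values, the jellyfish color mapping itself can stay small; a hedged sketch driving a dynamic material tint from a normalized band level ("EmissiveTint", the colors, and UpdateJellyColor are placeholders):

                                #include "Materials/MaterialInstanceDynamic.h"

                                // Map a 0-1 band level onto a dim-to-bright emissive tint.
                                void UpdateJellyColor(UMaterialInstanceDynamic* JellyMID, float BandLevel)
                                {
                                    const FLinearColor Dim(0.02f, 0.05f, 0.2f);
                                    const FLinearColor Bright(0.2f, 0.9f, 1.0f);
                                    const FLinearColor Tint = FLinearColor::LerpUsingHSV(
                                        Dim, Bright, FMath::Clamp(BandLevel, 0.0f, 1.0f));
                                    JellyMID->SetVectorParameterValue(TEXT("EmissiveTint"), Tint);
                                }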



                                Sweet, it'll be from a mic or from the output of a DAW. I'll check out VoiceMeeter.

