
New Audio Engine: Quick-Start Guide


  • replied
    @SVR33

    Yup, we used the Envelope Follower to drive Instanced Material Parameters to create a reactive effect.
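    In C++, the material side of that pattern looks roughly like this (a minimal sketch; the AReactiveMeshActor class and the "EnvGlow" parameter name are hypothetical, and the envelope value can come from wherever you surface the Envelope Follower's output):

    // ReactiveMeshActor.h -- drives a per-instance material parameter and the
    // mesh scale from an audio envelope value. "EnvGlow" is a hypothetical
    // scalar parameter you would add to your own material.
    #pragma once

    #include "GameFramework/Actor.h"
    #include "Components/StaticMeshComponent.h"
    #include "Materials/MaterialInstanceDynamic.h"
    #include "ReactiveMeshActor.generated.h"

    UCLASS()
    class AReactiveMeshActor : public AActor
    {
        GENERATED_BODY()

    public:
        AReactiveMeshActor()
        {
            MeshComp = CreateDefaultSubobject<UStaticMeshComponent>(TEXT("Mesh"));
            RootComponent = MeshComp;
        }

        virtual void BeginPlay() override
        {
            Super::BeginPlay();
            // One dynamic instance per actor, so each mesh reacts independently.
            DynamicMat = MeshComp->CreateDynamicMaterialInstance(0);
        }

        // Feed this the latest envelope value (roughly 0..1), e.g. from a
        // Blueprint event wired to the Envelope Follower's output.
        UFUNCTION(BlueprintCallable, Category = "Audio")
        void OnEnvelopeValue(float Envelope)
        {
            if (DynamicMat)
            {
                DynamicMat->SetScalarParameterValue(TEXT("EnvGlow"), Envelope);
            }
            MeshComp->SetWorldScale3D(FVector(1.0f + Envelope));
        }

    private:
        UPROPERTY(VisibleAnywhere)
        UStaticMeshComponent* MeshComp;

        UPROPERTY(Transient)
        UMaterialInstanceDynamic* DynamicMat;
    };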


  • replied
    Originally posted by kjekkbart View Post
    Not Dan here. Do you have the spoken dialogue ready as an audio file, or do you need it to react to the user's voice live? If it's audio files, you can do it in Blueprints: set up the 'Envelope Follower' source effect. Instructions are in the first or second post of this thread.
    Live voice is trickier (for now... dun-dun-duuun), but if you're comfortable with C++ you can do it.

    Have fun!
    Oh, that's perfect! Yes, I'm using audio files and, somehow, just completely missed this. Thanks so much!



  • replied
    Not Dan here. Do you have the spoken dialogue ready as an audio file, or do you need it to react to the user's voice live? If it's audio files, you can do it in Blueprints: set up the 'Envelope Follower' source effect. Instructions are in the first or second post of this thread.
    Live voice is trickier (for now... dun-dun-duuun), but if you're comfortable with C++ you can do it.

    Have fun!
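    For the live-voice route, the heart of it is just running an envelope follower over the incoming capture buffers. An engine-agnostic C++ sketch (assuming you can get float sample buffers from your capture source; the class name is made up):

    // A minimal attack/release envelope follower you can run over
    // microphone capture buffers. How you obtain the float samples
    // (e.g. from a voice capture API) is up to you.
    #include <cmath>

    class FSimpleEnvelopeFollower
    {
    public:
        FSimpleEnvelopeFollower(float SampleRate, float AttackMs = 10.0f, float ReleaseMs = 100.0f)
        {
            // Convert attack/release times into one-pole smoothing coefficients.
            AttackCoef  = std::exp(-1.0f / (SampleRate * AttackMs  * 0.001f));
            ReleaseCoef = std::exp(-1.0f / (SampleRate * ReleaseMs * 0.001f));
        }

        // Process one buffer of mono samples; returns the envelope after the buffer.
        float Process(const float* Samples, int NumSamples)
        {
            for (int i = 0; i < NumSamples; ++i)
            {
                const float Rectified = std::fabs(Samples[i]);
                // Rise quickly on attack, decay slowly on release.
                const float Coef = (Rectified > Envelope) ? AttackCoef : ReleaseCoef;
                Envelope = Coef * (Envelope - Rectified) + Rectified;
            }
            return Envelope;
        }

    private:
        float AttackCoef = 0.0f;
        float ReleaseCoef = 0.0f;
        float Envelope = 0.0f;
    };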



  • replied
    Hi @dan.reynolds,

    In your GDC presentation, you had some materials that seemed to be influenced by audio waveforms. I'm trying to build a HAL 9000-esque interface that will modify color & size based upon spoken dialogue.

    Has anything like that been exposed in these 4.16 experimental features?

    Thanks!



  • replied
    Originally posted by mindridellc View Post
    This would be great. Right now I have a Blueprint Actor which uses the Effect Chain when one of its variables is set to one type, and bypasses the chain when that variable is set to another type. Swapping it out at runtime is not an option, since the Effect Chain is global, so one instance of that object's message to the chain overrides the other.
    Either making this instanceable, or making the ability to set the chain per Actor scriptable in Blueprint, would work for me. I'm sure there are other workarounds I can try in the meantime; I just need to approach it with a fresh head, heh.
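    One interim workaround, sketched in C++ (hedged: WetSound and DrySound are hypothetical assets, one routed through the Source Effect Chain and one without it, and AudioComp is the actor's UAudioComponent):

    // Swap which asset the actor plays instead of mutating the global chain.
    // Authoring two versions of the sound sidesteps the "one instance's
    // message overrides the other" problem, at the cost of asset duplication.
    void AMyAudioActor::SetEffectChainEnabled(bool bEnabled)
    {
        USoundBase* Desired = bEnabled ? WetSound : DrySound;
        if (AudioComp && AudioComp->Sound != Desired)
        {
            AudioComp->Stop();
            AudioComp->SetSound(Desired);
            AudioComp->Play();
        }
    }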



  • replied
    Originally posted by dan.reynolds View Post
    Yes, they are globally controlled at the moment, but we're looking at ways we can modulate instances of the Source Effects without having to create a bunch of Instanced assets (like with Materials).
    This would be great. Right now I have a Blueprint Actor which uses the Effect Chain when one of its variables is set to one type, and bypasses the chain when that variable is set to another type. Swapping it out at runtime is not an option, since the Effect Chain is global, so one instance of that object's message to the chain overrides the other.



  • replied
    Originally posted by mindridellc View Post
    Hi Dan, another quick one for you. Is it possible to control which audio output device the Modular Synth Component uses? Whenever I test my patch in VR, the rest of my audio (which comes from a different engine entirely, never mind that) comes out of whatever my Windows system setting is (my monitor), while just the Synthesizer component plays only through my Rift headphones. If I PIE, it comes out of the monitor as expected.

    Thanks
    Hey mindridellc! Are you using the new Audio Engine (-audiomixer) or the old Audio Engine?

    And you're saying that when you do VR Preview, the synth is coming out of your Rift and when you do regular PIE it comes out of your Windows default?

    And you say that when you do VR Preview, sounds being used by another engine come out of your Windows Default and NOT the Rift?


    The expectation from Unreal Engine's perspective is that if you're doing a VR Preview, the audio should come out of your Rift's headphones (or whatever VR device you have active), and that regular PIE should come out of either your computer's default device or whatever you've set as the default output in your Windows Project Settings:

    [Screenshot: WindowsAudioDeviceProjectSettings.JPG]



  • replied
    Hi Dan, another quick one for you. Is it possible to control which audio output device the Modular Synth Component uses? Whenever I test my patch in VR, the rest of my audio (which comes from a different engine entirely, never mind that) comes out of whatever my Windows system setting is (my monitor), while just the Synthesizer component plays only through my Rift headphones. If I PIE, it comes out of the monitor as expected.

    Thanks



  • replied
    Yes, they are globally controlled at the moment, but we're looking at ways we can modulate instances of the Source Effects without having to create a bunch of Instanced assets (like with Materials).
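    To illustrate the "global" part: changing a preset's settings at runtime is heard on every source routed through it, much like editing a base Material rather than a Material Instance. A rough C++ sketch (the generated preset classes in the Synthesis plugin expose SetSettings, but the exact settings fields here are assumptions and may differ by engine version):

    // Because presets are global, this change affects *every* voice whose
    // Source Effect Chain contains this preset -- there are no per-instance
    // copies yet. Field names below are from memory; check your version.
    void TweakFilter(USourceEffectFilterPreset* FilterPreset)
    {
        if (FilterPreset)
        {
            FSourceEffectFilterSettings Settings;
            Settings.CutoffFrequency = 4000.0f; // assumed field name
            FilterPreset->SetSettings(Settings);
        }
    }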



  • replied
    Never mind, I didn't see the Source Effect (Some Effect) Object type before, only the Source Effect (Some Effect) Preset type.



  • replied
    It's not clear to me from the provided pictures how to modulate a value on a Source Effect Preset. How do I access a reference to the Source Effect Preset? Do I have to drill into it from an Effect Chain? Are Effect Chains and Source Effect Presets global instances, or are they per-instance uniquely modulated like Dynamic Material Instances?



  • replied
    Originally posted by drfzjd View Post
    Hey Dan,

    I'm trying to build a VR step sequencer with the Unreal Engine, using Wwise and the Oculus Spatializer for 3D audio. In the current state of the new audio engine, is there a way to play sound files with sample accuracy? Everything I've tried so far depends on the frame rate, which obviously doesn't offer very accurate timing for audio. But maybe I'm missing something.
    I'd love to see a short walkthrough of how to achieve sample-accurate timing if this is possible.

    Thank you for the great work on the new audio engine, the synthesis features are amazing!
    So part of the challenge (regardless of what engine you use) is that logic and user interaction traffic through the game thread, and the game thread is synchronized with your frame rate.

    We don't have anything out of the box for scheduling inter-frame events.

    If you wish to use Blueprints, you will need to work within the limitations of your frame-rate tick.

    With that said, in the new Audio Engine, if you make play calls for multiple audio files in the same frame execution, they will all play synchronized. If you mark them Virtualize When Silent, they will continue to track playback position even when silent.

    However, it's important to appreciate that there are many threads at work.

    You have the Game Thread, the Audio Logic Thread, and the Audio Rendering Thread. If you wanted, you could create an object in code that operates on the Audio Logic Thread and have visualizations wait for delegates from that thread. But we don't have a walkthrough for that, and it's not a trivial thing to build.

    With that said, the Audio Logic Thread can tick faster than the Game Thread.

    Or you can optimize performance on your game thread to ensure high frame rates.
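    A tiny sketch of the "same frame" point (assuming a hypothetical actor holding pre-created UAudioComponents whose sounds are marked Virtualize When Silent):

    // Starting every layer inside one function call means all the Play()
    // calls land on the same frame's execution, so the new Audio Engine
    // starts them synchronized with each other.
    void AStepSequencerActor::StartAllLayers()
    {
        for (UAudioComponent* Layer : LayerComponents) // pre-created components
        {
            if (Layer)
            {
                Layer->Play();
            }
        }
    }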



  • replied
    Hey Dan,

    I'm trying to build a VR step sequencer with the Unreal Engine, using Wwise and the Oculus Spatializer for 3D audio. In the current state of the new audio engine, is there a way to play sound files with sample accuracy? Everything I've tried so far depends on the frame rate, which obviously doesn't offer very accurate timing for audio. But maybe I'm missing something.
    I'd love to see a short walkthrough of how to achieve sample-accurate timing if this is possible.

    Thank you for the great work on the new audio engine, the synthesis features are amazing!



  • replied
    Originally posted by dan.reynolds View Post
    Make sure the Synthesis plugin is turned on--all the DSP effects are inside it.
    Oh WOW, that was it! Thank you so much, Dan. I totally spaced on that one!



  • replied
    Originally posted by metalgunner87 View Post
    I followed the steps to use the new audio engine, and I can see all the new submixes and effect chain tools in my Sounds section. But when I try to create a new Source Effect Preset, the drop-down window that pops up shows up empty?! Am I doing something wrong? I had it working previously in another project and now it just doesn't show up. Could it be a sign that I am not really using the new audio engine? I am using the -audiomixer command line argument on a shortcut to a duplicate of my engine launcher, as the guide seems to suggest, but it doesn't seem to work. Any help would be greatly appreciated.
    Make sure the Synthesis plugin is turned on--all the DSP effects are inside it.

