Volumetric Fog is now supported.
Wohaa!
So am I understanding correctly that you need to tag meshes after they’ve been placed in the scene if you want them to occlude sound? Is there no way to have this be an automatic effect, or to assign the phonon geometry component tag to a mesh in the content browser instead of needing to apply it after they’ve been placed in the map?
This does seem like something that could be improved.
I had a bunch of crashes playing around with it yesterday, which is understandable at this stage, but there seemed to be a lot of extra setup compared to the Oculus Audio plugin.
I wonder what the main differences are and why we should use one vs. the other?
I’m afraid it’s hard to know without more details. Could you post a repro on Answer Hub? Thanks!
Sorry if I missed a post or something, but where can I see the new Synthesizer demonstrated and how to use it? Did it get covered on some stream?
There is a post earlier in this thread that is a starting point for now:
New volumetric lighting looks amazing. This is default project, default post-processing, and I switched the lights to fully dynamic. Can’t wait to see what experts do.
And here’s one with the lights in motion
Unfortunately I’m not able to reproduce it on the mannequin mesh; it seems to only appear with complex bones like hair or chains…
Thanks for the info, and fingers crossed you can get it working at least on the high-end mobile VR devices, for GearVR and Daydream support in this version already (or perhaps in a hotfix?)
Great question! At the moment, the Phonon Plugin uses a component approach to tagging meshes, which means you would either have to add it to the mesh once it’s in the scene OR create a special BP actor of that mesh.
An optimization I found was to create a BP Actor with an invisible cube mesh and a Geometry Component; I was then able to add this to a complex scene like a blocking volume, but for reflections.
Hey xN31! I’m sure @freeman_valve can expound on improvements they’ve made that we weren’t able to get in already, but it’s worth repeating that it’s in an experimental state at the moment. Packaging is just one of the many things we’re eager to resolve as we refine and improve the implementation. In some ways, Steam Audio goes places our plugins have never gone before as it’s a very deep replacement of core audio engine features. We’re interested in streamlining this process so that Steam Audio and any other interested developer can make deep changes to how our audio engine works.
With that said, it’s worth exploring the Steam Audio Plugin Settings in your Project Settings:
The length of your impulse (its duration) is your Impulse Response Duration, and the Indirect Contribution is like a gain value.
Once you have it working, you’ll want to go over these Global Project Settings to tune your performance and preference.
In my very brief time experimenting with the plugin for GDC, my preferences were to lean toward a higher ambisonics order (which increases the accuracy of the spherical response), a shorter impulse duration, and a slightly boosted indirect contribution. The result felt like a subtle early reflection, which was very nice for giving a feeling of physical presence to the level geometry, and then I paired it with our algorithmic reverb as an optimized and more aesthetically designed late reflection.
However, I think there are lots of opportunities for use and I’m excited to hear what people come up with and encourage experimentation.
(The HMD transition crash could be a bug with audio device swapping and the current version of the Steam Audio plugin)
Hiya TheJamsh,
There are a few steps involved in setting up an envelope follower, but once you have it, you can do whatever you want with the data, whether that’s feeding scalar values into a Material Instance or driving some other game parameter!
The first thing you’ll want to do is set up a Source Effect of the Envelope Follower type:
Then you’ll want to put this effect in a Source Effect Chain. Source Effect Chains are pre-attenuation effect chains, processed in order of listing, that you attach to a source sound (like a SoundWave, a SoundCue, a Synth, or whatever you like).
Once created, you’ll want to attach it to your source sound like so:
Once you’ve established which sound you want to follow, in whatever actor you want to create the effect in, add a special component called the EnvelopeFollowerListener (this is a component that will link the Envelope Follower to a Delegate call in your BP). Additionally, you’ll want to add a reference to your Envelope Follower Source Sound Effect so you can link the two together.
Once you’ve added your Envelope Follower Source Effect Reference, you’ll want to make sure it’s referencing the correct asset:
Then you’ll want to register your Envelope Follower Listener to listen to your Envelope Follower Source Effect. Basically, this says: hey EnvelopeFollowerListener component, I want you to listen to this specific Envelope Follower Source Effect. You can also unregister (which is awesome, because it means you can register and unregister based on Blueprint logic).
Once you’ve registered them together, the On Envelope Follower Update event (which is bound to the EnvelopeFollowerListener component) will send out delegate information with float data you can use however you wish!
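For anyone who’d rather wire this up in C++ than in Blueprint, here’s a minimal sketch of the same flow. The class and function names used here (UEnvelopeFollowerListener, RegisterEnvelopeFollowerListener/UnregisterEnvelopeFollowerListener, USourceEffectEnvelopeFollowerPreset, and the OnEnvelopeFollowerUpdate delegate) are assumptions based on the Blueprint nodes described above, so double-check them against the Synthesis plugin headers in your engine version:

```cpp
// EnvelopeFollowerActor.h -- hypothetical example actor; the file and class names are ours.
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "EnvelopeFollowerListener.h"                    // Synthesis plugin (assumed header path)
#include "SourceEffects/SourceEffectEnvelopeFollower.h"  // Envelope Follower source effect preset
#include "EnvelopeFollowerActor.generated.h"

UCLASS()
class AEnvelopeFollowerActor : public AActor
{
	GENERATED_BODY()

public:
	// Reference to the Envelope Follower Source Effect asset that lives in the
	// Source Effect Chain attached to the sound you want to follow.
	UPROPERTY(EditAnywhere, Category = "Audio")
	USourceEffectEnvelopeFollowerPreset* EnvelopeFollowerPreset;

	// The EnvelopeFollowerListener component that links the source effect to a delegate call.
	UPROPERTY(VisibleAnywhere, Category = "Audio")
	UEnvelopeFollowerListener* EnvelopeListener;

	AEnvelopeFollowerActor()
	{
		EnvelopeListener = CreateDefaultSubobject<UEnvelopeFollowerListener>(TEXT("EnvelopeListener"));
	}

	virtual void BeginPlay() override
	{
		Super::BeginPlay();
		if (EnvelopeListener && EnvelopeFollowerPreset)
		{
			// Bind our handler to the On Envelope Follower Update event...
			EnvelopeListener->OnEnvelopeFollowerUpdate.AddDynamic(this, &AEnvelopeFollowerActor::HandleEnvelope);
			// ...and tell the listener which specific source effect to listen to.
			EnvelopeListener->RegisterEnvelopeFollowerListener(EnvelopeFollowerPreset); // assumed function name
		}
	}

	virtual void EndPlay(const EEndPlayReason::Type Reason) override
	{
		// You can also unregister whenever your logic calls for it.
		if (EnvelopeListener && EnvelopeFollowerPreset)
		{
			EnvelopeListener->UnregisterEnvelopeFollowerListener(EnvelopeFollowerPreset); // assumed function name
		}
		Super::EndPlay(Reason);
	}

	UFUNCTION()
	void HandleEnvelope(float EnvelopeValue)
	{
		// Float envelope data arrives here; feed it into a Material Instance
		// scalar parameter, a light intensity, or any other game parameter.
		UE_LOG(LogTemp, Verbose, TEXT("Envelope: %f"), EnvelopeValue);
	}
};
```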
Question for the audio devs:
How hard is it to give that Phonon Geometry Component procedurally generated geometry, so geometry that is not a static mesh? Is it relatively simple to spawn a new Phonon Geometry Component at runtime and give it a vertex and index buffer or is that not possible?
I am not talking about “there’s a blueprint node for that”-possible, I’m more talking about “after a few hours of searching through the code and making things that are private public or adding a few small functions to the component, it will work”-possible, so is there a technical limitation that would make that not work?
Oculus GearVR splash screen no longer showing in 4.16P1: GearVR loading Splash screen no longer working in 4.16P1 - Platform & Builds - Epic Developer Community Forums
@.reynolds thanks, very informative! Concerning my first two questions:
That sounds really cool! I have no idea!
I’m assuming you mean BSP. Currently, I believe BSP is either all opted in or not at all, though there are a lot of different strategies for implementation and we’re still exploring those. If your code/script generates BSP in the level editor, then you could probably export the scene like normal after your script has done its work. If you do it in code, then I’m not sure; maybe you can tap into the OBJ export somehow.
“A few hours” is a few hours for one programmer, a few days for another, and a few minutes for yet another, but I believe in you, man! It sounds like a really cool thing you’re attempting! Go for it!
It’s funny you should say that, because at the moment there is a debug output in your Output Log when you start up the editor: a "LogAudioMixerDebug: " entry will list your audio device, sample rate, and all your output channels. This will probably be turned off in the future.
As for your second question, yes, you can totally add the component in the details window–there should be an “+ Add Component” drop down menu available.
I’m not talking about BSP; I’m really talking about a fully procedural mesh where you only have the individual vertex positions and the index buffer for how to connect them. Something like you have with the Procedural Mesh Component.
So does that Phonon Geometry Component only need the vertex and index buffer from the static mesh, or does it need something like the simple collision that’s set up for the static mesh? Is there anything like that which would make it difficult to use procedural meshes with it?
Thanks!
I believe the OBJ is constructed from collision when available, but @freeman_valve would have more insight into your method!
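For what it’s worth, here’s roughly what “only vertex positions and an index buffer” looks like when handed to a Procedural Mesh Component at runtime. This is a sketch using the standard ProceduralMeshComponent API, nothing Phonon-specific; whether you ask it to build collision (the bCreateCollision flag) would then matter if the acoustic scene export is constructed from collision, as mentioned above:

```cpp
#include "ProceduralMeshComponent.h"

// Build a simple quad from raw vertex positions and an index buffer.
void BuildQuad(UProceduralMeshComponent* ProcMesh)
{
	// Four vertex positions...
	TArray<FVector> Vertices = {
		FVector(0.f, 0.f, 0.f),
		FVector(100.f, 0.f, 0.f),
		FVector(100.f, 100.f, 0.f),
		FVector(0.f, 100.f, 0.f)
	};

	// ...and an index buffer describing two triangles.
	TArray<int32> Triangles = { 0, 1, 2, 0, 2, 3 };

	// Normals, UVs, colors, and tangents are optional for this illustration.
	// bCreateCollision = true generates collision for this section, which may
	// matter if the acoustic geometry export is built from collision data.
	ProcMesh->CreateMeshSection(
		0, Vertices, Triangles,
		TArray<FVector>(), TArray<FVector2D>(), TArray<FColor>(), TArray<FProcMeshTangent>(),
		/*bCreateCollision=*/ true);
}
```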
Does this update include support for audio streaming from file? I saw the new Audio Engine stuff and got very excited, but I’m not clear on whether UE4 yet supports streaming audio. The new synthesizer stuff is rad but getting streaming audio on all platforms (esp. PS4/XB1) would be massively helpful.