Unreal Engine 4.16 Preview

Is this going to have any mixed reality related improvements?

I mean, maybe not a complete mixed reality solution, but a way to extract different cameras from the game to stream them…

Cheers and thanks!

I created simple volumetric clouds using the new material domain.
I think there is some way to make it better.

Niiiiice norulex :slight_smile:

We do want to do this, but we have not had a chance to tackle it yet I’m afraid.

As for the new Audio Engine, I noticed in Aaron’s demo at GDC he was using an envelope to drive materials.

Are there new Material Nodes available for this or is this purely a Blueprint/C++ implementation? I made a frequency spectrum analyzer in 4.15 with C++, but it relies on storing compressed bulk data of the original .wav as an asset. Being able to read from sound-cues in real time would be nice!

EDIT: Totally gonna abuse the volumetric particle feature… Can these special particles also be driven with textures?

What happens with simulated bones on a skeletal mesh during translation? It seems to go completely crazy… (like in the very first version of the engine)

@dan.reynolds I’m also quite interested in this. Hopefully there’s now an easy way to affect materials and other things using audio.

How do you take the full sound mix you hear and plug that into a light or material for effects?

Volumetric Fog is now supported.


So am I understanding correctly that you need to tag meshes after they’ve been placed in the scene if you want them to occlude sound? Is there no way to have this be an automatic effect, or to assign the phonon geometry component tag to a mesh in the content browser instead of needing to apply it after they’ve been placed in the map?

This does seem like something that could be improved.
I had a bunch of crashes playing around with it yesterday, which is understandable at this stage, but there seemed to be a lot of extra setup compared to the Oculus Audio plugin.

I wonder what the main differences are and why we should use one vs the other?

I’m afraid it’s hard to know without more details. Could you post a repro on Answer Hub? Thanks!

Sorry if I missed a post or something, but where can I see the new Synthesizer demonstrated and learn how to use it? Did it get covered on some stream?

There is a post earlier in this thread that is a starting point for now:

New volumetric lighting looks amazing. This is default project, default post-processing, and I switched the lights to fully dynamic. Can’t wait to see what experts do.

And here’s one with the lights in motion

Unfortunately I’m not able to reproduce it on the mannequin mesh; it seems to only appear with complex bone chains like hair or a chain…

Thanks for the info, and fingers crossed you can at least get it working on the high-end mobile VR devices, with GearVR and Daydream support in this version already (or perhaps a hotfix?) :slight_smile:

Great question! At the moment, the Phonon Plugin uses a component approach to tagging meshes, which means you would either have to add it to the mesh once in the scene OR create a special BP actor of that mesh.

An optimization I found was to create a BP Actor with an invisible cube mesh and a Geometry Component, and then I was able to add this to a complex scene like a blocking volume–but for reflections.

Hey xN31! I’m sure @freeman_valve can expound on improvements they’ve made that we weren’t able to get in already, but it’s worth repeating that it’s in an experimental state at the moment. Packaging is just one of the many things we’re eager to resolve as we refine and improve the implementation. In some ways, Steam Audio goes places our plugins have never gone before as it’s a very deep replacement of core audio engine features. We’re interested in streamlining this process so that Steam Audio and any other interested developer can make deep changes to how our audio engine works.

With that said, it’s worth exploring the Steam Audio Plugin Settings in your Project Settings:


The length of your impulse (its duration) is your Impulse Response Duration, and the Indirect Contribution acts like a gain value.

Once you have it working, you’ll want to go over these Global Project Settings to tune your performance and preference.

In my very brief time experimenting with the plugin for GDC, my preferences were to lean toward a higher ambisonics order (which increases the accuracy of the spherical response), a shorter impulse duration, and a slightly boosted indirect contribution. The result felt like a subtle early reflection, which was very nice for giving a feeling of physical presence to the level geometry, and then I paired it with our algorithmic reverb as an optimized and more aesthetically designed late reflection.

However, I think there are lots of opportunities for use and I’m excited to hear what people come up with and encourage experimentation.

(The HMD transition crash could be a bug with audio device swapping and the current version of the Steam Audio plugin)

Hiya TheJamsh,

There are a few steps involved in setting up an envelope follower, but once you have it, you can do whatever you want with the data; whether that’s feeding scalar values in a Material Instance or driving some other game parameter!

The first thing you’ll want to do is set up a Source Effect of the Envelope Follower type:

Then you’ll want to put this effect in a Source Effect Chain. Source Effect Chains are pre-attenuation effect chains, processed in order of listing, that you attach to a source sound (like a SoundWave, a SoundCue, a Synth, or whatever you like).

Once created, you’ll want to attach it to your source sound like so:

Once you’ve established which sound you want to follow, in whatever actor you want to create the effect in, add a special component called the EnvelopeFollowerListener (this is a component that will link the Envelope Follower to a Delegate call in your BP). Additionally, you’ll want to add a reference to your Envelope Follower Source Sound Effect so you can link the two together.

Once you’ve added your Envelope Follower Source Effect Reference, you’ll want to make sure it’s referencing the correct asset:

Then you want to register your Envelope Follower Listener to listen to your Envelope Follower Source Effect–basically, this says, hey EnvelopeFollowerListener component, I want you to Listen to this specific EnvelopeFollower Source Effect. You can also unregister (which is awesome, because it means you can register and unregister based on blueprint logic).

Once you’ve registered them together, the On Envelope Follower Update event (which is bound to the EnvelopeFollowerListener component) will send out delegate information with float data you can use however you wish!

Question for the audio devs:

How hard is it to give that Phonon Geometry Component procedurally generated geometry, so geometry that is not a static mesh? Is it relatively simple to spawn a new Phonon Geometry Component at runtime and give it a vertex and index buffer or is that not possible?

I am not talking about “there’s a blueprint node for that” possible; I mean “after a few hours of searching through the code, making some private things public, or adding a few small functions to the component, it will work” possible. Is there a technical limitation that would make that not work?