Unreal Engine 4.16 Preview

So the Steam Audio occlusion uses an OBJ mesh? That would be nice; creating an OBJ from the mesh shouldn’t be a problem, unless it needs to be set up in some specific way, like if the mesh is required to be convex.

Awesome, thx… managed to put together a quick test with my Vive controllers. Maaan, this is gonna be fun once I figure out what does what hehe :smiley:

https://www.youtube.com/watch?v=FR6wka9-HIg

Hi drunkenmaster!

Totally agree! @Minus_Kelvin may wish to expand on this, but I can say we already have streaming on Windows and PS4, and more platforms are currently in the works, including mobile platforms!

The OBJ should currently appear in your Content folder; you could probably investigate from there! :smiley:

YES! Awesome!

Osc Frequency Mod modulates the frequency from the base value of the designated Oscillator (and there are two, Osc 0 and Osc 1).

For my 4.15 Audio Engine Sneak Peek (the February stream), I demonstrated a synth where I altered the frequency of 8 oscillators (4 Modular Synths) to get that rich movie theater logo sound.
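(For anyone who wants to poke at this from C++ rather than Blueprints, here’s a rough sketch of nudging both oscillators’ frequency mod on a single Modular Synth. The UModularSynthComponent class, the SetOscFrequencyMod/NoteOn calls and the header path are assumptions based on the 4.16 Synthesis plugin, so verify them against the plugin source before relying on this.)

```cpp
// Rough sketch only: assumes the Synthesis plugin's UModularSynthComponent
// exposes SetOscFrequencyMod(OscIndex, FreqMod) and NoteOn(Note, Velocity);
// check EpicSynth1Component.h in your engine/plugin version for the real API.
#include "SynthComponents/EpicSynth1Component.h" // header path is an assumption

static void StartDetunedPad(UModularSynthComponent* Synth)
{
	if (!Synth)
	{
		return;
	}

	Synth->Start();

	// Push Osc 0 and Osc 1 slightly apart from their base frequency for a
	// thicker, slowly beating pad; the actual mod values are arbitrary.
	Synth->SetOscFrequencyMod(0, -0.05f);
	Synth->SetOscFrequencyMod(1,  0.05f);

	// MIDI-style note number and velocity.
	Synth->NoteOn(36.0f, 127);
}
```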

Yes, audio streaming is working on all platforms (finally). I sorta didn’t want to trumpet it since it really should have been supported ages ago.

We’re using audio streaming on our internal projects, and I recently spent a few weeks fixing a ton of random threading bugs (due to collisions between a new asynchronous IO system, the audio thread, and the audio rendering thread). It’s still technically only visible in the “experimental” options in the editor, but at this point that’s mostly because I forgot to make it visible for 4.16. I’ll make a JIRA so I don’t forget for 4.17.

This is wonderful news! Thank you so much, and for putting together all the great demos for the livestream and GDC talk. They did a great job showing off the new features and were fun to watch. (Shameless request: I’d totally be interested in that demo project with the UMG interface for a modular synth, if for nothing else than as a kind of interactive documentation of all the new synthesis features.)

Oh that is great news!! That’s huge, really, even if we need to wait for 4.17. All in all, this new audio engine is a great update and I’m sure it’s the result of a lot of hard work so thanks again!

I’m SO gonna watch that stream… I’m afraid I missed it. Do you happen to know where I can read up on what the parameters do and how they work? I assume there’s some mapping to MIDI stuff happening behind the scenes? (Sorry, I’m a total noob with synth stuff, just saw it and thought it was super cool.) Got so excited about it that I totally forgot what I was working on lol :smiley:

Searching the net for introductions to subtractive synthesis is probably a good start. It’s a technique that has been around for many decades now, with gazillions of examples, so there should be plenty of info.

I don’t have a great guide handy and the Wikipedia page on the subject is rather brief, but I will post a link to it anyway: https://en.wikipedia.org/wiki/Subtractive_synthesis

Edited to add that there are also very many YouTube videos introducing subtractive synthesis. It doesn’t matter that much which hardware or software synths they use to demonstrate the concepts; the knowledge is still highly applicable to the new stuff in 4.16 :slight_smile:

Just one example: https://www.youtube.com/watch?v=PqFWIdTFF3U
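(If it helps to see the core idea in code: a minimal, engine-free C++ sketch of subtractive synthesis, i.e. start from a harmonically rich waveform and then filter energy away. All of the numbers are arbitrary illustration values.)

```cpp
// Minimal illustration of the subtractive idea: generate a harmonically rich
// sawtooth, then "subtract" the highs with a simple one-pole low-pass filter.
// Plain C++, no engine code - just the concept.
#include <cmath>
#include <cstdio>
#include <vector>

int main()
{
	const float SampleRate = 48000.0f;
	const float OscFreq    = 110.0f;   // A2 sawtooth
	const float CutoffHz   = 800.0f;   // low-pass cutoff

	// One-pole low-pass coefficient (simple RC-style smoothing).
	const float Alpha = 1.0f - std::exp(-2.0f * 3.14159265f * CutoffHz / SampleRate);

	float Phase = 0.0f;
	float Filtered = 0.0f;
	std::vector<float> Output;

	for (int i = 0; i < 48000; ++i)
	{
		// Naive sawtooth in [-1, 1) - rich in harmonics.
		const float Saw = 2.0f * Phase - 1.0f;
		Phase += OscFreq / SampleRate;
		if (Phase >= 1.0f) { Phase -= 1.0f; }

		// Subtractive step: the filter removes energy above the cutoff.
		Filtered += Alpha * (Saw - Filtered);
		Output.push_back(Filtered);
	}

	std::printf("Generated %zu filtered samples\n", Output.size());
	return 0;
}
```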

Also worth noting there is a thread for the new audio engine that started back in March. As the thread goes on, some useful info & screenshots start to emerge, e.g. manipulating Source Effects using Blueprints.

Concerning the new “Unified Console Commands across VR platforms” feature… the old “HMD mirror mode” and other commands are now deprecated. Could you please give a brief overview of the new commands and parameters? I’d like to have a 1920x1080 single-eye fullscreen mirror for use with the HTC Vive.

I’m playing with the new FFT bloom. Everything looks really blurry and none of the bloom settings seem enabled. How are you supposed to use it properly?
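(Not an official answer, but for reference, here’s a rough C++ sketch of switching a post process volume over to the FFT/convolution bloom. The property names BloomMethod, BloomConvolutionTexture and their bOverride_ flags are assumptions based on the 4.16 post process settings, and the convolution path needs a kernel texture to look like anything other than blur, so double-check against Scene.h.)

```cpp
// Sketch only: member names below (BloomMethod, BloomConvolutionTexture,
// bOverride_ flags) are assumed from the 4.16 FPostProcessSettings - verify
// them in Engine/Classes/Engine/Scene.h before use.
#include "Engine/PostProcessVolume.h"
#include "Engine/Texture2D.h"

static void EnableConvolutionBloom(APostProcessVolume* Volume, UTexture2D* KernelTexture)
{
	if (!Volume || !KernelTexture)
	{
		return;
	}

	FPostProcessSettings& Settings = Volume->Settings;

	// Switch from the standard (sum of Gaussians) bloom to the FFT convolution bloom.
	Settings.bOverride_BloomMethod = true;
	Settings.BloomMethod = BM_FFT;

	// Convolution bloom needs a kernel image (e.g. a photographed starburst/flare);
	// without one the result tends to read as an overall blur.
	Settings.bOverride_BloomConvolutionTexture = true;
	Settings.BloomConvolutionTexture = KernelTexture;

	// Regular bloom intensity still applies on top of the kernel.
	Settings.bOverride_BloomIntensity = true;
	Settings.BloomIntensity = 0.675f;
}
```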

OMG, I have been waiting for this sound engine for 2 years; it’s the whole reason I learned UE to begin with! I am buying an HTC Vive tomorrow and looking forward to delving into the next frontier of music and sound generation.

Are you by chance using either widget components or shaders that use “Pixel Depth Offset”? I just found an issue that causes the driver to hang like this and recently submitted repros for both to Epic and NVIDIA.

I use a widget component on my characters for a health bar, but I don’t think that’s it; there is a post above where I listed what crashes it.

Yeah, probably unrelated to what I found, even though the end result is the same.

Hey,

We export an OBJ, but the .phononscene is the file that is used for occlusion and reverb. The OBJ file is more of a convenience thing in case you want to verify the geometry. Also, we don’t make any assumptions on geometry - we operate on triangle soup.

@.reynolds
When I run UHT in VS2017, it comes up with an error message in the output window like “Unable to find field ‘ExternalDependenciesFile’ in ‘{manifest root}’”.
I saw it was caused by this code: GetJsonFieldValue(Result.ExternalDependenciesFile, RootObject, TEXT("ExternalDependenciesFile"), TEXT("{manifest root}"));
Then UHT quits. How can I solve this problem? Thanks.

@ben-marsh
Do I need to put a key-value pair in the UHTDebugging.manifest file, or is there some other way to deal with it?

What do you need to do if you want to use the overall sound mix you hear, or the audio from a particular plugin like the web browser, for example?

I’m trying to take the audio from the web browser plugin and have it affect a material.