
Steam Audio vs Resonance vs Oculus Audio vs UE4 Audio

Any opinions on which is best for spatialization, occlusion, reverb?
How do they compare to stock UE4?

Are there any test maps available comparing the above? If not, would it be possible for Epic to make some available that demonstrate the features and how the tech is intended to be used? (For example, the built-in reverb demonstrated in some of the UE4 audio presentations sounds pretty great, but I would love to be able to pull those test maps down to get a better understanding of how to use it.)

Which is best for ease of use/iteration speed (a.k.a. workflow for a sound designer)?

Can you mix and match at all? E.g., use Steam’s plugin, but also (or instead) use UE4’s built-in reverb? If so, what would be the best way to do that?
Ideally, there would be either one or a combination of a few that could then work on all platforms… almost kind of an “aggregate device” of plugin functionality.

Also, is it worth it? In other words, are the gains (overall sound quality) from enabling the new tech and the extra authoring/testing involved worth the cost in time?

We are currently leaning towards Steam Audio, but would love to hear opinions and any examples/test maps that are available to pull down and check out.

Thanks!

All of these are really great questions to ask and I’ll leave it to the community to offer their impressions.

The first question to ask yourself is how important spatialization is to your game experience. What are your goals, and how will spatialization feature in your project?

Steam Audio uses game geometry to build acoustic impulse responses via physically based simulation, which means that its goals lean toward creating an acoustic simulation.

Google Resonance’s goal is to bring binaural audio experiences to mobile devices, so it aims for fixed costs on the processes that normally explode per-source CPU budgets.

Oculus straddles the middle ground. Their goals focus more on simulating spatialization; rather than physically simulated acoustic environments, they favor algorithmic approximations of spaces.

Because of these differing goals, all of their spatialization algorithms will sound different, and certain features may not be available on some of them.

Finally, not all plugins return the audio back to Unreal, so in some cases it will be difficult to mix and match certain features; in other cases, some features depend on other features being activated down the signal stream; in yet other cases, mixing and matching is totally fine.

Platform is another consideration. Not all plugins support all of the other platforms, some platforms don’t currently have anything, and some platforms have exclusive systems.

So my recommendation is to look at the project itself: what are you trying to achieve in terms of spatialization, occlusion, reverb, etc.?

The built-in panning and occlusion may be simple, but they are low-cost and effective in most use cases. The built-in reverb sounds really great.

As far as demo maps, I’m working on it. :slight_smile:

This project is starting, platform-wise, on Windows/Steam, and may end up on PS4 down the road.

It’s a 1st-person game, and has a lot to do with scale and being inside or outside objects (like, for example, a tree).

The potential flexibility of Steam Audio seems to meet our needs, but I haven’t been able to get Occlusion to work properly, despite going step by step through this:
https://github.com/ValveSoftware/steam-audio/releases/download/v2.0-beta.10/steamaudio_unreal_manual_2.0-beta.10.pdf

Due to the extra layers of authoring, the (so far) inability to perceive an increase in quality, and a potentially heavy CPU hit if we end up doing real-time processing, we may end up back with stock UE4. The obstruction/occlusion and the potential for IR-based reverbs are the alluring parts. If UE4 had an easy way to do Portals (a la Wwise), we’d probably just stick with stock… but ultimately, it comes down to whatever fits within tech limitations, sounds best, and has the easiest workflow.

Do you know if there is other documentation for Oculus specifically for UE4?
https://developer.oculus.com/documentation/audiosdk/latest/concepts/book-audio-intro/

Most of the stuff I am finding says “use FMOD or Wwise” (which is not terribly helpful) and doesn’t have walk-throughs like Steam Audio and Google Resonance do.

Thanks for your reply/thoughts, Dan. Much appreciated!
Oh, and very much looking forward to those demo maps! Will they appear as a sticky here on this forum?
-Sam

In one form or another either a sticky or a sticky link.

If you’re going to PSVR, then you should know that PSVR has yet another spatialization algorithm. Just to add to your dizzying confusion. :slight_smile:

So far, my biggest challenge in creating a demo map for Steam Audio has been balancing the perf costs along with the VERY long bake times (making iteration slow going).

The updates to Oculus are new, I haven’t had much of a chance to play with them yet, so I’ll be going off this documentation and what I can learn on my own:
https://developer.oculus.com/documentation/unreal/latest/concepts/unreal-engine/

As far as Portals, it’s an interesting problem. We recently released a feature called Source Buses which allows you to route or send source audio through a Source Bus which acts like a source in the world.

So you could potentially send all the audio of an interior to a Source Bus that you situate at a door space or window or something and then route the audio through that and just set up some kind of attenuation curve that respects the idea that once you’re at the Source Bus you can’t hear it.
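To make the idea concrete, here is a rough sketch of that kind of attenuation curve in plain Python. This is not UE4 API code: the function name, the crossfade distance, and the linear falloff shape are all made up for illustration.

```python
def portal_bus_gain(d, crossfade, max_radius):
    """Gain for a Source Bus standing in for interior audio at a doorway.

    0.0 right at the portal (up close you'd hear the real interior
    sources, not the bus), ramping to full level by `crossfade`,
    then falling off linearly out to `max_radius`.
    """
    if d <= crossfade:
        return d / crossfade          # fade the bus out as you reach the door
    if d >= max_radius:
        return 0.0                    # too far away to hear the interior
    return 1.0 - (d - crossfade) / (max_radius - crossfade)

# At the door the bus is silent; partway out it sits between 0 and 1.
print(portal_bus_gain(0.0, 2.0, 10.0))   # 0.0
print(portal_bus_gain(6.0, 2.0, 10.0))   # 0.5
```

In the engine itself this shape would presumably be authored as a custom attenuation curve on the Source Bus rather than computed by hand.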

I’ve been meaning to do experiments along these lines, but there’s only so much time for me to do things in a day.

So, I noticed in the Steam Audio thread that they do not currently support ambisonics. In 4.19, you added support for 1st order _ambix/_fuma, so would it be possible to use 4.19’s ability to play back ambisonics alongside Steam Audio (and if so, how)? Or is it a case of one or the other? Also, you may have noticed in my other thread that setting your ambisonics to stream immediately crashes the editor for me.

I’d imagine you guys are spread pretty thin, but you have pulled the audio engine a LONG way forward from what it was and that is much appreciated/exciting to see develop.

We built the file handling and submixing as a support for Google Resonance, but we’ve considered building a native decoder. So the idea is on our radar, for sure!

Adding to the mix…

I’ve built a real-time sound occlusion system (link in my signature) which honestly is pure awesomeness ^^

It can do things with physical materials that you cannot achieve with Steam Audio:

https://youtu.be/cFwthqyRyxk

Hey birdjunk / everyone,

just thought I’d chime in here 'cos I’ve been looking at the various options of Steam Audio vs native UE4 vs Google Resonance vs Oculus in the context of developing for VR (Windows only).

I’ve gotta say, I feel everyone’s pain at the moment regarding a) the choice of options, b) the fact that they are all fairly new and as yet not properly tested plug-ins, and c) the fact that documentation / examples / tutorials are few and far between or, in some cases, inaccurate.

Here is what I’ve found so far which I’m happy to share:

Spatialization:
Steam Audio was my favourite. It seemed to sound better than native UE4 after side-by-side testing: a better sense of direction overall, which led to a more immersive experience. I haven’t looked at Google Resonance yet; it’s next on my list. Oculus I can’t comment on too much, other than the fact that I haven’t used it in a while. Last time I did, it was from within FMOD Studio, and the plug-in made things sound weird and quite phase-y. It’s probably improved since then…?

Occlusion:
Native UE4: Whilst I appreciate the built-in Occlusion in UE4 is efficient, it seems a bit too basic for my needs. I really want partial occlusion and transmission. I did use it in some levels though and it seemed to work OK for what it is. At least it is easy to use / set up. It would be cool if native UE4 supported geometry-specific audio occlusion characteristics out of the box, in the same way that Steam Audio has Phonon Geometry / Materials.
Steam Audio Occlusion: I really like how this sounds. I think that the material defaults they give you are a bit off but once you start tweaking those custom values (transmission frequencies etc) it sounds great. When used alongside the sound propagation / Reverb settings for indirect sound it could be a killer combination. My worry is that I have not yet been able to profile how expensive this is at run time, and in VR we are getting hammered from all sides in terms of performance.
Steam Audio Propagation / Reverb baking. Has anyone got this working properly yet? As Dan says in another thread, the bake times seem very long, but worse than that, I am getting inconsistent results after baking the data, e.g., sometimes the reverb goes crazy. I would hope that it would sound the same or very similar to the settings when they are ‘real time’ but this is not the case.
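For anyone wondering what partial occlusion buys you over the binary kind, here is a toy sketch in plain Python. This is not Steam Audio’s actual algorithm; the ray counts, the 0.8 scaling, and the filter numbers are invented for illustration. The idea: cast several rays from listener to source, then drive volume and a lowpass filter from the fraction that are blocked instead of gating the sound entirely.

```python
def occlusion_fraction(rays_blocked, rays_total):
    """Fraction of sample rays blocked by geometry: 0.0 = clear path,
    1.0 = fully occluded."""
    return rays_blocked / rays_total

def apply_partial_occlusion(volume, lowpass_hz, occ, min_lowpass_hz=800.0):
    """Scale volume and close down a lowpass filter in proportion to the
    occlusion fraction, instead of the all-or-nothing mute you get from
    a single-ray test."""
    occluded_volume = volume * (1.0 - 0.8 * occ)  # never fully silent
    occluded_lowpass = lowpass_hz - (lowpass_hz - min_lowpass_hz) * occ
    return occluded_volume, occluded_lowpass

# Half the rays blocked: the sound is quieter and duller, but still audible.
occ = occlusion_fraction(4, 8)
print(apply_partial_occlusion(1.0, 20000.0, occ))
```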

I’d be very interested to hear from anyone who is using Steam Audio for everything and having a smooth, trouble-free time!

Reverb:
Native UE4: New reverb sounds great, but as far as I am aware, it is not dynamic and needs to be changed manually, according to the space. Please correct me if I’m wrong.
Steam Audio: Have talked about this a bit above. The reverb I have heard sounds OK but I have not had the chance to test out any longer reverbs. However, the effect of having the reverb change with geometry is pretty darn cool (e.g. firing a gun while up against the wall and getting a bottom end boost in the reverb). Shame you can only choose one reverb as part of the project settings! Steam audio seems more flexible for room response, UE4 seems better for traditional, more spacious reverbs.

BTW, I’m new here as I’ve recently made the switch to native UE4 audio. Before that I mainly worked in FMOD so… hi!

Cheers,

PS

@dan.reynolds, you wrote: “Finally, not all plugins will return the audio back to Unreal, so in some cases, it will be difficult to mix and match certain features; in other cases, some features depend on other features to be activated down the signal stream; in yet other cases, mixing and matching is totally fine.”

Are there any more details of which plug-ins do not return the audio to Unreal, and which ones do?

@bookerTjones, were you able to get Steam’s occlusion to partially occlude? For me, if you walked behind a wall, no matter how I had the custom values set, it would occlude the sound 100 percent. Also, I haven’t been able to get it to do much in the way of obstruction, despite the multiple raycasts which, I would think, would allow for that.

I think you may be able to handle the built-in reverb with a combination of the submix and the attached picture of the node in Blueprint, but Dan would know better.

At the moment, we are moving towards using Resonance (or just sticking with the built-in solution) due to its spatialization sounding the best out of the box to my ears, it supposedly being efficient, and it properly playing back ambisonic files locked to the world.
Steam is powerful, but it has also been very inconsistent for me… and with its potential CPU overhead (baking being a time suck and a potential journey to hell authoring-wise), too risky.
Oculus, yes, still has that weird phasey/flangey thing going on, and I have no idea how to access most of its features (the documentation says to use Wwise/FMOD)… though I thought it might be user error.

Looking forward to answers to the questions you posted, and thanks for your thoughts!

re: Steam Audio’s Reverb and consistency

In my experience, a lot of this comes down to the probe placement and density. Your probes interpolate, so if you have probes in walls or doing weird things, sometimes you can get strange results. I suggest carefully placing your probes in your level.

re: Native Reverb

It is an algorithmic Plate style reverb. Using Audio Volumes, you can set a Master Reverb which will interp values as you transition from Volume to Volume. Additionally, Audio Volumes have Distance based send parameters which allow you to modulate the wet amount based on distance. The effect is quite convincing:

In-Game Reverb Demo from our 4.15 Preview Stream
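The distance-based send behaves roughly like a linear ramp between two send levels. Here is a plain-Python sketch; the parameter names are mine, not the actual Audio Volume property names.

```python
def reverb_send_level(d, min_dist, max_dist, min_send, max_send):
    """Wet (reverb send) amount as a function of source distance.

    Below min_dist you get min_send, beyond max_dist you get max_send,
    with a linear ramp in between -- so nearby sounds stay mostly dry
    while distant sounds wash into the room.
    """
    if d <= min_dist:
        return min_send
    if d >= max_dist:
        return max_send
    t = (d - min_dist) / (max_dist - min_dist)
    return min_send + t * (max_send - min_send)

# Halfway between the send distances, the wet amount sits halfway too.
print(reverb_send_level(550.0, 100.0, 1000.0, 0.1, 0.9))
```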

re: Resonance Economy, PSVR, and Signal Chain Considerations

As I understand it, Resonance mixes source audio to Ambisonics and then applies a single spatialization process to the mixed-down audio (this creates a fixed cost for one HRTF process instead of a per-source cost for multiple HRTF processes). In this way, spatialized Resonance audio cannot benefit from arbitrary submixing or submix effects (e.g. submix compression on some mixed sources). Another example would be PSVR’s A3D, which does its spatialization pass in hardware, so once you want to spatialize a sound, it leaves the Engine entirely, never to return.

For those cases, Aaron has implemented the option to split those sources off before spatialization so you can send them to the Master Reverb Submix, but there are obviously more complicated submix scenarios that would be impossible once you’ve decided to spatialize completely on hardware (dynamics in particular).
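The cost argument can be sketched with a toy model in plain Python. The cost numbers below are arbitrary illustrations, not measured figures: the per-source path pays one HRTF convolution per source, while the ambisonic path pays a cheap encode per source plus a single decode for the whole soundfield.

```python
def per_source_hrtf_cost(n_sources, hrtf_cost):
    """Naive binaural path: every source gets its own HRTF convolution,
    so cost grows linearly with the number of sources."""
    return n_sources * hrtf_cost

def ambisonic_mix_cost(n_sources, encode_cost, decode_cost):
    """Resonance-style path: each source pays a cheap ambisonic encode,
    and a single HRTF decode of the mixed soundfield is paid once,
    regardless of how many sources there are."""
    return n_sources * encode_cost + decode_cost

# With 32 sources and an HRTF pass 10x the price of an encode:
print(per_source_hrtf_cost(32, 10.0))     # 320.0
print(ambisonic_mix_cost(32, 1.0, 10.0))  # 42.0
```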

@birdjunk apologies for delayed response - my post was held up in moderation for a few days as it was my first one and I only just noticed the replies from you and Dan.

Yes, I was able to get Steam Audio to partially occlude, with effective results. I know this sounds obvious, but make sure that the Direct Occlusion Method is set to ‘Partial’ in the Phonon Occlusion Source Settings (it’s in the manual, but sometimes these things are easy to miss). My tests were conducted using Steam Audio v2.0 Beta 10 and UE 4.19.1, and I simply followed the guide step by step.

I was also able to get some good results by varying the ambisonics order of Indirect Sound in the Steam Audio plug-in settings menu. With a higher order, it was obvious that audio sources in my test level were bouncing off reflective surfaces more, sometimes in a slightly odd way but mostly in a cool manner :). The best results came when the Reverb simulation type was set to ‘Real Time’, which I’m 99.9% sure simply won’t be an option at run time, due to the massive increase in ray casts. So it seems that the whole Steam Audio thing will rely on baked data being reliable. @dan.reynolds thanks for the info re the UE4 reverb, I look forward to getting my hands dirty with it! Still a bit confused about probes, though: I tried using both ‘Uniform Floor’ and a single ‘Centroid’ probe, and the results were the same. I will continue to fiddle with it to see if I can get more reliable results.

Cheers.

So when you’re baking reverbs, you have a set of impulses generated during the bake process. Each set consists of either Source-to-Listener pairs (labeled by Source name) or Listener-to-Listener pairs (labeled by reverb), although Steam Audio groups probes by Probe Volume. Each individual probe is a single impulse generated during the baking phase. If you have a Probe Volume with a single Centroid probe, only one impulse is generated for that Probe Volume. If you use uniform distribution, then several impulses will likely be generated for that Probe Volume.

The result of using uniform distribution with a higher density is a more accurate acoustic model but at the cost of a much larger impulse set.
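A toy model of this trade-off in plain Python. This is not Steam Audio’s actual interpolation; the inverse-distance weighting and the radius check are assumptions for illustration.

```python
import math

def uniform_probe_count(width, depth, spacing):
    """Probes on a uniform floor grid: the count (and therefore the size
    of the baked impulse set) grows with the square of the density."""
    return math.ceil(width / spacing) * math.ceil(depth / spacing)

def interpolate_probes(listener, probes, radius):
    """Inverse-distance blend of the baked impulse gains whose radius
    covers the listener. If no probe reaches the listener, there is no
    indirect audio at all -- a dropout."""
    weighted = []
    for pos, impulse_gain in probes:
        d = math.dist(listener, pos)
        if d < radius:
            weighted.append((1.0 / (d + 1e-6), impulse_gain))
    total = sum(w for w, _ in weighted)
    if total == 0.0:
        return None  # outside every probe's radius: silence
    return sum(w * g for w, g in weighted) / total

# Halving the spacing quadruples the probe (impulse) count:
print(uniform_probe_count(20.0, 20.0, 2.0))  # 100
print(uniform_probe_count(20.0, 20.0, 1.0))  # 400
```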

@dan.reynolds are neighboring probes interpolated across probe volumes, or once you are within a probe volume you only get interpolation of probes within that volume?

What I found when trying to use uniform distribution on its own was that if my head was lowered below or raised above the height of the probes, there would be no indirect audio at all (tested by boosting the indirect contribution in, I think, the Steam Audio reverb plugin settings asset). I would have assumed that even if my head was near the floor, it would still get the bake from the probes above.

Would adding a separate volume with a centroid probe make it fall back to that instead?

(Edit: I got an answer from Freeman. With uniform floor probes, each probe’s radius is set based on its closest neighbor, and if you don’t have other probes in your level, you get a dropout when you leave the radius of any of them; he says this should be able to be filled in by overlapping sparser probes if you use multiple Probe Volumes.)

Also, has anyone figured out a workaround for the “only works on one level” issue with Steam Audio? It seems it stores the exported data for different levels in different files, but you are stuck with whichever level you load at game startup.