Thank you for the response; that makes sense, and after some testing we came to the same conclusion.
We are now just trying to have an open door/window, where the external audio is more or less occluded depending on your position relative to the opening. There is definitely occlusion happening, but there is a very abrupt jump in level at a certain point: it seems to go from heavy occlusion to no occlusion at all as you move from just outside the edge of the opening to standing directly in front of it.
I’m wondering if this has something to do with the fact that it’s a VR-based test, and maybe the traces are behaving differently? I have tried tweaking pretty much every parameter and there is no change in the behaviour described above. Hope this makes sense!
Can you provide a short video of this in action? Generally, audio should sound the same irrespective of whether you’re in VR. I’d expect partial occlusion with a larger radius is what you’ll need, although you may require indirect sound depending on the setup.
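As a rough illustration of why a larger radius helps (this is a sketch of the general idea, not the plugin’s actual code): partial occlusion samples visibility across the source’s extent rather than at a single point, so the occlusion value can ramp smoothly as more of the source becomes visible through the opening.

```cpp
// Minimal sketch of fractional occlusion via multi-sample visibility tests.
// blockedTest stands in for whatever line-of-sight/ray query the engine provides.
#include <cmath>
#include <functional>

struct Vec3 { float x, y, z; };

float ComputePartialOcclusion(const Vec3& listener, const Vec3& source,
                              float sourceRadius, int numSamples,
                              const std::function<bool(const Vec3&, const Vec3&)>& blockedTest)
{
    int blocked = 0;
    for (int i = 0; i < numSamples; ++i)
    {
        // Distribute sample points on a circle around the source, so part of the
        // source can be visible through the opening while the rest stays hidden.
        const float angle = 2.0f * 3.14159265f * static_cast<float>(i) / static_cast<float>(numSamples);
        const Vec3 sample { source.x + sourceRadius * std::cos(angle),
                            source.y + sourceRadius * std::sin(angle),
                            source.z };
        if (blockedTest(listener, sample))
        {
            ++blocked;
        }
    }
    // 0 = fully audible, 1 = fully occluded; attenuation is interpolated from this
    // fraction rather than switched on/off.
    return static_cast<float>(blocked) / static_cast<float>(numSamples);
}
```

With a single ray (or a radius of zero) the result is effectively binary, which matches the sudden jump you’re describing, so increasing the source radius and the number of occlusion samples should smooth the transition.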
Today we’re releasing Beta 10, which provides a bunch of useful improvements to the plugin. You can read the official announcement here, and check out the pull request here.
Hopefully, these changes will make their way into 4.19, but do give it a try early if you’re building from source. Any feedback greatly appreciated!
Changelist:
Changed probe batch finalization to handle large numbers of probes.
Fixed secondary ray generation bug that was causing significant energy loss.
Normalized data interpolation to smooth rendering of baked data.
Fixed ray leakage in indirect simulation.
Added Android support for spatialization and occlusion.
Added editor mode for easy access to Steam Audio functionality.
Added buttons to add/remove Phonon Geometry components to all static mesh actors.
Added button to export the scene to .obj for debugging.
Removed Phonon Scene actor in favor of export to file.
Probe generation and baking now write data to disk, avoiding slow loading and saving of map files.
Fixed incorrect material indices on scene export.
Added display of # of triangles in the Phonon scene.
That sounds very interesting. I am wondering, though, how many of the changes affect the fully dynamic side and how many relate only to the baked data? In other words, are there any improvements for a game where all geometry is procedurally generated?
I have been trying to get started with Phonon materials to simulate how different building materials let sound through. However, I can’t seem to wrap my head around how the workflow is supposed to work. I can change the material settings but the audio doesn’t change, and it seems like a mesh can automatically inherit the material settings from another mesh. Sometimes it seems like everything is working, and then when I change a material setting it falls apart. In short it’s a mess, but most likely I am missing something crucial.
Could someone explain when each of the following steps is needed?
Would it be possible to make a Getting Started Template for this? I’ve been fiddling with the plugin, but I can’t seem to get any results… Maybe a map for the Content Examples project?
Thanks for that. Can you point me to the plugin source file where you’ve included the glue into the UE4 raytracer? I’ve checked out the promoted UE4 branch. I’ve got a bunch of geometry floating in zero-g, so I can’t really use this at all unless there’s dynamic geometry support.
Any plans to utilize OptiX/CUDA or some other type of hardware ray-tracing hook-in?
Because this update to Steam Audio isn’t out yet either. We checked it into a dev branch, and it should be on the GitHub master branch now. It’s not in 4.18.
I was wondering if there are any plans to add ambisonic playback support anytime soon (ideally 1st, 2nd and 3rd order), including binaural decoding and soundfield rotations.
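To clarify what I mean by soundfield rotation: for first order it is just a rotation matrix applied to the directional channels while W stays untouched (higher orders need per-order rotation matrices). A rough standalone sketch of the idea, not tied to any particular plugin API:

```cpp
// Rotate a first-order B-format (W/X/Y/Z) buffer about the vertical axis.
// To compensate for listener head yaw you would rotate by the negative of the head angle.
#include <cmath>
#include <cstddef>
#include <vector>

struct AmbisonicBuffer
{
    std::vector<float> W, X, Y, Z;  // one float per sample frame in each channel
};

void RotateFirstOrderYaw(AmbisonicBuffer& buffer, float yawRadians)
{
    const float c = std::cos(yawRadians);
    const float s = std::sin(yawRadians);
    for (std::size_t i = 0; i < buffer.W.size(); ++i)
    {
        const float x = buffer.X[i];
        const float y = buffer.Y[i];
        buffer.X[i] = c * x - s * y;  // horizontal components rotate like a vector
        buffer.Y[i] = s * x + c * y;  // W and Z are unchanged by a yaw rotation
    }
}
```

Binaural decoding would then run on the rotated channels.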
@freeman_valve I just asked a question in a separate thread; dropping it here as well:
I’ve started implementing Steam Audio in my VR project. As it relies on Level Streaming, I have to make sure that the plugin works well with sublevels.
Apparently, it doesn’t. I tried “Export Scene” in each sublevel, but at runtime it only works for the Master Level (this applies to probe volumes too).
Is it supported at all? I haven’t seen anything about it in the documentation.
I hope there’s a workaround for this, because I can’t imagine not using Level Streaming, and baking the scene in the master level is not a solution as there are some serious overlaps between sublevels.
Is it known whether 4.19’s VOIPTalker/Spatialization works with Steam Audio? I had no luck getting it configured the other day; VOIP seems to play non-spatialized even after I register the VOIPTalker component with the player state. For context, I’m using OnlineSubsystemSteam as well.
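For reference, this is roughly how I’m registering it, written against the 4.19 VOIPTalker API as I understand it (the helper name and the VoiceAttenuation asset are just from my own project and may not match yours):

```cpp
#include "GameFramework/Character.h"
#include "GameFramework/PlayerState.h"
#include "Net/VoiceConfig.h"
#include "Sound/SoundAttenuation.h"

// Called once the pawn's PlayerState has replicated.
static void SetupSpatializedVoice(ACharacter* Character, USoundAttenuation* VoiceAttenuation)
{
    if (!Character || !Character->PlayerState)
    {
        return;
    }

    // Create a talker tied to this player's PlayerState (API assumed from Net/VoiceConfig.h).
    if (UVOIPTalker* Talker = UVOIPTalker::CreateTalkerForPlayer(Character->PlayerState))
    {
        // Attach the voice audio to the character and give it attenuation settings
        // with spatialization enabled, so the spatializer plugin has something to drive.
        Talker->Settings.ComponentToAttachTo = Character->GetRootComponent();
        Talker->Settings.AttenuationSettings = VoiceAttenuation;
    }
}
```

Even with this, the VOIP audio still comes through non-spatialized, which is why I’m wondering whether the Steam Audio spatializer is supposed to pick it up at all.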
In the documentation for 2.0 beta 10 (linked above), the Spatialization Algorithm is listed as SPATIALIZATION HRTF, and in 4.19 the only comparable setting I can find is listed as Spatialization Method “Binaural.” Is this just a format/naming change that doesn’t affect functionality, or is it an indication that I have something set up wrong? I ask because I haven’t been able to get the occlusion to work based on the walk-through given in the documentation. Thanks!
-Sam