A few 'newbie' questions about Unreal audio: orientation, in-cue concurrency, audio pathing

Hello, I’m working on a commercial Unreal FPS, and the team has asked me to advise on audio toolsets/engine choices. I’ve spent 12+ years using middleware and other proprietary engines/audio tools, but this is my first time kicking the tires on Unreal’s built-in audio resources. So I’m a newbie with vanilla Unreal audio tools, and would love some breadcrumbs :smiley:

How is orientation achieved? i.e., I have a gun and want to apply attenuation (maybe -6 dB) and some filtering when the listener is 180 degrees behind the emitter. How do you do this in Unreal? I see the attenuation cone shape, but it doesn’t seem to be intended for this exactly, and it doesn’t support user-entered attenuation and filtering values across angles. Is that correct? How are you all handling this?
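To make the question concrete, here is the behavior I'm after, sketched in plain C++ (the curve values are made up for illustration; in-engine I'd assume something like this driving UAudioComponent::SetVolumeMultiplier and SetLowPassFilterFrequency per tick, but I don't know if that's the idiomatic route):

```cpp
#include <cassert>
#include <cmath>

// Angle (degrees) between the emitter's forward vector and the direction
// from emitter to listener, both assumed normalized (2D for brevity).
double ListenerAngleDeg(double FwdX, double FwdY, double ToX, double ToY)
{
    double Dot = FwdX * ToX + FwdY * ToY;
    if (Dot > 1.0)  Dot = 1.0;   // guard acos against rounding
    if (Dot < -1.0) Dot = -1.0;
    return std::acos(Dot) * 180.0 / 3.14159265358979323846;
}

// Made-up two-point curve: 0 dB and the LPF wide open when the listener is
// in front (0 deg); -6 dB and a 4 kHz cutoff directly behind (180 deg);
// linearly interpolated in between.
double GainDbForAngle(double AngleDeg) { return -6.0 * (AngleDeg / 180.0); }
double LpfHzForAngle(double AngleDeg)  { return 20000.0 - 16000.0 * (AngleDeg / 180.0); }

// dB -> linear multiplier, e.g. for a volume-multiplier style API.
double DbToLinear(double Db) { return std::pow(10.0, Db / 20.0); }
```

In middleware I'd author that curve in a UI and the engine would evaluate it per voice; I'm asking whether vanilla Unreal has an equivalent, or whether everyone hand-rolls it like the above.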

I have layered gunfire within a Cue. I want the ‘shot’ layer to allow some stacking (max instances = 3) while the other layers stay at max instances = 1. Concurrency doesn’t look like it can be set WITHIN a Cue. Is that right? If so, aren’t I risking losing (or at least complicating) tight sync between gun layers? If I break these layers up into separate Cues, what do you do to ensure very tight (sample-accurate) sync between the Cues? (I believe layers within a single Cue are sample-accurate and rendered on one thread. Is that correct?)
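For clarity, here's a toy model (plain C++, nothing Unreal-specific) of the per-layer behavior I want from concurrency inside one Cue: each layer has its own max-instances cap, and when a layer is full the oldest voice in that layer gets stolen:

```cpp
#include <cassert>
#include <cstddef>
#include <deque>

// Toy per-layer max-instances model (hypothetical; this is the behavior
// I'd want from a per-layer concurrency setting inside one Cue).
// Start() registers a new voice; if the layer is already full it "steals"
// the oldest voice and returns that voice's id, otherwise it returns -1.
struct LayerConcurrency
{
    std::size_t MaxInstances;
    std::deque<int> Active;   // voice ids, oldest first

    int Start(int VoiceId)
    {
        int Stolen = -1;
        if (Active.size() >= MaxInstances)
        {
            Stolen = Active.front();
            Active.pop_front();
        }
        Active.push_back(VoiceId);
        return Stolen;
    }
};
```

So the ‘shot’ layer would run with MaxInstances = 3 and the tails/mech layers with MaxInstances = 1, all while staying sample-locked to the same trigger. That's the part I can't see how to express in a single Cue.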

I’m having a hard time researching audio pathing and occlusion. I’m familiar with the basic built-in trace-based occlusion, but it’s a little limited for our needs, so we’re looking at a pathing solution similar to Overwatch’s ‘path diversion’ approach. Is there an existing pathing system I can piggyback on? Would AI / Nav Mesh suffice?
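In case it helps frame the question, the core of the path-diversion idea is just: query a path between emitter and listener, compare its length to the straight-line distance, and attenuate by the excess. A minimal sketch in plain C++ (the dB-per-unit mapping is made up; in Unreal I'd guess the path points could come from a nav query like UNavigationSystemV1::FindPathToLocationSynchronously, but I haven't tried it):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { double X, Y, Z; };

double Dist(const Vec3& A, const Vec3& B)
{
    double dx = A.X - B.X, dy = A.Y - B.Y, dz = A.Z - B.Z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Total length along a poly-line path (e.g. the corner points a nav
// query returns).
double PathLength(const std::vector<Vec3>& Points)
{
    double Len = 0.0;
    for (std::size_t i = 1; i < Points.size(); ++i)
        Len += Dist(Points[i - 1], Points[i]);
    return Len;
}

// Hypothetical mapping: attenuate by DbPerExtraUnit for every unit the
// nav path is longer than the straight line -- the "diversion".
double DiversionDb(double DirectDist, double PathDist, double DbPerExtraUnit)
{
    double Extra = PathDist - DirectDist;
    return (Extra > 0.0) ? -Extra * DbPerExtraUnit : 0.0;
}
```

(You'd also typically re-aim the apparent emitter position toward the first path corner, the "portal," so sound appears to come around the doorway rather than through the wall.) My worry is whether the nav mesh is a reasonable proxy for acoustic openings, since it only covers walkable space.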

Lastly, I’m digging Quartz. It looks like a great system for fast, dynamic fire rates. I worked on another commercial FPS that had a very similar proprietary system that worked great for automatic gunfire. I see Quartz is pretty young. How has your experience been with it? Ready for prime time?