Source Bus vs Sound Class?

Is the Source Bus intended to replace Sound Class? Or are they different?
Can anyone give a use case for each and explain the advantages/disadvantages? Or, if there is a place where all of the features of the new audio system (including how it interacts with the old system) are clearly documented with use cases/examples/example projects, that would be the most amazing thing possibly ever in the entire universe (no hyperbole there, i swear).
thanks.

Sound Classes are an arbitrary LOGICAL classification of sounds allowing you to perform logical mixing operations (attenuating volume or adjusting pitch en masse). There are a lot of other parameters that are set on Sound Classes, which results in a weird conflation of concerns. Sound Class mixing actions are performed via Sound Mixes.

If you were to draw an analogy to your DAW, Sound Classes and Sound Mixes are like VCA groups. They enable automation of your sources' logical volume en masse.
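To make the VCA analogy concrete, here's a minimal conceptual sketch in Python (illustrative only, not the Unreal API; all names are made up): a Sound Class is just a tag on each source, and pushing a Sound Mix scales the logical volume of every tagged source. Crucially, no audio is ever summed — each source still renders independently.

```python
# Conceptual sketch of Sound Class / Sound Mix behaviour (illustrative,
# not the Unreal API): a "class" is a tag, and a "mix" scales the logical
# volume of every member of that class. No buffers are mixed here.

sources = [
    {"name": "gunshot", "sound_class": "SFX",   "base_volume": 1.0},
    {"name": "dialog",  "sound_class": "Voice", "base_volume": 0.8},
    {"name": "impact",  "sound_class": "SFX",   "base_volume": 0.6},
]

def apply_sound_mix(sources, target_class, volume_scale):
    """Attenuate every source in a class at once, like pushing a Sound Mix."""
    return [
        {**s, "effective_volume": s["base_volume"] * volume_scale}
        if s["sound_class"] == target_class
        else {**s, "effective_volume": s["base_volume"]}
        for s in sources
    ]

# Duck all SFX to half volume; the Voice source is untouched.
for s in apply_sound_mix(sources, "SFX", 0.5):
    print(s["name"], round(s["effective_volume"], 3))
```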

Source Buses are literally buses that you send actual audio to and from; they let you mix multiple sources together (literally rendering them into a single buffer of audio), and they let you treat the result as if it were a source in the Level, so you can spatialize it, apply source effects to it, etc.

The DAW analogy of this would be like an Aux Bus that has its own fader and pan pot.
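And here's the contrast with a Source Bus, sketched the same way (again illustrative Python, not the Unreal API): the sends from multiple sources are literally summed, sample by sample, into a single buffer, and that buffer can then be processed as if it were one source.

```python
# Conceptual sketch of a Source Bus (illustrative, not the Unreal API):
# per-source sends are summed into one bus buffer, which can then be
# treated as a single source (spatialized, run through source effects).

def mix_to_bus(source_buffers, send_levels):
    """Sum each source's send into a single bus buffer."""
    bus = [0.0] * len(source_buffers[0])
    for buf, send in zip(source_buffers, send_levels):
        for i, sample in enumerate(buf):
            bus[i] += sample * send
    return bus

def apply_gain(buffer, gain):
    """Stand-in for a source effect applied to the bus as a whole."""
    return [sample * gain for sample in buffer]

a = [0.2, 0.4, -0.1]
b = [0.1, -0.2, 0.3]
bus = mix_to_bus([a, b], send_levels=[1.0, 0.5])  # ≈ [0.25, 0.3, 0.05]
print(apply_gain(bus, 0.8))
```

The key difference from the Sound Class sketch above: here the individual sources are gone after the sum — only the combined buffer exists downstream, which is exactly why you can spatialize it as one thing.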


ok, that makes sense. thank you, dan!

Hi Dan. Could you walk me (us) through a real-world case: why/when/how you would use Sound Classes to solve X, and why/when/how you would use submixes? I still have a hard time wrapping my head around the whole thing.
I've been doing audio in UE4 for the past 6 years, but I'm starting a new project in UE5 and would love to know how I should approach these new tools (submixes vs. Sound Classes). Thanks.

I’m having real problems getting a real acoustic model up and running in Unreal 5.4.2 on Mac.

A bit like the original post, it is unclear where legacy and new audio processes/functionality overlap: what is duplicated, what is deprecated (if any).

I have a large space next to a small space with very different reverbs, and a couple of loud sounds in the large space. There is an opening between the two spaces that the player can walk through. One of the two sounds in the large space is close to this opening (thus the D/R ratio is vastly in favour of the dry signal for this source) and the other is far off (thus the D/R ratio is vastly in favour of the reverb).
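(For anyone following along, the D/R flip with distance can be sketched in a few lines of Python — illustrative numbers only, not an Unreal calculation: the direct signal falls off roughly as 1/distance, while the diffuse reverb level in a room stays roughly constant, so the far source is reverb-dominated.)

```python
# Why the D/R (direct-to-reverberant) ratio flips with distance
# (illustrative only): dry level ~ 1/distance, reverb level ~ constant.

def direct_level(distance, ref_distance=1.0):
    """Inverse-distance attenuation of the dry signal."""
    return ref_distance / max(distance, ref_distance)

REVERB_LEVEL = 0.1  # roughly constant diffuse field in the room

for d in (2.0, 40.0):  # source near the opening vs. far across the room
    print(f"distance {d:>5}: D/R = {direct_level(d) / REVERB_LEVEL:.2f}")
```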

On the side of the opening in the large space I can set up the correct acoustic model using Audio Gameplay Volumes and appropriate submix settings:

  1. a submix each for the dry send level and the wet send level, both active while the listener is inside the volume
  2. the wet submix has an IR-based reverb for the large space.

This works well when inside the space, but walk through the door and in reality I should hear the source sounds with appropriate attenuations (set in the sounds themselves, be they Sound Cues or MetaSound-based) AND the reverb of that larger space.

I am now outside the volume, but on this boundary (just inside the smaller space) I should hear both sets of reverbs: the reverb of the smaller room should be applied to the sound entering through the opening (with its embedded reverb).

I am now trying to get this to work by sending the audio of the larger space overall to a submix endpoint or similar, in order to "re"-play the composite sound at the opening into the smaller space, so I get the reverb from the larger space as well as that from the smaller space.

I am having difficulty accessing the combined mixed audio from the wet and dry submixes such that I can feed it to something like an endpoint to play at the opening into the smaller room.

Even if I did do this, I think it will be challenging to get the spatialisation and level control (attenuation) right for this new "repeater" sound source. For example, the attenuation curve for the distant loud source should, by rights, continue to fall as if unobstructed (ignoring occlusion effects for the moment, which add yet a further complication in accurately representing the acoustics).

Is there a solution that is not so crazily complex that it is worthless trying to implement it?

I've tried using the same Audio Gameplay Volume submix send volumes as above, but for the listener outside the space, and with a fresh bussing system for the smaller space (they sort of invert each other when the listener is outside their respective space). This improves things, but is still not right.

I have a submix graph with the hierarchy as attached, and ideally I'd like to take a feed out of each of the room_mix submixes which I can then feed to endpoints at the opening between the rooms, but because of the underlying audio system this gives a warning about a loop and will not allow me to do it.

Can anyone advise what needs to be done to solve this problem?