Different ways to play a sound

I saw a few ways to play a sound in BP:

  • Play sound at location
  • Spawn sound at location/attached
  • Add Audio component and call Play() on it

When to use which method?


I know this is an old post, but I was trying to find out the answer to this exact question and am sad to see nobody weighed in. Bump!

The question can't really be answered without going through the entire audio pipeline for a project, so the answer has to be compressed a bit: in most cases, use the one that gets you what you want.

Also, maybe make a new question about this topic (just be more specific) instead of bumping a question from a year ago.

The only thing I can say on this is that I would use an Audio component if I wanted the sound to loop, like an engine maybe. Sometimes you also want to pre-load large audio files, since spawning them causes the audio to be instanced at run time, probably causing a slight hitch while it loads (if it's large).

Well, hope that helped somewhat.

It’s actually pretty simple; the functions below are ordered from simplest to most complex in terms of functionality. I’m giving examples, but they’re not necessarily ones I recommend as good design choices; they only serve as illustrations!

  • Play sound at location: “Spawns a sound at the given location. This does not travel with any actor. Replication is also not handled at this point.” Meaning it plays a sound at a fixed location in the world and then dies forever; in the case of a multiplayer game, only the player issuing it will be able to hear it. More precisely in terms of code: no AudioComponent gets created at all. Think an explosion at a fixed position in a single-player game.
  • Spawn sound at location: “Spawns a sound at the given location. This does not travel with any actor. Replication is also not handled at this point.” This does pretty much the same thing as above, except that it does it through an AudioComponent, so you can use it to interact with the sound (change pitch/low pass/etc., have a delegate called when the sound is finished, and so on). You may also be able to move it around, but note that it’s not attached to any actual actor, so you cannot use it to follow e.g. a pawn (it’s technically attached to a “transient” actor). Once again, it’s a purely “local” sound. I struggle to find a real-world example here.
  • Spawn sound attached: “Plays a sound attached to and following the specified component. This is a fire and forget sound. Replication is also not handled at this point…” Same as the above, but you specify a component to attach the sound to, so it automatically follows it. Think a rocket with a whoosh sound.
  • There are numerous variants for 2D sounds (UI, 2D games…), but they follow the same pattern.
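For anyone doing this from C++ rather than BP, the three variants above map onto `UGameplayStatics` calls. A rough sketch (this is engine code, so it only compiles inside an Unreal project; the actor class and sound parameter names are mine):

```cpp
#include "Kismet/GameplayStatics.h"
#include "Components/AudioComponent.h"

void AMyActor::PlayExamples(USoundBase* ExplosionSound, USoundBase* WhooshSound)
{
    // Play Sound at Location: fire and forget. No AudioComponent is created,
    // and nothing is returned that you could interact with afterwards.
    UGameplayStatics::PlaySoundAtLocation(this, ExplosionSound, GetActorLocation());

    // Spawn Sound at Location: same idea, but you get an AudioComponent back
    // (owned by a transient actor, not by you) that you can manipulate.
    UAudioComponent* Spawned =
        UGameplayStatics::SpawnSoundAtLocation(this, ExplosionSound, GetActorLocation());
    if (Spawned)
    {
        Spawned->SetPitchMultiplier(0.8f);
    }

    // Spawn Sound Attached: the returned AudioComponent is attached to the
    // given component and follows it around.
    UGameplayStatics::SpawnSoundAttached(WhooshSound, GetRootComponent());
}
```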

The alternative is to have an AudioComponent attached to an actor and play a sound on it. If you do have an actor and you specifically need to play a sound relative to it (probably 80% of its usage in most games), that’s the best approach, as it makes ownership of the sound much clearer: its management is the responsibility of the actor it belongs to. Even for some not-so-obvious extradiegetic cases such as music or a commentator, it’s probably easier to have a manager actor handle them through AudioComponents.
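A minimal sketch of that ownership pattern in C++ (again, Unreal code that only compiles in-engine; the class and member names here are hypothetical, and this also covers the looping-engine-sound case mentioned earlier):

```cpp
#include "GameFramework/Actor.h"
#include "Components/AudioComponent.h"

UCLASS()
class AVehicle : public AActor
{
    GENERATED_BODY()

public:
    AVehicle()
    {
        RootComponent = CreateDefaultSubobject<USceneComponent>(TEXT("Root"));

        // The actor owns the component, so the sound's lifetime and position
        // are tied to the actor's.
        EngineAudio = CreateDefaultSubobject<UAudioComponent>(TEXT("EngineAudio"));
        EngineAudio->SetupAttachment(RootComponent);
        EngineAudio->bAutoActivate = false; // start it explicitly
    }

    void StartEngine() { EngineAudio->Play(); }
    void StopEngine()  { EngineAudio->Stop(); }

    // Because we hold the component, a looping sound can be manipulated
    // while it plays, e.g. pitching the engine up with RPM.
    void SetNormalizedRPM(float RPM)
    {
        EngineAudio->SetPitchMultiplier(FMath::Lerp(0.8f, 1.6f, RPM));
    }

protected:
    // Assign a looping USoundBase asset to this component in the editor.
    UPROPERTY(VisibleAnywhere, Category = "Audio")
    UAudioComponent* EngineAudio;
};
```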

For bigger games where you do care about multiple sounds playing at the same time, you’re probably also better off staying away from fire-and-forget sounds, which, being oblivious to each other, may stack up and drown your mix while making it harder to debug.


Ran into some trouble tonight with sounds, and this helped me grasp very clearly what’s going on under the hood. Thanks for sharing.