Sound Design: What do you want to know?

Hi everyone,

I’m really passionate about sharing my knowledge of sound design and UE4, and I’m keen to know if there’s an appetite for sound design tutorials. If you saw this type of video on YouTube, what sort of topic would you want to see?

I’m planning to cover everything from the very basics of recording and editing your own sounds at home, all the way up to using those sounds in UE4. This would include everything from footsteps to weapons, dialogue to ambient sounds, music to interface sounds, etc. It would cover UE4 cues, mixing, reactive mixing, reverb editing, volume control via user menus and a lot more. I’d also like to include tutorials on both FMOD and Wwise (covering installation and coupling with UE4, all the way up to using that middleware to manage audio via Blueprints, etc.). I’m also thinking of taking existing free projects on the Marketplace and demonstrating how to change or add sounds to them.

A sort of one-stop shop that assumes no prior knowledge and which focuses on providing practical tutorials in bite-sized videos.

Thanks,

Ash @ Valkyrie Sound

I would say narrow it way down. Basics like recording at home have been covered to death by people on YouTube. Save the time and effort for more useful information :slight_smile:

I don’t know why people use stuff like FMOD. The engine seems perfectly capable on its own already; am I missing something? There is already a great tutorial on FMOD integration, but I have never seen anyone cover Wwise, so that might be worth doing.

As far as controlling sound in the engine goes, there are quite a few tutorials and example projects covering the basics, like changing volume and building an audio settings menu in your game. What they don’t cover though is stuff like cues, reverb and so on. The most useful thing you could do a tutorial on is how audio assets are handled within the engine: attenuation, reverb, occlusion etc. Also some less obvious things, like keeping the soundtrack of the game consistent in both quality and volume. Basically a best-practices guide for audio assets. Even though it’s been done many times, something on using audio to your full artistic advantage in your game could be great too. Everyone has a different perspective on that, or some additional bits of advice, so you really can’t have enough people talking about it.

I have a decent understanding of a lot of this, so for me I’d like to learn more about creating dynamic sound. Is that the right way to describe it? Basically using lots of individual audio assets to create a mix that changes based on what is happening in game. I know the engine can do a lot more now, but I haven’t tried actually creating audio with the engine, so that is still a mystery too. Those are a bit more advanced though, so maybe start with the other stuff lol.

Hey, thanks for your replies. Sorry for not replying myself until now - I thought I had email notifications on for this thread and I didn’t :’\

@TerrorMedia - yes, this is what I was afraid of, haha - spending hours and hours making content that’s already out there! I really do like the suggestion of demonstrating the Engine’s capabilities in more detail, looking into reverb / attenuation / occlusion etc. I’d like to think I’ve got a good grasp of the theory behind this too (!); there can be quite a lot of crossover between game sound design and sound design for film, but there are key differences, and I think that could be worth exploring, especially as Epic forges on with film-making.

@TerrorMedia + @ClavosTech I think the suggestion to look at sound volumes and general mixing is really good too. This is something I’d argue is more about the capture / recording stage, but there are techniques to manage recording volumes on ‘found’ recordings from sound libraries prior to importing into UE4. I can definitely cover that. I released a mad little game called ‘Beat the Meat’ (sorry) for itch.io’s Scream Jam. The character footsteps are far too loud in the mix imo… but we were using Unity (gasp; against my will) and the audio system there is apparently less manageable (according to our programmer).

On that score, FMOD and Wwise have superior firepower to UE4’s inbuilt audio capabilities. UE4 is certainly better than it was, and anyone could run a full game without audio middleware. However, middleware offers greater control over sound effects and music - it’s quite a bit easier to cue and blend contextual music, for example. Having said that… it is totally possible in UE4, and there’s no reason not to use the built-in system just because middleware exists. For more complex sound jobs, though, it probably makes more sense to use FMOD / Wwise, which are engines dedicated to audio in a way that UE4 isn’t. (It’s generally accepted that Wwise is top dog, with FMOD below and the Engine last, but this depends on personal preference and project complexity; FMOD is easier to learn, Wwise offers more control.)

“Dynamic sound” is a good phrase; dynamic, responsive, contextual - they pretty much mean the same thing: the audio changes depending on the input. Re-reading your sentence, perhaps “procedurally generated sound” is more appropriate: audio that is triggered by actions / events in game but which isn’t pre-scripted - it changes over time, or it changes every time it is triggered. Footsteps are a basic example, but gunshots, weapon impacts, music, even dialogue - everything can be procedurally generated so long as the rules governing the proc gen system are clearly defined. It leads to some pretty nice outcomes, but it can get complicated quite quickly.
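To make that concrete, here’s a minimal sketch of the footstep case in UE4 C++ (the class and variable names are illustrative, not from any real project): pick a random wave and jitter the pitch / volume on every trigger, and no two steps sound identical.

```cpp
// Sketch only: per-trigger footstep variation. FootstepSounds would be a
// UPROPERTY TArray<USoundBase*> member filled in the editor; AMyCharacter
// is a made-up example class.
#include "Kismet/GameplayStatics.h"

void AMyCharacter::PlayFootstep()
{
    if (FootstepSounds.Num() == 0)
    {
        return;
    }

    // Random wave plus slight pitch / volume jitter = cheap procedural variety.
    USoundBase* Wave   = FootstepSounds[FMath::RandRange(0, FootstepSounds.Num() - 1)];
    const float Volume = FMath::FRandRange(0.85f, 1.0f);
    const float Pitch  = FMath::FRandRange(0.95f, 1.05f);

    UGameplayStatics::PlaySoundAtLocation(this, Wave, GetActorLocation(), Volume, Pitch);
}
```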

@ClavosTech clipping can be managed through UE4’s sound mixing functionality, which is definitely something I can cover. You can categorise sounds and mix them discretely from one another, e.g. all player footsteps in one mix, all ambience in another, all NPC weapons in a third, etc. In theory that can be quite powerful, because you can then start moving mixes around depending on what’s going on, prioritising certain audio mixes over others - dampening ambient and weapon sounds during dialogue, say, or focusing player attention on a point through an audio cue. Attenuation settings (how a sound is perceived in relation to your distance from it) can be very useful here as well, and all of these things can be combined to help optimise performance.
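For the mix-shifting part, this is roughly what it looks like in UE4 C++ - a hedged sketch, assuming you’ve created a Sound Mix asset and a Sound Class in the editor (DuckAmbientMix and AmbientClass below are made-up names for illustration):

```cpp
// Hedged sketch of pushing a ducking mix at runtime. DuckAmbientMix
// (USoundMix*) and AmbientClass (USoundClass*) are editor-created assets
// referenced as members; the names are illustrative.
#include "Kismet/GameplayStatics.h"

void ADialogueManager::BeginDialogue()
{
    // Fade everything in the Ambient sound class down to 30% over half a second.
    UGameplayStatics::SetSoundMixClassOverride(
        this, DuckAmbientMix, AmbientClass,
        /*Volume=*/0.3f, /*Pitch=*/1.0f, /*FadeInTime=*/0.5f,
        /*bApplyToChildren=*/true);
    UGameplayStatics::PushSoundMixModifier(this, DuckAmbientMix);
}

void ADialogueManager::EndDialogue()
{
    // Pop the modifier and the ambient beds fade back up.
    UGameplayStatics::PopSoundMixModifier(this, DuckAmbientMix);
}
```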

Clipping can also just be an issue with the original recording, like you point out; tbh, rather than working around the issue it’s probably best to re-record, if possible. Otherwise, processing it in an audio package prior to importing is the way to go. In my audio packs (Cyberpunk City Sounds (tag below) and a Cyberpunk Weapons one coming soon, once Epic approve it) I made a point of mixing all audio assets to the same decibel limit. This means that, unmixed, everything sounds as loud as everything else - but it also means you have greater control when mixing during the design process. And it means you don’t have to boost what are typically quiet sounds, like stealth footsteps, so you avoid the static / noise that comes from too much gain and get a perceived higher quality. I believe this is really useful for a plug-n-play audio pack you can use right out of the box.
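For anyone curious about the actual arithmetic, here’s a standalone C++ sketch of peak-normalising a buffer of float samples to a common dBFS ceiling - in practice you’d do this in an audio package before import, but this is all the maths involved:

```cpp
// Standalone sketch: scale a sample buffer so its loudest point lands
// exactly on a target dBFS ceiling. Real batch work happens in an audio
// editor; this just shows the idea.
#include <algorithm>
#include <cmath>
#include <vector>

void NormalizeToPeak(std::vector<float>& Samples, float TargetPeakDb = -1.0f)
{
    float Peak = 0.0f;
    for (float S : Samples)
    {
        Peak = std::max(Peak, std::fabs(S));
    }
    if (Peak <= 0.0f)
    {
        return; // silence: nothing to scale
    }

    // Convert the dBFS ceiling to linear gain, then scale every sample.
    const float TargetLinear = std::pow(10.0f, TargetPeakDb / 20.0f);
    const float Gain = TargetLinear / Peak;
    for (float& S : Samples)
    {
        S *= Gain;
    }
}
```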

Thanks very much for your input on this - it’s given me a fair few ideas to start with and I appreciate it!

Yeah, I know exactly what you mean. There is a lot of crossover in animation and effects as well. A lot of the old-school Hollywood movie tricks are still pretty relevant these days when it comes to designing visuals in a game. Audio is no different. I actually started out with graphic design, then moved on to animation, which eventually led to game making, because every time I changed I realized I needed more if I wanted to provide the kind of experience I was after.

Anyhow, I still think we need more guides regarding audio in the engine. There is plenty that hasn’t been touched yet, and plenty that really should have been a long time ago.

What I need to know is this: I have a quest system with dialogue sounds. The NPC dialogue sound plays fine, and the Answer sound plays fine as long as there is one answer in the data table. But if there is more than one answer (say 2), when I click on the 2nd answer I always hear the sound for the first answer. The quest advances correctly for whichever text I click; it’s just that the sound played doesn’t match the text I click. I just want the correct sound wave to play for the correct answer text I click.

I tried adding the AnswerSoundIndex array, but I can’t get it to select the same sound as the text I click.

Hey @razmaz51 - I haven’t delved that deep into dialogue yet (it was something like 17th on my list of tutorials). I’ll have a look this week and see what I can come up with :slight_smile: That’s a weird situation though.

Man, I certainly would appreciate any help; I have been unable to figure out a way to make it work. Like I say, it will play the NPC dialogue correctly and also the first answer correctly. But if more than one answer is added, then I can only get it to play either the first answer sound or the second answer sound, no matter which answer I click on. I am just trying to get it to play the same sound as the answer I click on. I tried bools and all kinds of other things, but no luck. I think the “get” node coming in to break the answer struct gets the first sound by its nature. It’s possible that in some cases (like if I hear the last answer sound when I click on the first answer), it may be playing both answer sounds in order, but I only hear the last one. But I’m not sure.

Set up an enumeration for your list of answers and switch based on which one you need, like switching an animation state or anything else. It should allow you to control exactly which answer sound is played based on which you click.

Not sure exactly what you mean; you have a quick demonstration pic? Thanks for the reply.

  1. Create a Blueprint -> Enumeration in your content browser
  2. Fill the enum with your options
  3. When referencing the enum anywhere else, use the BP function “Switch on Enum”

Most anim BPs with animation states or weapon states will have an enum for you to use as reference.
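For reference, the same “switch on enum” idea expressed in UE4 C++ would look something like this - just a sketch, with EAnswerChoice and the sound variables as made-up example names:

```cpp
// Sketch only: switch-on-enum in UE4 C++. EAnswerChoice and the
// AnswerSound* members (USoundBase*) are illustrative names.
#include "Kismet/GameplayStatics.h"

UENUM(BlueprintType)
enum class EAnswerChoice : uint8
{
    Answer0,
    Answer1
};

void UDialogueWidget::PlayAnswerSound(EAnswerChoice Choice)
{
    switch (Choice)
    {
    case EAnswerChoice::Answer0:
        UGameplayStatics::PlaySound2D(this, AnswerSound0);
        break;
    case EAnswerChoice::Answer1:
        UGameplayStatics::PlaySound2D(this, AnswerSound1);
        break;
    }
}
```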

OK, so looking at the data table above, what do I put in the Enum? Will the AnswerSoundIndex variable work?

I mean, what would I put in the Enum fields, to then access in the widget BP?

I put Answer Sound Index 0 and 1 in the enum, then called that Enum with Switch on Enum, then from the Enum sent the first index to play answer sound 1 and the second index to play answer sound 2, but still, when I click on answer 2, it plays the sound from answer 1.

Have you gotten any further on this @razmaz51? I’ll be able to look at it properly on the weekend.

No, I’ve been trying. I tried the Enum suggestion mentioned on the last page of this post, but it still will not play the correct sound for the answer text that is clicked. I put “answer sound” and “answer sound 2” sound wave variables and directed one to answer sound and one to answer sound 2, so they were separate code playing on separate sound wave variables, but it still won’t do it.

It’s a little spaghettied up, but it’s like this.

Basically most of the spaghetti is the target pins going to the audio. Not even sure if the Enum thing will work, but someone suggested it so I figured I’d try it. On page 1 of the post, the last pic is the data table with answer 0 and answer 1; each has its sound field, but it always plays the first answer sound, whether I click the first or second answer text. If I connect the “element” variable to the “get” node coming in to break the answer struct, then it always plays the second answer sound, even if I click the first. (Or, like I say, with the element connected up to the “get”, it’s possible it may be playing both sounds in order and I only hear the second one.) I normally would think it shouldn’t be too hard to get the sound to play for the same answer I click. But it’s been stubborn.

Just spent the last couple of hours creating a dialogue system, and I can replicate your problem. I think we need to do another pass on the audio - I can get the first line of dialogue to show as text and play a sound, but although the second line of text shows, the audio isn’t playing.
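In code terms, I think what we’re both aiming for is something like this - one index driving both the text and the sound so they can’t drift apart. A hedged C++ sketch, not a confirmed fix; FDialogueAnswer and the member names are stand-ins for your data table struct, and in Blueprints the equivalent is making sure the clicked answer’s own index feeds the Get node rather than leaving it at the default 0:

```cpp
// Hedged sketch of the principle: the click handler receives that answer's
// own index, and the same index pulls both the text and the sound.
#include "Kismet/GameplayStatics.h"

void UDialogueWidget::OnAnswerClicked(int32 ClickedAnswerIndex)
{
    // Answers is a TArray<FDialogueAnswer> filled from the data table row.
    if (!Answers.IsValidIndex(ClickedAnswerIndex))
    {
        return;
    }

    const FDialogueAnswer& Answer = Answers[ClickedAnswerIndex];
    ShowAnswerText(Answer.Text);

    if (Answer.Sound) // USoundBase* on the struct
    {
        UGameplayStatics::PlaySound2D(this, Answer.Sound);
    }
}
```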

Gonna keep at it :slight_smile:

Going to have to come back to this. My setup is a little different to yours, but I can’t get the second sound cue to play either. I have a feeling that this is because we’re not moving through the Data Table rows properly. Although my text dialogue moves on, the Line ID keeps dropping back to 1 - it never actually gets to 2 when I interact with the NPC to get the next line of dialogue. The dialogue comes up as text, but not as audio…

I think that whatever is driving the text dialogue along isn’t interacting with the sound cue in the same way - but I can’t figure it out…
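If it helps, the shape I’m trying to get to is something like this - a C++ sketch under the assumption that each data table row (FDialogueLine here, an illustrative name) holds both the line’s text and its sound. The row index lives on the NPC so it doesn’t reset between interactions, and text and audio come from the same row:

```cpp
// Sketch of row advancement, assuming rows are named "1", "2", ... and the
// row struct carries both text and sound. CurrentLineIndex is a member on
// the NPC so it survives between interactions instead of snapping back to 1.
#include "Kismet/GameplayStatics.h"
#include "Engine/DataTable.h"

void ANPCCharacter::AdvanceDialogue()
{
    static const FString Context(TEXT("AdvanceDialogue"));
    const FName RowName = *FString::FromInt(CurrentLineIndex);
    const FDialogueLine* Line = DialogueTable->FindRow<FDialogueLine>(RowName, Context);

    if (!Line)
    {
        CurrentLineIndex = 1; // ran past the last row: reset for the next conversation
        return;
    }

    ShowDialogueText(Line->Text);
    if (Line->Sound)
    {
        UGameplayStatics::PlaySound2D(this, Line->Sound); // same row as the text
    }

    ++CurrentLineIndex; // advance only here, never back to 1 mid-conversation
}
```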

I’ll take another look tomorrow. It is pretty stubborn!

Thanks for checking into this. At least I know that it’s not just me. It is stubborn, no?