Hi everyone,
My team and I are working on a “Souls-like” game in Unreal Engine, and I’m at the point where I want to start guiding the programmers toward implementing the core sound system. I want the audio to work similarly to the Dark Souls games, where sound is not just atmosphere but gameplay-critical information. This means:
- 3D positional audio for combat, footsteps, and environment
- Surface-based footsteps (stone, wood, metal, water; see the footstep sketch after this list)
- Dynamic occlusion & reverb for rooms, halls, and outdoor areas
- Music state changes (exploration vs combat, boss phase transitions)
- Stealth noise system (e.g., loudness values for actions that AI can detect; see the noise-report sketch after this list)
- Dynamic mixing so combat SFX cut through ambience and music
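
For context on the surface-based footsteps, here is roughly how I picture it using the engine’s physical materials: line trace under the feet, read the surface type, pick a sound. This is only a sketch, not tested code; the component name, the 150-unit trace length, and the surface-to-sound map are my own placeholders, and the sounds and physical materials would still have to be set up in the editor.

```cpp
// FootstepComponent.h -- placeholder name, rough sketch only.
#pragma once

#include "CoreMinimal.h"
#include "Components/ActorComponent.h"
#include "PhysicalMaterials/PhysicalMaterial.h" // UPhysicalMaterial / EPhysicalSurface
#include "FootstepComponent.generated.h"

class USoundBase;

UCLASS(ClassGroup=(Audio), meta=(BlueprintSpawnableComponent))
class UFootstepComponent : public UActorComponent
{
    GENERATED_BODY()

public:
    // Surface types (set up under Project Settings > Physics) mapped to footstep sounds.
    UPROPERTY(EditAnywhere, Category="Footsteps")
    TMap<TEnumAsByte<EPhysicalSurface>, USoundBase*> FootstepSounds;

    // Intended to be called from an AnimNotify on the locomotion animations.
    UFUNCTION(BlueprintCallable, Category="Footsteps")
    void PlayFootstep();
};

// FootstepComponent.cpp
#include "FootstepComponent.h"
#include "GameFramework/Actor.h"
#include "Kismet/GameplayStatics.h"

void UFootstepComponent::PlayFootstep()
{
    AActor* Owner = GetOwner();
    if (!Owner)
    {
        return;
    }

    // Trace a short distance below the character and ask for the physical material.
    const FVector Start = Owner->GetActorLocation();
    const FVector End   = Start - FVector(0.f, 0.f, 150.f); // 150 units is a guess, tune per capsule

    FCollisionQueryParams Params;
    Params.AddIgnoredActor(Owner);
    Params.bReturnPhysicalMaterial = true;

    FHitResult Hit;
    if (!GetWorld()->LineTraceSingleByChannel(Hit, Start, End, ECC_Visibility, Params))
    {
        return; // Airborne, nothing to play.
    }

    // Fall back to the default surface if the floor has no physical material assigned.
    EPhysicalSurface Surface = SurfaceType_Default;
    if (const UPhysicalMaterial* PhysMat = Hit.PhysMaterial.Get())
    {
        Surface = PhysMat->SurfaceType;
    }

    if (USoundBase** FoundSound = FootstepSounds.Find(Surface))
    {
        UGameplayStatics::PlaySoundAtLocation(this, *FoundSound, Hit.ImpactPoint);
    }
}
```

The same trace result could also feed the stealth side, since the surface type seems like a natural place to hang per-surface loudness.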
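
For the stealth noise itself, my current thinking is to lean on the engine’s AI perception hearing sense rather than build a custom detection system, roughly like the sketch below. The helper name `ReportPlayerNoise` and the loudness constants are placeholders I made up; real values would need tuning against each enemy’s hearing range.

```cpp
// Rough sketch, assuming the AIModule dependency is added in the project's Build.cs.
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Perception/AISense_Hearing.h"

// Placeholder loudness scale per action; tune against the Hearing sense config on the AI.
namespace NoiseLoudness
{
    constexpr float Sneak  = 0.2f;
    constexpr float Walk   = 0.5f;
    constexpr float Roll   = 0.8f;
    constexpr float Attack = 1.0f;
}

// Report a gameplay noise that any UAIPerceptionComponent with a Hearing sense can pick up.
void ReportPlayerNoise(AActor* NoiseInstigator, float Loudness, float MaxRange = 0.f)
{
    if (!NoiseInstigator)
    {
        return;
    }

    UAISense_Hearing::ReportNoiseEvent(
        NoiseInstigator,                      // world context object
        NoiseInstigator->GetActorLocation(),  // where the noise happened
        Loudness,                             // scales against the listener's hearing range
        NoiseInstigator,                      // who made the noise
        MaxRange);                            // 0 = no hard cap beyond the sense's own range
}
```

The idea would be to call this from the same places that trigger the SFX, e.g. `ReportPlayerNoise(this, NoiseLoudness::Attack);` in the attack code, so the loudness the AI hears and the loudness the player hears never drift apart.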
I’m unsure where to start in Unreal to lay a good foundation for this.
Would you recommend:
- Starting with Unreal’s native Audio Mixer and Sound Cues/MetaSounds?
- Integrating middleware like “Wwise” or “FMOD” right away for scalability?
- Building a small test level for all core sound features before expanding?
I’m mainly looking for best practices for:
- Setting up the initial architecture so it’s easy to expand later (rough subsystem sketch below)
- Managing performance while keeping high-quality audio
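
To make the architecture question more concrete: what I had in mind is a thin game-facing audio layer so gameplay code never calls the backend directly, which should keep the door open for Wwise/FMOD later. Rough sketch below, assuming a GameInstance subsystem; the class name `UGameAudioSubsystem` and the state names are placeholders, and the bodies only show the native Audio Mixer route (PlaySoundAtLocation, Sound Mix push/pop).

```cpp
// GameAudioSubsystem.h -- placeholder facade, rough sketch only.
#pragma once

#include "CoreMinimal.h"
#include "Subsystems/GameInstanceSubsystem.h"
#include "Kismet/GameplayStatics.h"
#include "GameAudioSubsystem.generated.h"

class USoundBase;
class USoundMix;

UCLASS()
class UGameAudioSubsystem : public UGameInstanceSubsystem
{
    GENERATED_BODY()

public:
    // Gameplay code only talks to this layer; only this class knows whether the
    // backend is the native Audio Mixer, Wwise, or FMOD.
    UFUNCTION(BlueprintCallable, Category="Audio")
    void PlayOneShotAtLocation(USoundBase* Sound, const FVector& Location)
    {
        if (Sound)
        {
            UGameplayStatics::PlaySoundAtLocation(this, Sound, Location);
        }
    }

    // Music states are requested by name so the backend can map them to MetaSound
    // parameters, Sound Cue switches, or middleware states later on.
    UFUNCTION(BlueprintCallable, Category="Audio")
    void RequestMusicState(FName StateName)
    {
        CurrentMusicState = StateName; // TODO: drive the music system from here.
    }

    // Duck ambience/music during combat by pushing a Sound Mix asset (native route;
    // middleware would swap this for a state or bus snapshot).
    UFUNCTION(BlueprintCallable, Category="Audio")
    void SetCombatMixActive(USoundMix* CombatMix, bool bActive)
    {
        if (!CombatMix)
        {
            return;
        }
        if (bActive)
        {
            UGameplayStatics::PushSoundMixModifier(this, CombatMix);
        }
        else
        {
            UGameplayStatics::PopSoundMixModifier(this, CombatMix);
        }
    }

private:
    FName CurrentMusicState;
};
```

Call sites would then look like `GetGameInstance()->GetSubsystem<UGameAudioSubsystem>()->RequestMusicState(TEXT("BossPhase2"));`, which is the part I’d like to keep stable even if the backend changes. Is this kind of wrapper what people actually do, or is it over-engineering?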
If anyone has experience building sound systems like this, please give a shout. Thanks a lot!