OK, it got a bit long, so here's a TL;DR up front: why would someone choose audio middleware when everything could be done in BP (or can it)?
Hello,
I am starting a new project with lots of nonstandard interactive audio tricks and want to weigh the pros and cons of using audio middleware like FMOD or Wwise (and the like) against building everything in UE4. I have no prior experience with either, though; so far it's just reading the manuals, testing the free versions, and watching some tutorials.
As far as I can see, middleware offers:
- Very efficient workflows and even ready-made instruments for standard game-audio use cases (footsteps, wind, ballistics, event-driven music).
- Lots of abstractions for layering and sequencing audio interactively (Wwise containers, FMOD instruments); see the sketch after this list.
- A better sound-design workflow (the UI is built around audio metaphors).
UE4's built-in audio, on the other hand:
- Is tightly integrated with the BP system, so gameplay talks to audio directly rather than through a separate middleware API layer.
- Can, as far as audio playback is concerned, model everything middleware does, at some programming cost (see the sketch after this list).
- Offers custom instruments (analog and granular synths) and a strong user base/Marketplace ecosystem for developing more of them.
What worries me a bit about UE4 is forward compatibility: the engine is evolving very quickly, and future changes might break my project (which may need to keep building for 10+ years). The middleware providers are evolving too, but their development is decoupled from the engine and seems slower-paced.
Does this make sense to you? Am I missing something?
Greetings,
Christoph