You can do what you want. The intention isn’t to replace the traditional sound design workflow of gathering source material, importing it, and playing back samples. Obviously a hardware synth that costs hundreds or thousands of dollars is going to sound better than any software synth. And obviously a software synth used in a DAW, with very different CPU and memory constraints, is going to have an advantage over our software synth. The whole point of MetaSounds is its context: interactive and procedural experiences.
The analogy shouldn’t surprise you: non-real-time graphics rendering (e.g. Pixar) can generate better visual experiences than anything done in real time. The point of graphical shaders is that they are procedural, interactive, and dynamic. You can always just import a canned model, texture, etc., but at that point it’s just a movie, right?
This is one reason we make a deliberate comparison of MetaSounds to shaders: MetaSounds are the audio analog of shaders. There are obviously lots and lots of differences, but the fundamental idea is a programmable audio pipeline for custom DSP processing, which lets game audio be fundamentally procedural and interactive with respect to the “game”. I put game in quotes because this technology is more powerful than a game: it’s for interactive media experiences in general.
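To make the “programmable audio pipeline” idea concrete, here is a minimal sketch of the kind of thing such a pipeline runs: DSP code that synthesizes each audio block from parameters the game can change at any time, instead of playing back a canned file. This is plain illustrative C++, not the actual MetaSounds node API; the class and method names are hypothetical.

```cpp
// Hypothetical sketch of a procedural audio block: a sine oscillator whose
// frequency can be driven by gameplay state every render block. NOT the
// MetaSounds API; it just illustrates procedural, parameter-driven DSP.
#include <cmath>
#include <vector>

class SineVoice
{
public:
    explicit SineVoice(float InSampleRate) : SampleRate(InSampleRate) {}

    // Game code calls this whenever gameplay state changes (e.g. engine RPM).
    void SetFrequency(float InFrequencyHz) { FrequencyHz = InFrequencyHz; }

    // The audio render thread calls this once per block.
    void Render(std::vector<float>& OutBuffer)
    {
        const float TwoPi = 6.28318530718f;
        const float PhaseDelta = TwoPi * FrequencyHz / SampleRate;
        for (float& Sample : OutBuffer)
        {
            Sample = std::sin(Phase);
            Phase += PhaseDelta;
            if (Phase >= TwoPi) { Phase -= TwoPi; } // keep phase bounded
        }
    }

private:
    float SampleRate;
    float FrequencyHz = 440.0f;
    float Phase = 0.0f;
};
```

Because the signal is computed rather than streamed from disk, every parameter stays live for the duration of the sound, which is what makes the audio interactive rather than canned.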
One additional point that some are failing to understand (and we did point this out in the stream with Rob’s presentation) is that what is in Early Access is definitely still Early Access. We hadn’t yet implemented composition (i.e. MetaSounds inside MetaSounds) and we hadn’t implemented Presets (reusable topology/graphs). The idea is that, at some point, non-technical sound designers can simply re-use already-created presets. There will be libraries of ready-made graphs and accompanying presets, and people will be able to preset-surf to make their own sounds with existing MetaSounds.
Game Audio is, and should be, much more than importing a .wav file and playing it back. That’s an outdated mindset, analogous to the old days in graphics when it was just textures and polys.
EDIT: Apparently I can’t reply to more than 3 people at once.
I will reply to the question higher up about mic input:
You can do that with the old sound system. We have a mic component that lets you process audio in real time, and we can also record audio from Submixes to disk as .wav files.
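For reference, here is a rough sketch of what that looks like in engine code, assuming the AudioCapture plugin is enabled and the audio mixer is in use. These are real UE classes, but verify the exact signatures against your engine version:

```cpp
// Rough sketch: mic input via UAudioCaptureComponent plus submix recording
// via UAudioMixerBlueprintLibrary. Verify signatures for your engine version.
#include "AudioCaptureComponent.h"       // AudioCapture plugin
#include "AudioMixerBlueprintLibrary.h"
#include "Sound/SoundSubmix.h"
#include "GameFramework/Actor.h"
#include "Misc/Paths.h"

void SetupMicAndRecording(AActor* Owner, USoundSubmix* SubmixToRecord)
{
    // Mic input: pipes the default capture device into the audio engine,
    // where it can be routed and processed like any other sound source.
    UAudioCaptureComponent* MicCapture =
        NewObject<UAudioCaptureComponent>(Owner, TEXT("MicCapture"));
    MicCapture->RegisterComponent();
    MicCapture->Start();

    // Begin capturing everything routed to the given submix.
    UAudioMixerBlueprintLibrary::StartRecordingOutput(
        Owner, /*ExpectedDuration=*/0.0f, SubmixToRecord);
}

void FinishRecording(AActor* Owner, USoundSubmix* SubmixToRecord)
{
    // Stop recording and export the capture as a .wav file on disk.
    UAudioMixerBlueprintLibrary::StopRecordingOutput(
        Owner,
        EAudioRecordingExportType::WavFile,
        TEXT("MicSession"),            // file name (illustrative)
        FPaths::ProjectSavedDir(),     // output directory (illustrative)
        SubmixToRecord);
}
```

The same submix-recording calls work on any submix, not just one carrying mic input, which is how real-time game output can be captured to disk.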
And with EOS, we have our own VOIP system that, with a bit of elbow grease, allows you to do DSP processing on VOIP signals. We have done this in Fortnite.