I know there is a solution using audio buses and their respective Read/Write nodes in MetaSounds. However, in my specific use case I dynamically create attenuated MetaSounds where MS B acts as a DSP effect for MS A, similar to a modular synth. Both sources can be audible at the same time on a level. Buses won't work here because they require the bus assets to be authored in the project ahead of time.
I've been thinking of directly outputting audio as a data type from one MetaSound instance, passing it through Blueprints, and then feeding it into another MetaSound instance. Both MS instances are localized in 3D space, so afaik MetaSound Builder won't work, since it constructs a single MetaSound instance from multiple MS patches. I've seen it used by folks from Harmonix, but their use case is different because each module isn't spatially audible.
What would be the correct way of doing something like this?