Speech Input Mappings (HoloLens)

I find the current implementation to be very limiting.

Right now you have to define the input phrase and the Action name it's associated with, and then add an Input Action event node for each individual phrase.
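For anyone thinking of this in C++ terms, the current workflow boils down to something like the following sketch (the pawn class and the "SayHello"/"SayGoodbye" action names are placeholders I made up, not real mappings):

```cpp
// MyPawn.cpp -- rough C++ equivalent of the current per-phrase setup.
#include "MyPawn.h"
#include "Components/InputComponent.h"

void AMyPawn::SetupPlayerInputComponent(UInputComponent* PlayerInputComponent)
{
    Super::SetupPlayerInputComponent(PlayerInputComponent);

    // One hard-coded binding and one handler per entry in
    // Project Settings > Engine > Input > Speech Mappings:
    PlayerInputComponent->BindAction("SayHello",   IE_Pressed, this, &AMyPawn::OnSayHello);
    PlayerInputComponent->BindAction("SayGoodbye", IE_Pressed, this, &AMyPawn::OnSayGoodbye);
    // ...and so on, one binding and one handler per phrase,
    // all fixed at build time.
}

void AMyPawn::OnSayHello()   { /* react to "hello" */ }
void AMyPawn::OnSayGoodbye() { /* react to "goodbye" */ }
```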

This doesn't allow for any dynamic setup at all: every phrase has to be baked into the application, which is not ideal when you are trying to script things externally at run-time.

Ideally there would be one speech action event that fires for every recognized phrase and returns the ActionName associated with it.
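The closest workaround I can see is to enumerate the speech mappings at startup and funnel them all into one handler, passing the ActionName along as a delegate payload. This is only a sketch from memory of the 4.2x headers: I'm assuming UInputSettings::GetSpeechMappings(), the FInputActionSpeechMapping::GetActionName() accessor, and the payload-taking BindAction overload, all of which may differ by engine version:

```cpp
// MyPawn.cpp -- approximating a single speech action event (sketch).
#include "MyPawn.h"
#include "Components/InputComponent.h"
#include "GameFramework/InputSettings.h"

// Delegate type that carries the action name as a payload.
DECLARE_DELEGATE_OneParam(FSpeechActionDelegate, FName);

void AMyPawn::SetupPlayerInputComponent(UInputComponent* PlayerInputComponent)
{
    Super::SetupPlayerInputComponent(PlayerInputComponent);

    // Bind every configured speech mapping to the same handler,
    // with its ActionName baked in as the payload argument.
    const UInputSettings* Settings = UInputSettings::GetInputSettings();
    for (const FInputActionSpeechMapping& Mapping : Settings->GetSpeechMappings())
    {
        PlayerInputComponent->BindAction<FSpeechActionDelegate>(
            Mapping.GetActionName(), IE_Pressed, this,
            &AMyPawn::OnSpeechAction, Mapping.GetActionName());
    }
}

// The single "speech action event" I'm after, approximated in C++.
void AMyPawn::OnSpeechAction(FName ActionName)
{
    UE_LOG(LogTemp, Log, TEXT("Speech action fired: %s"), *ActionName.ToString());
}
```

Even then, this only enumerates mappings that already exist in the project settings, so it removes the per-phrase boilerplate but not the baked-in phrase list.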

Here is an example from a real-world use case, and it's a bit absurd: I can't even zoom out far enough to see all the mappings.

Is there a way to set this up dynamically that I might have missed?