Where is the text found when using Voice SDK dictation on Meta Quest?

Hi. I have a VR app built for Quest 2.
I have the Voice SDK set up and have set up dictation.
The Meta Quest/Wit.ai docs are pretty limited.
I'm pretty sure everything is set up correctly, e.g. the plugin, Blueprints for dictation, Wit.ai client/server keys, etc.
In the Blueprint for dictation there is an “ActivateDictation” function. I hooked this up to a print function, and dictation does seem to start because it prints “start” to the screen.

But once dictation is activated, I can't work out which functions to use or where the dictated text ends up. There are a couple of events, like the full transcript event and the partial transcript event, but printing directly from these doesn't seem to work.

Any advice would be great.

Thx

Apologies if you have already found a solution, but I'll post one here in case you or someone else needs help. By the way, is this post on Stack Overflow also yours?

From the sounds of it, you just need to create another Blueprint that inherits from either WitVoiceExperience or AppVoiceExperience; it seems the DictationExperience won't function without one of them, as shown in the image below.

From some quick tests, it seems best to go with AppVoiceExperience, as it will return Intents if you plan to use them; if you just need transcription, either is fine. From there, assign the Data Asset Configuration in the newly created VoiceExperience. I have a screenshot below of where to add it.
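If it helps to see the same idea in C++ rather than Blueprints, here's a minimal sketch of what I mean. The class and delegate names (AAppVoiceExperience, VoiceEvents, OnFullTranscription, OnPartialTranscription) are taken from the Blueprint nodes we've been discussing; I haven't verified the exact C++ types, header paths, or delegate signatures in the plugin, so treat this as an illustration rather than copy-paste code. The Data Asset Configuration is assigned on the actor in the editor's Details panel rather than in code.

```cpp
// MyVoiceExperience.h -- illustration only; the plugin's actual class and
// delegate names/signatures may differ from what is assumed here.
#pragma once

#include "CoreMinimal.h"
#include "Voice/Experience/AppVoiceExperience.h" // assumed plugin header path
#include "MyVoiceExperience.generated.h"

UCLASS()
class AMyVoiceExperience : public AAppVoiceExperience // assumed plugin class
{
    GENERATED_BODY()

protected:
    virtual void BeginPlay() override
    {
        Super::BeginPlay();

        // Bind to the transcription events -- this is where the dictated text
        // actually arrives. "VoiceEvents", "OnFullTranscription" and
        // "OnPartialTranscription" mirror the Blueprint event names.
        if (VoiceEvents != nullptr)
        {
            VoiceEvents->OnFullTranscription.AddDynamic(this, &AMyVoiceExperience::HandleFullTranscription);
            VoiceEvents->OnPartialTranscription.AddDynamic(this, &AMyVoiceExperience::HandlePartialTranscription);
        }
    }

    UFUNCTION()
    void HandleFullTranscription(const FString& Transcription)
    {
        // Final transcript for the utterance.
        if (GEngine)
        {
            GEngine->AddOnScreenDebugMessage(-1, 5.f, FColor::Green, Transcription);
        }
    }

    UFUNCTION()
    void HandlePartialTranscription(const FString& Transcription)
    {
        // Streaming partial results while you are still speaking.
        if (GEngine)
        {
            GEngine->AddOnScreenDebugMessage(-1, 1.f, FColor::Yellow, Transcription);
        }
    }
};
```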

I’ve uploaded a project (uses Unreal 5.2) on Google Drive that has the Blueprints set up for you in case you still have trouble. You just need to paste in your Client Access and Server Access tokens. You can play in Selected Viewport or VR Preview, then hold down the Spacebar to get both the Dictation and Voice Experiences to listen to the mic.
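For reference, the Spacebar binding in that project is just ordinary Unreal input handling. If you'd rather do it from a Pawn in C++, something like the sketch below is all it takes. ActivateDictation is the Blueprint function you already found; the matching deactivate call and the DictationExperience reference are my assumptions and may be named differently in the plugin.

```cpp
// Inside a Pawn/Character that holds a reference to the dictation actor.
// "DictationExperience" is whatever reference you keep to the dictation
// Blueprint/actor; DeactivateDictation is an assumed counterpart to
// ActivateDictation and may not match the plugin's real name.
void AMyPawn::SetupPlayerInputComponent(UInputComponent* PlayerInputComponent)
{
    Super::SetupPlayerInputComponent(PlayerInputComponent);

    // Hold Spacebar to listen, release to stop.
    PlayerInputComponent->BindKey(EKeys::SpaceBar, IE_Pressed, this, &AMyPawn::StartListening);
    PlayerInputComponent->BindKey(EKeys::SpaceBar, IE_Released, this, &AMyPawn::StopListening);
}

void AMyPawn::StartListening()
{
    if (DictationExperience) { DictationExperience->ActivateDictation(); }
}

void AMyPawn::StopListening()
{
    if (DictationExperience) { DictationExperience->DeactivateDictation(); } // assumed name
}
```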

I haven't looked into whether there is any significant difference between the Dictation and Voice Experiences.


Some troubleshooting tips, if you need them.

  1. In the DefaultEngine.ini, make sure you have the following setting somewhere; this allows the Voice SDK to listen to your mic.

     [Voice]
     bEnabled=True

  2. If it’s not picking up your voice, you might need to open the Windows settings > System > Sound > App volume and device preferences (at the bottom), then for the project, explicitly set which microphone is used as the input.

  3. Test your microphone in apps like Audacity; this should help isolate the problem.

  4. If you’re testing the project through Oculus Link and you’re using the headset’s microphone as the input, make sure the microphone volume is not muted in the Link Dashboard settings.

Hope this helps. Let me know if you have any issues or questions.

Kind regards,

Kent