[FREE] AzSpeech plugin: Text-to-Speech, Speech-to-Text and more with Microsoft Azure Cognitive Services

I’m new to using Azure services in my project, and somehow my speech recognition node is able to start the recognition but doesn’t do much else. It also crashes the editor whenever I supply the audio input ID through a variable.

How can I get a completed or updating string value from the speech-to-text function? Am I missing some prior setup? I’m currently using the marketplace plugin for UE 5.3.2.

Try unchecking the “Use Private Endpoint” option.

Thanks, I unchecked it, but I’m still not getting further than Recognition Started. I speak and then wait for more than 15 seconds, but it never reaches Recognition Completed, and I don’t get the print from Recognition Updated either.

‘en’ might not be a valid locale string; try ‘en-US’ or another legal value. You can find the locale list on the Azure website.

Hello, I need some help. I have been trying to use the “Text to Speech with Custom Options” node to send viseme data to a MetaHuman animation blueprint in order to manipulate the curves, but I can’t get it to work, and the animation node is always blank. Does someone know how to do this, or is there a tutorial I can follow? I have also tried the “SSML to Speech with Custom Options” node, but it always fails. I tried connecting the spoken words string to “Synthesis SSML”, but that fails as well. If someone knows how to send viseme data to MetaHumans, I would love to know.
Thank you for reading, and thank you in advance.
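One likely cause of the SSML node failing: the “Synthesis SSML” input needs a complete SSML document, so wiring a plain spoken-words string into it will be rejected by the Azure service. A minimal well-formed Azure SSML payload looks like the sketch below (the voice name is just an example; pick any voice your Azure region supports):

```xml
<speak version="1.0"
       xmlns="http://www.w3.org/2001/10/synthesis"
       xml:lang="en-US">
  <!-- Example voice; replace with a voice available in your region -->
  <voice name="en-US-JennyNeural">
    Hello, this is a viseme test.
  </voice>
</speak>
```

If you build this string in Blueprint (e.g. with Format Text) and pass it to the SSML node, the synthesis itself should succeed; whether viseme events then reach your animation blueprint depends on how the plugin version you have exposes them.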

Do you need an internet connection for this?

Do we need to pay Microsoft every time we use the voice service?

Hi,

I noticed that since migrating from 5.3 to 5.4, the “Convert .wav files to Soundwave” function is no longer working correctly.
If you specify the OutputModule as “Game” and add an AssetName, it works fine but clutters up the content browser.
If you don’t specify an OutputModule and name to keep it transient, the audio cuts off after half a second. Is there a way to fix or work around this without keeping all the USoundWaves as assets?

Thanks in advance

Hello, I’m having a really hard time packaging for Shipping mode and distribution. Development mode works fine, but Shipping refuses to build:
Packaging (Android (ASTC)): LogAzSpeech_Internal: Warning: LogLastError: Failed to load runtime library “C:/Users/Rene/Desktop/AzureTest/Plugins/AzSpeech/Source/ThirdParty/AzureWrapper/libs/Win/Runtime/Microsoft.CognitiveServices.Speech.extension.codec.dll”: 2 (The system cannot find the file specified.)

What did you do to manage to package this for Android in Shipping mode? For me the app always crashes.