I’m new to using Azure services in my project, and somehow my speech recognition node starts the recognition but doesn’t do much else. It also crashes the editor whenever I supply the audio input ID via a variable.
How can I get a completed or updating string value from the speech-to-text function? Am I missing some prior setup? I’m currently using the marketplace plugin for UE 5.3.2.
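For context, my understanding is that the plugin wraps the Azure Speech SDK, where partial and final text come from two separate events; in raw SDK terms it looks roughly like the sketch below (the key and region are placeholders, and the mapping to the plugin’s delegate names is just my assumption):

```cpp
// Minimal sketch of the underlying Azure Speech SDK events (not the plugin's own API).
// "YourSubscriptionKey" / "YourRegion" are placeholders.
#include <speechapi_cxx.h>
#include <iostream>

using namespace Microsoft::CognitiveServices::Speech;

int main()
{
    auto config = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourRegion");
    auto recognizer = SpeechRecognizer::FromConfig(config); // default microphone input

    // Fires repeatedly with partial hypotheses while you speak
    // (presumably what the plugin surfaces as "Recognition Updated").
    recognizer->Recognizing.Connect([](const SpeechRecognitionEventArgs& e)
    {
        std::cout << "Partial: " << e.Result->Text << std::endl;
    });

    // Fires once per utterance with the final text
    // (presumably "Recognition Completed").
    recognizer->Recognized.Connect([](const SpeechRecognitionEventArgs& e)
    {
        if (e.Result->Reason == ResultReason::RecognizedSpeech)
        {
            std::cout << "Final: " << e.Result->Text << std::endl;
        }
    });

    recognizer->RecognizeOnceAsync().get();
    return 0;
}
```

So I’d expect the Blueprint delegates to fire the same way once those events actually arrive, which is why I’m wondering what setup I’m missing.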
Thanks, I unchecked it, but I’m still not getting further than Recognition Started. I speak and then wait for more than 15 seconds, but it never reaches Recognition Completed, and I never get the print from Recognition Updated.
Hello, I need some help. I have been trying to use the “text to speech with custom options” node to send viseme data to a MetaHuman animation blueprint in order to manipulate the curves, but I can’t get it to work. The animation node is also always blank. Does someone know how to do this, or is there a tutorial I can follow? Furthermore, I have also tried to use the “SSML to Speech with custom options” node, but it always fails. I tried to connect the spoken words string to the “Synthesis SSML” input, but that also fails. If someone knows how to send viseme data to MetaHumans, I would love to know.
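For reference, my understanding from the Azure docs is that the Synthesis SSML input has to be a complete <speak> document with the mstts namespace declared, that plain viseme IDs arrive even without any extra element, and that the mstts:viseme element is what requests the blend-shape stream. Something roughly like this (the voice name here is just an example):

```xml
<speak version="1.0"
       xmlns="http://www.w3.org/2001/10/synthesis"
       xmlns:mstts="https://www.w3.org/2001/mstts"
       xml:lang="en-US">
  <voice name="en-US-JennyNeural">
    <mstts:viseme type="FacialExpression"/>
    Hello, this is a viseme test.
  </voice>
</speak>
```

If the node expects something different from this, that might explain why it keeps failing for me.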
Thank you for reading and thank you in advance.
I noticed that since the migration from 5.3 to 5.4, the Convert .wav files to SoundWave function is not working correctly anymore.
If you specify the OutputModule as “Game” and add an AssetName, it works fine, but it clutters up the Content Browser.
If you don’t specify an OutputModule and name to keep it transient, the audio cuts off after half a second. Is there a way to fix or work around this without keeping all the USoundWaves as assets?
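In case it helps frame what I’m after: a possible workaround might be to build the transient sound wave manually from the raw bytes instead of going through the conversion node. This is only an untested sketch and assumes 16-bit PCM input and that FWaveModInfo is still available from Audio.h in 5.4:

```cpp
// Untested sketch: build a transient USoundWaveProcedural from a .wav file,
// so nothing has to be saved as an asset in the Content Browser.
#include "Audio.h"                       // FWaveModInfo (header location may vary by engine version)
#include "Sound/SoundWaveProcedural.h"
#include "Misc/FileHelper.h"

USoundWaveProcedural* CreateTransientWaveFromFile(const FString& WavPath)
{
    TArray<uint8> RawBytes;
    if (!FFileHelper::LoadFileToArray(RawBytes, *WavPath))
    {
        return nullptr;
    }

    FWaveModInfo WaveInfo;
    if (!WaveInfo.ReadWaveInfo(RawBytes.GetData(), RawBytes.Num()))
    {
        return nullptr; // not a valid PCM .wav
    }

    USoundWaveProcedural* SoundWave = NewObject<USoundWaveProcedural>(); // transient outer
    SoundWave->SetSampleRate(*WaveInfo.pSamplesPerSec);
    SoundWave->NumChannels = *WaveInfo.pChannels;
    SoundWave->Duration = static_cast<float>(WaveInfo.SampleDataSize) /
        (*WaveInfo.pSamplesPerSec * *WaveInfo.pChannels * sizeof(int16)); // assumes 16-bit PCM
    SoundWave->SoundGroup = SOUNDGROUP_Voice;
    SoundWave->bLooping = false;

    // Queue the whole PCM payload up front; the procedural wave keeps it in memory.
    SoundWave->QueueAudio(WaveInfo.SampleDataStart, WaveInfo.SampleDataSize);
    return SoundWave;
}
```

That would keep everything in memory and avoid the asset clutter, but I’d obviously prefer the node to just behave transiently again like it did in 5.3.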
Hello, I’m having a really hard time packaging it for Shipping mode and distribution. Development mode works fine, but Shipping mode refuses to build.
Packaging (Android (ASTC)): LogAzSpeech_Internal: Warning: LogLastError: Failed to load runtime library "C:/Users/Rene/Desktop/AzureTest/Plugins/AzSpeech/Source/ThirdParty/AzureWrapper/libs/Win/Runtime/Microsoft.CognitiveServices.Speech.extension.codec.dll": 2 (The system cannot find the file specified.)
Same problem here. Tried on 5.3 and 5.4, but it always crashes. A Development package still works for now, but we hope the developer will fix this soon (although we’re not sure the developer is still actively maintaining the plugin, since this problem has been around for a long time already). We may need to switch to a different solution if the plugin is abandoned.
Same here, builds on some PCs are not working. I have some errors related to Microsoft.CognitiveServices.Speech.extension.codec.dll during the build that I can’t find a solution for.