Very nice! Will be standing by for the UE plugin update (still new to UE, so waiting for the plugin makes more sense for me right now).
The HttpGPT plugin that connects the engine to ChatGPT is already updated and available. But because it's something new, I haven't created documentation yet.
This sample project is a chat bot using both AzSpeech and HttpGPT: SpeechGPT
It's already using the new Chat API and allows performing both Speech to Text and Text to Speech.
However, you can already use and explore it. : )
Ahh. Thanks for this clarification. I've been using a different API connector plugin.
However:
- I can't find HttpGPT in the marketplace.
- When I try to install it manually, I get the following errors: "The following modules are missing or built with a different engine: SpeechGPT", and when I click Rebuild, I get "Could not be compiled. Try rebuilding from source manually."
I'm pretty new to all this, so forgive any ignorance on my part.
PS: This happens when I try the sample project as well as on my own projects. The HttpGPT plugin gives me the same error.
HttpGPT isn't in the marketplace yet; it can only be found on GitHub.
You're downloading from the releases page?
The button below doesn't work well with Unreal projects.
You can download the SpeechGPT via releases page:
Link: Releases (github.com)
Same with the plugins. : )
Edit 1:
Noticed that the .zip wasn't uploaded lol
Fixed:
I downloaded from the releases page and am still getting the same error (even after redownloading the plugins). Is there something I'm missing?
Using the marketplace plugin for my PCVR project, speech to text keeps failing randomly, then starts working again, with no error whatsoever during runtime or packaging. It also fails in Quest 2 projects packaged a long time ago, and gets fixed automatically later. There are no issues in my Azure Cognitive Services deployment, as per the troubleshooter.
This warning appears because the project needs to be compiled using Visual Studio 2019 or 2022 with the Game Development with C++ workload.
The .zip file doesn't contain the binaries because of the size they would add to the package, plus the different platform SDKs involved.
I uploaded a new .zip containing my already compiled project, can you try using it? : )
Link: SpeechGPT v1.1.0 (github.com)
File: SpeechGPT_Compiled_Win64.zip
Hello @alapacharya! : )
What kind of failure is occurring? Is it recognizing phrases incorrectly (returning completely different phrases), failing to recognize any speech at all, or are errors occurring and the task being canceled?
And could you enable Azure SDK logs, internal logs, and debugging logs, and send me the project and SDK logs from the moments when this problem occurs so that I can evaluate what might be happening?
You can send them to my email or privately if you prefer not to share here: contatolukevboas@gmail.com
Log location: PROJECT_DIR/Saved/Logs and PROJECT_DIR/Saved/Logs/AzSpeech
I'll be glad to look for a solution to this problem as soon as I can! : )
UPDATE: Ignore this. I had run out of trial credits, so I had to create a payment plan on OpenAI.
Thanks for this. I decided to install Visual Studio and compile the plugins, which worked well (I hope). I also downloaded your compiled project, which started fine, except I kept getting the error "Request not sent." I changed all the relevant API keys.
Got it, will share the relevant logs to you via mail when I encounter this bug again.
Hi, I'm trying to make an Android package, but it failed. These are my settings:
The error:
And the whole log:
UBT-IdolVerse-Android-Shipping.txt (50.2 KB)
Could you please help me?
What an amazing plugin! Thanks for sharing.
Exploring its features, I see that the async node "Text To Sound Wave" has "Viseme Received" as a callback; however, I don't see any way to get the data from the callback. Is this feature not available? Would LOVE to see this implemented if I'm not just blind, as we would be able to control morph targets very easily!
Legend, mate. Thanks for this.
Hi! I'll check this as soon as possible!
Hi! : )
You can enable Viseme in Project Settings → Plugins → AzSpeech → Enable Viseme:
And you'll be able to get the values using these getters:
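If you end up driving morph targets from the "Viseme Received" callback, one approach is a simple lookup from viseme IDs to blend-shape names. This is only a sketch, not part of the plugin: Azure's en-US neural voices report viseme IDs from 0 to 21, and the morph-target names below are placeholders that would have to match the blend shapes on your own character mesh.

```cpp
#include <map>
#include <string>

// Placeholder mapping from Azure viseme IDs to morph-target names.
// Azure's en-US neural voices report IDs from 0 to 21; only a few are
// mapped here as an example. The names must match your own mesh.
static const std::map<int, std::string> GVisemeToMorphTarget = {
    {0, "Viseme_Sil"},  // silence
    {1, "Viseme_AE"},   // ae, ax, ah
    {3, "Viseme_AO"},   // ao
    {21, "Viseme_PP"},  // p, b, m
};

// Returns the morph target for a viseme ID, falling back to the
// silence shape for any ID that is not mapped.
std::string MorphTargetForViseme(const int VisemeID)
{
    const auto It = GVisemeToMorphTarget.find(VisemeID);
    return It != GVisemeToMorphTarget.end() ? It->second : "Viseme_Sil";
}
```

From the callback you could then look up the shape name and set its curve value on the skeletal mesh component each time a viseme event arrives.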
I was unable to reproduce the error here. I tested both the GitHub and Marketplace versions, on Engine 5.0 and 5.1, and got BUILD SUCCESSFUL in all attempts.
Could you provide me with the complete project log and some additional information such as the plugin version, engine version, system, SDK versions, Java and NDK versions, and whether you are using the marketplace or GitHub version?
One thing I noticed in the packaging log you sent me is that UBT could not find the plugin's third-party libraries. Could you reinstall the plugin and test again?
AzSpeech v1.4.6
- Release: AzSpeech v1.4.6 (github.com)
- Marketplace: Waiting for approval
- Pull Request: v1.4.6 by lucoiso · Pull Request #147 · lucoiso/UEAzSpeech · GitHub
Changes
- Add a new function to get the available content modules
- Add a check to avoid crashes if the user passes an invalid module in sound wave generators
- Improve the comments in sound wave generator functions
- Add new delegates to synthesis & recognition tasks: "… Failed"
- Add a new setting to filter SSML Viseme Data if the type is set to FacialExpression
- Rename log file prefix from UEAzSpeech to AzSpeech
Gigachad. Thanks for doing god's work.
Hello, it's me again. Is this callback not supposed to be returning animation data in the form of a string that can be translated into morph target shapes? The documentation mentions that the animation var returns an XML or JSON string for blendshapes, but I'm not getting anything like that back in the log; I only get the viseme ID and the offset. The docs also don't mention that this needs to be explicitly turned on, although I believe that's what your "Enable Viseme" flag does for us anyhow…
Here's the log I'm getting back. Surely we aren't expected to map the viseme IDs to our own morph target ranges; I assume the JSON contains the necessary data to draw the correct shapes on my end. Sorry if me stupid.
Of course, my Android SDK setup is like this:
I work on Windows 10 (x64), using UE 5.0.3 with AzSpeech version
I checked the third-party libraries and found them in the right place. Maybe I should try updating AzSpeech to 1.4.6.
You need to use SSML To Sound Wave to get the animation (blendshape) data. And make sure you are feeding SSML-formatted text to the SSML To Sound Wave node.
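For reference, a minimal sketch of what that SSML input could look like, built as a string in C++. This follows Azure's documented pattern of requesting blend-shape data via the mstts:viseme element with type="FacialExpression"; the voice name is only an example and may not match what is available in your region or project.

```cpp
#include <string>

// Builds a minimal Azure SSML document that requests blend-shape
// ("FacialExpression") viseme events for the given text. The voice
// name below is an example; substitute one enabled for your region.
std::string BuildVisemeSSML(const std::string& Text)
{
    return
        "<speak version=\"1.0\""
        " xmlns=\"http://www.w3.org/2001/10/synthesis\""
        " xmlns:mstts=\"https://www.w3.org/2001/mstts\""
        " xml:lang=\"en-US\">"
        "<voice name=\"en-US-JennyNeural\">"
        "<mstts:viseme type=\"FacialExpression\"/>"
        + Text +
        "</voice></speak>";
}
```

With this, the viseme events should carry the animation (blendshape) string instead of only the viseme ID and offset.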