[FREE] HttpGPT: GPT Integration (ChatGPT and DALL-E)

Hello @lucoiso,

Love the plugin, very easy to use. :slight_smile:

I just got a crash. Any idea what could be causing it?

Hello, I’m trying to connect this plugin to Azure OpenAI Service, but it throws an error: "statusCode": 401, "message": "Unauthorized. Access token is missing, invalid, audience is incorrect (https://cognitiveservices.azure.com), or have expired."

I know the API key is not wrong, so maybe there is a problem with how the endpoint is built from all the parameters inside the Blueprint. I’m a Unity user and I don’t understand much about this kind of programming logic. If anyone could check whether the endpoint is built correctly inside the plugin, I would be really grateful:

**THE GENERAL ENDPOINT TO ACCESS AZURE OPENAI IS:**

POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/completions?api-version={api-version}

THE PARAMETERS THE PLUGIN ASKS FOR, AND THE VALUES I HAVE ENTERED:

ENDPOINT: https://xxxx.openai.azure.com/ (I can’t share the resource name)
MODEL (previously deployed in azure): gpt-4-32k
MODEL API VERSION: 2023-07-01-preview
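In case it helps anyone else hitting this 401: a minimal sketch (Python, placeholder values) of how the Azure OpenAI endpoint is documented to be assembled. Two things worth checking: the request body in the log below uses `messages`, so the route for a chat model like gpt-4-32k is `/chat/completions`, not the plain `/completions` route quoted above; and Azure authenticates with an `api-key` header, while the public OpenAI API uses `Authorization: Bearer <key>`. A client that only ever sends the Bearer header will get exactly this "Access token is missing" 401 from Azure. The resource and deployment names here are hypothetical:

```python
def build_azure_chat_url(resource: str, deployment: str, api_version: str) -> str:
    """Assemble the endpoint in the shape the Azure OpenAI docs describe."""
    return (
        f"https://{resource}.openai.azure.com"
        f"/openai/deployments/{deployment}"
        f"/chat/completions?api-version={api_version}"
    )

url = build_azure_chat_url("my-resource", "gpt-4-32k", "2023-07-01-preview")
headers = {
    "Content-Type": "application/json",
    # Azure style -- the public OpenAI API would instead use
    # "Authorization": "Bearer <key>".
    "api-key": "<YOUR-AZURE-KEY>",
}
print(url)
# https://my-resource.openai.azure.com/openai/deployments/gpt-4-32k/chat/completions?api-version=2023-07-01-preview
```

If the plugin only lets you configure a base endpoint plus a model name, it may be appending the OpenAI-style path and auth header, which would explain the error even with a valid key.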

And this is the log output after sending a user message and receiving an empty response.

LogHttpGPT_Internal: Display: SendRequest (): Request content body:
{
    "model": "gpt-4-32k",
    "max_tokens": 512
    "temperature": 1,
    "top_p": 1,
    "n": 1,
    "presence_penalty": 0,
    "frequency_penalty": 0,
    "stream": true,
    "user": "user",
    "messages": [
        {
            "role": "user",
            "content": "hola"
        }
    ]
}
LogHttpGPT_Internal: Display: OnProgressCompleted (): Process Completed
LogHttpGPT_Internal: Display: OnProgressCompleted (): Content: { "statusCode": 401, "message": "Unauthorized. Access token is missing, invalid, audience is incorrect (https://cognitiveservices.azure.com), or have expired." }
LogJson: Warning: Field id was not found.
LogJson: Error: Json Value of type 'Null' used as a 'String'.
LogJson: Warning: Field object was not found.
LogJson: Error: Json Value of type 'Null' used as a 'String'.
LogJson: Warning: Field created was not found.
LogJson: Error: Json Value of type 'Null' used as a 'Number'.
LogJson: Warning: Field choices was not found.
LogJson: Error: Json Value of type 'Null' used as a 'Array'.
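Those LogJson warnings are a secondary symptom: the 401 error body contains only `statusCode` and `message`, so none of the fields a successful completion would have (`id`, `object`, `created`, `choices`) exist, and the parser complains about each one. A hypothetical sketch of defensive parsing that checks for an error body before reading `choices` (field names taken from the log above):

```python
import json

def parse_completion(body: str) -> list[str]:
    """Return the choice contents, or raise if the body is an API error."""
    data = json.loads(body)
    # Azure-style error bodies carry statusCode/message instead of choices;
    # the public OpenAI API nests the message under an "error" object.
    if "statusCode" in data or "error" in data:
        msg = data.get("message") or data.get("error", {}).get("message", "unknown")
        raise RuntimeError(f"API error: {msg}")
    return [choice["message"]["content"] for choice in data.get("choices", [])]

error_body = '{ "statusCode": 401, "message": "Unauthorized." }'
try:
    parse_completion(error_body)
except RuntimeError as exc:
    print(exc)  # API error: Unauthorized.
```

So the root cause to fix is the 401 itself; once the request is authorized, the `choices` array should be present and the warnings should disappear.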

Another problem I have is that the Speech to Text function doesn’t work. When I press the mic button and start talking, nothing happens. I added some breakpoints and realized that the asynchronous task never finishes: Progress Started fires fine, but it never reaches Progress Completed. Below I have pasted the logs; everything seems to run, but when I debug it doesn’t behave correctly and I can’t see any output…

LogAzSpeech: Display: Task: SpeechToText (); Function: Activate; Message: Activating task
LogAzSpeech_Internal: Display: Task: SpeechToText (); Function: StartAzureTaskWork; Message: Starting Azure SDK task
LogAzSpeech_Internal: Display: Task: SpeechToText (); Function: StartAzureTaskWork; Message: Using audio input device: Default
LogAzSpeech_Internal: Display: Thread: AzSpeech_SpeechToText_; Function: Init; Message: Initializing runnable thread
LogAzSpeech_Internal: Display: Thread: AzSpeech_SpeechToText_; Function: CanInitializeTask; Message: Checking if can initialize task in current context
LogAzSpeech_Internal: Display: Thread: AzSpeech_SpeechToText_; Function: Run; Message: Running runnable thread work
LogAzSpeech_Internal: Display: Thread: AzSpeech_SpeechToText_; Function: InitializeAzureObject; Message: Initializing Azure Object
LogAzSpeech_Internal: Display: Thread: AzSpeech_SpeechToText_; Function: InitializeAzureObject; Message: Creating recognizer object
LogAzSpeech_Internal: Display: Thread: AzSpeech_SpeechToText_; Function: CreateSpeechConfig; Message: Creating Azure SDK speech config
LogAzSpeech_Internal: Display: RegisterAzSpeechTask: Registering task SpeechToText (133390) in AzSpeech Engine Subsystem.
LogAzSpeech_Internal: Display: Thread: AzSpeech_SpeechToText_; Function: ApplySDKSettings; Message: Applying Azure SDK Settings
LogAzSpeech_Internal: Display: Thread: AzSpeech_SpeechToText_1; Function: EnableLogInConfiguration; Message: Enabling Azure SDK log
LogAzSpeech_Internal: Display: Thread: AzSpeech_SpeechToText_; Function: InsertProfanityFilterProperty; Message: Adding profanity filter property
LogAzSpeech_Internal: Display: Thread: AzSpeech_SpeechToText_; Function: InsertLanguageIdentificationProperty; Message: Adding language identification property
LogAzSpeech_Internal: Display: Thread: AzSpeech_SpeechToText_; Function: GetCandidateLanguages; Message: Getting candidate languages
LogAzSpeech_Internal: Display: Thread: AzSpeech_SpeechToText_; Function: GetCandidateLanguages; Message: Using language es-ES as candidate
LogAzSpeech_Internal: Display: Thread: AzSpeech_SpeechToText_; Function: GetCandidateLanguages; Message: Using language de-DE as candidate
LogAzSpeech_Internal: Display: Thread: AzSpeech_SpeechToText_133390; Function: GetCandidateLanguages; Message: Using language fr-FR as candidate
LogAzSpeech_Internal: Display: Thread: AzSpeech_SpeechToText_; Function: GetCandidateLanguages; Message: Using language el-GR as candidate
LogAzSpeech_Internal: Display: Thread: AzSpeech_SpeechToText_133390; Function: GetCandidateLanguages; Message: Using language it-IT as candidate
LogAzSpeech_Internal: Display: Thread: AzSpeech_SpeechToText_; Function: GetCandidateLanguages; Message: Using language ja-JP as candidate
LogAzSpeech_Internal: Display: Thread: AzSpeech_SpeechToText_133390; Function: GetCandidateLanguages; Message: Using language ko-KR as candidate
LogAzSpeech_Internal: Display: Thread: AzSpeech_SpeechToText_; Function: GetCandidateLanguages; Message: Using language zh-CN as candidate
LogAzSpeech_Internal: Display: Thread: AzSpeech_SpeechToText_133390; Function: GetCandidateLanguages; Message: Using language pt-BR as candidate
LogAzSpeech_Internal: Display: Thread: AzSpeech_SpeechToText_; Function: Run; Message: Starting recognition
LogAzSpeech_Internal: Display: Thread: AzSpeech_SpeechToText_; Function: Run; Message: Recognition started.
LogAzSpeech: Display: Task: SpeechToText (); Function: StopAzSpeechTask; Message: Stopping task
LogAzSpeech_Internal: Display: Thread: AzSpeech_SpeechToText_133390; Function: StopAzSpeechRunnableTask; Message: Setting runnable work as pending stop
LogAzSpeech_Internal: Display: UnregisterAzSpeechTask: Unregistering task SpeechToText () from AzSpeech Engine Subsystem.
LogAzSpeech: Display: Task: SpeechToText (); Function: SetReadyToDestroy; Message: Setting task as Ready to Destroy
LogAzSpeech_Internal: Display: Thread: AzSpeech_SpeechToText_; Function: StopAzSpeechRunnableTask; Message: Setting runnable work as pending stop
LogAzSpeech_Internal: Display: Thread: AzSpeech_SpeechToText_; Function: Stop; Message: Stopping runnable thread work
LogAzSpeech_Internal: Display: Thread: AzSpeech_SpeechToText_; Function: Exit; Message: Exiting thread
LogAzSpeech_Internal: Display: Thread: AzSpeech_SpeechToText_; Function: ~FAzSpeechRunnableBase; Message: Destructing runnable thread

I hope someone can help me with these two problems; I need to solve them for an important project. Thank you all in advance! (And sorry for my bad English.)

Any chance of getting the latest GPT-4 Turbo 1106 and GPT-3.5 Turbo 1106 models that came out today into this plugin? :slight_smile:

I seem to only have these options in my project plugin settings?

On PC. Am I missing something? I really want to add my API key, but I can’t?

Also, you can upload images and get text responses, right? I’m trying to have it describe the scene to an agent…
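I don’t know whether this plugin exposes image input, but over raw HTTP the vision-capable chat models accept a user message whose `content` is a list mixing text parts and image parts. A rough sketch of that payload shape (the model name is a placeholder, and the image bytes here are dummy data):

```python
import base64

def image_description_payload(image_bytes: bytes, prompt: str) -> dict:
    """Build a chat-completions body asking the model to describe an image."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "gpt-4-vision-preview",  # placeholder vision-capable model
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                # Images can be passed inline as a base64 data URL.
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
        "max_tokens": 512,
    }

payload = image_description_payload(b"\x89PNG...", "Describe this scene for an agent.")
```

If the plugin only supports string message content, this nested-list shape would be the thing it needs to add for scene description to work.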

Thank you so much for making this plugin, it’s super helpful!
I wonder if we could add a feature to point it specifically at the Assistants API, where we can create assistants in OpenAI?
https://platform.openai.com/docs/api-reference/assistants/createAssistant
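Going by that API reference, supporting this would mean the plugin sending a request roughly shaped like the sketch below: a different endpoint plus the beta opt-in header. The name and instructions here are purely illustrative:

```python
# Hypothetical shape of a "create assistant" request, per the linked reference.
create_assistant_request = {
    "url": "https://api.openai.com/v1/assistants",
    "headers": {
        "Authorization": "Bearer <YOUR-OPENAI-KEY>",
        "Content-Type": "application/json",
        # Beta opt-in header the Assistants API required when it launched.
        "OpenAI-Beta": "assistants=v1",
    },
    "body": {
        "model": "gpt-4-1106-preview",
        "name": "Scene Narrator",               # illustrative values
        "instructions": "Narrate the scene to the player.",
    },
}
```

The main plugin-side work would presumably be managing the extra objects the Assistants API introduces (threads and runs) rather than the single request/response flow chat completions uses.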

@lucoiso Will it work with the Gemini API too?

Hey, it can only be installed on 5.4, but when I enable it, it says the plugin was built for 5.3 and the project doesn’t open. I can’t install the plugin on 5.3 either, so it’s not usable at the moment. Please help.

It’s still not working. Any updates on this?

I never did share the outcome of my project back in 5.3 using this plugin.

Here it is. ChatGPT narrates, totally hands-free and dynamic.