Integrating a customized GPT (under MyGPTs) into an Unreal Engine 5 VR project for real-time interaction can be broken down into several key components:
**1. Connecting GPT with Unreal Engine**

**REST API approach:**
OpenAI provides an API for GPT models. You can send requests from Unreal Engine using:
- UE5’s HTTP module (FHttpModule) to send and receive data from C++ (see the sketch after this list).
- Blueprints (WebSocket or HTTP request nodes) if you prefer a no-code approach.
- The Python plugin for UE to communicate with the OpenAI API.
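As a minimal sketch of the FHttpModule route, the snippet below posts the recognized microphone text to OpenAI’s Chat Completions endpoint and pulls the assistant reply out of the JSON response. Note that `UMyGptComponent`, the `ApiKey` member, the model name, and the `OnGptReply` hook (where you would hand the text to your TTS flow) are placeholder assumptions, not part of any existing plugin:

```cpp
// Requires the "HTTP" and "Json" modules in your Build.cs dependencies.
#include "HttpModule.h"
#include "Interfaces/IHttpRequest.h"
#include "Interfaces/IHttpResponse.h"
#include "Dom/JsonObject.h"
#include "Serialization/JsonReader.h"
#include "Serialization/JsonWriter.h"
#include "Serialization/JsonSerializer.h"

void UMyGptComponent::SendToGpt(const FString& UserText)
{
    TSharedRef<IHttpRequest, ESPMode::ThreadSafe> Request =
        FHttpModule::Get().CreateRequest();
    Request->SetURL(TEXT("https://api.openai.com/v1/chat/completions"));
    Request->SetVerb(TEXT("POST"));
    Request->SetHeader(TEXT("Content-Type"), TEXT("application/json"));
    Request->SetHeader(TEXT("Authorization"),
                       FString::Printf(TEXT("Bearer %s"), *ApiKey));

    // Body: {"model": "...", "messages": [{"role": "user", "content": UserText}]}
    TSharedPtr<FJsonObject> Body = MakeShared<FJsonObject>();
    Body->SetStringField(TEXT("model"), TEXT("gpt-4o-mini")); // placeholder model name
    TSharedPtr<FJsonObject> Msg = MakeShared<FJsonObject>();
    Msg->SetStringField(TEXT("role"), TEXT("user"));
    Msg->SetStringField(TEXT("content"), UserText);
    TArray<TSharedPtr<FJsonValue>> Messages;
    Messages.Add(MakeShared<FJsonValueObject>(Msg));
    Body->SetArrayField(TEXT("messages"), Messages);

    FString Payload;
    TSharedRef<TJsonWriter<>> Writer = TJsonWriterFactory<>::Create(&Payload);
    FJsonSerializer::Serialize(Body.ToSharedRef(), Writer);
    Request->SetContentAsString(Payload);

    // The completion delegate fires on the game thread by default;
    // in production, guard 'this' with a weak pointer.
    Request->OnProcessRequestComplete().BindLambda(
        [this](FHttpRequestPtr Req, FHttpResponsePtr Res, bool bConnectedSuccessfully)
        {
            if (!bConnectedSuccessfully || !Res.IsValid()) { return; }
            TSharedPtr<FJsonObject> Json;
            TSharedRef<TJsonReader<>> Reader =
                TJsonReaderFactory<>::Create(Res->GetContentAsString());
            const TArray<TSharedPtr<FJsonValue>>* Choices = nullptr;
            if (FJsonSerializer::Deserialize(Reader, Json) && Json.IsValid() &&
                Json->TryGetArrayField(TEXT("choices"), Choices) && Choices->Num() > 0)
            {
                // choices[0].message.content holds the generated reply
                const FString Reply = (*Choices)[0]->AsObject()
                    ->GetObjectField(TEXT("message"))
                    ->GetStringField(TEXT("content"));
                OnGptReply(Reply); // hypothetical hook: feed the text into your TTS flow
            }
        });
    Request->ProcessRequest();
}
```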
**Local Model (Optional):**
If you want to run a local LLM (e.g., Llama, Mistral) instead, consider:
- Running Ollama locally and using Unreal’s Python scripting (or the same HTTP module) for communication; see the sketch below.
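For the local route, the bullet above suggests Python scripting, but a sketch reusing the same C++ HTTP flow keeps it consistent with the example above. Ollama exposes an HTTP API on http://localhost:11434 by default, so no API key is needed; the model name is whatever you have pulled locally, and `OnGptReply` is the same hypothetical hook as before:

```cpp
// Same includes and Build.cs dependencies as the OpenAI example above.
void UMyGptComponent::SendToOllama(const FString& UserText)
{
    TSharedRef<IHttpRequest, ESPMode::ThreadSafe> Request =
        FHttpModule::Get().CreateRequest();
    Request->SetURL(TEXT("http://localhost:11434/api/generate"));
    Request->SetVerb(TEXT("POST"));
    Request->SetHeader(TEXT("Content-Type"), TEXT("application/json"));

    TSharedPtr<FJsonObject> Body = MakeShared<FJsonObject>();
    Body->SetStringField(TEXT("model"), TEXT("llama3")); // any model fetched with `ollama pull`
    Body->SetStringField(TEXT("prompt"), UserText);
    Body->SetBoolField(TEXT("stream"), false);           // one complete JSON reply, no streaming

    FString Payload;
    TSharedRef<TJsonWriter<>> Writer = TJsonWriterFactory<>::Create(&Payload);
    FJsonSerializer::Serialize(Body.ToSharedRef(), Writer);
    Request->SetContentAsString(Payload);

    Request->OnProcessRequestComplete().BindLambda(
        [this](FHttpRequestPtr Req, FHttpResponsePtr Res, bool bConnectedSuccessfully)
        {
            if (!bConnectedSuccessfully || !Res.IsValid()) { return; }
            TSharedPtr<FJsonObject> Json;
            TSharedRef<TJsonReader<>> Reader =
                TJsonReaderFactory<>::Create(Res->GetContentAsString());
            if (FJsonSerializer::Deserialize(Reader, Json) && Json.IsValid())
            {
                // With "stream": false, Ollama returns the full text in the "response" field
                OnGptReply(Json->GetStringField(TEXT("response"))); // same hypothetical hook
            }
        });
    Request->ProcessRequest();
}
```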
Hi! I followed the tutorial on MetaHuman AI lip-sync (MetaHuman AI Lip Sync with Local Text-to-Speech in UE 5.6+ | Community tutorial) with local text-to-speech in UE 5.6. My current setup can capture microphone input and display it as text, and the character can speak from a text field. What I’m missing is connecting this text to a GPT model to generate dynamic responses. Could you guide me on best practices for integrating a custom GPT (local or cloud) so the generated text feeds directly into the MetaHuman TTS workflow in UE5 VR?
Can you show how to take the text converted from the VR microphone and generate a response using any GPT, local or API-based?
Thank you!