Community Tutorial: Ollama AI Integration in Unreal Engine - Local Chat Tutorial

This tutorial covers how to integrate Ollama into your Unreal Engine projects using the Runtime AI Chatbot Integrator plugin. Unlike cloud-based AI providers, Ollama runs entirely on your local machine and requires no API token or internet connection. You’ll learn how to discover installed models at runtime, send standard chat requests for complete responses, and use streaming requests to deliver text chunk by chunk as the model generates it. The tutorial also covers error handling and request cancellation. All workflows are shown in Blueprint. Ollama supports a wide range of open-source models including Gemma, Llama, Mistral, and more, making it a flexible and cost-free option for AI-driven NPC dialogue and other in-game chat features.
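The tutorial itself works entirely in Blueprint through the plugin, but the concepts map onto Ollama's plain local HTTP API (default port 11434): `/api/tags` lists installed models, and `/api/chat` accepts a JSON body whose `stream` flag switches between one complete response and newline-delimited JSON chunks. As a rough illustration of those request and response shapes (the endpoint paths and field names are Ollama's; the helper functions and the `llama3` model name are assumptions for this sketch, not part of the plugin):

```python
import json

# Assumption: Ollama's default local endpoint; no API token is needed
# because the server runs on your own machine.
OLLAMA_URL = "http://localhost:11434"

def build_chat_payload(model, messages, stream=False):
    """Build the JSON body for Ollama's /api/chat endpoint.

    stream=False asks for one complete response; stream=True makes the
    server send the reply chunk by chunk as the model generates it.
    """
    return {"model": model, "messages": messages, "stream": stream}

def parse_stream_line(line):
    """Parse one line of a streaming /api/chat response.

    Streaming replies arrive as newline-delimited JSON objects; each
    carries a partial message, and the final object has "done": true.
    Returns (text_chunk, done).
    """
    obj = json.loads(line)
    return obj.get("message", {}).get("content", ""), obj.get("done", False)

# Example payload for a streaming chat request (model name is illustrative;
# use whatever `ollama list` reports on your machine).
payload = build_chat_payload(
    "llama3",
    [{"role": "user", "content": "Greet the player in one sentence."}],
    stream=True,
)
print(json.dumps(payload))

# Parsing one sample streamed chunk:
chunk, done = parse_stream_line('{"message":{"content":"Hello"},"done":false}')
print(chunk, done)
```

This is only a sketch of the wire format; in a project you would use the plugin's Blueprint nodes, which handle the HTTP calls, chunk delivery, error handling, and cancellation for you.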

https://dev.epicgames.com/community/learning/tutorials/X7KV/fab-ollama-ai-integration-in-unreal-engine-local-chat-tutorial