Two2StarsInk - Unreal AI Copilot

Your AI-powered development assistant — right inside the Unreal Editor.

Chat with an AI that can read, create, and modify Blueprints, materials, assets, actors, levels, lighting, procedural geometry, environment volumes, and splines. It sees your viewport, knows what you've selected, controls the editor (PIE, undo/redo, console commands), and executes multi-step tasks autonomously — all from a familiar dockable chat panel.

Version: 1.5.1  |  Engine: Unreal Engine 5.5+  |  Platforms: Windows 64-bit (Mac/Linux supported)

Setup tutorial video: Unreal AI Copilot Setup Tutorial
More info and guides at Unreal AI Copilot homepage

Why Unreal AI Copilot?

Building in Unreal Engine means juggling dozens of editor workflows — Blueprints, materials, actor placement, level setup, lighting, property tweaks. Every task requires navigating menus, panels, and details views.

Unreal AI Copilot lets you describe what you want in plain English and watch it happen. The AI understands your project structure, sees your viewport, and uses 23 specialized editor tools with ~177 sub-actions to carry out complex multi-step operations — from wiring Blueprint graphs to authoring materials to building procedural geometry to configuring environment volumes.

It's not a code generator that dumps text into your clipboard. It's an autonomous agent that operates directly inside the editor, plans its approach, executes step by step, and reports back.

Talk to Your Editor

Type natural language in the chat panel. The AI streams its response in real-time and takes action using built-in editor tools. No copy-pasting, no manual steps.

Plan → Execute Architecture

Complex requests are automatically broken into discrete tasks. The AI first discovers your project state with read-only tools, creates a plan, then executes each task sequentially — with full visibility into progress.

Requirements

  • Unreal Engine 5.5 or later

  • An API key for at least one LLM provider (or a running local model server)

  • Windows 64-bit (Mac and Linux are supported but less tested)

Quick Start

1. Install the Plugin

Install from the Fab Marketplace. The plugin appears at Plugins/UnrealAICopilot/ and is enabled by default.

2. Set Your API Key

API keys are read from environment variables — they are never written to project config files.

Windows (recommended — persists across reboots):

setx OPENAI_API_KEY "sk-..."

Then restart the editor for the variable to take effect.

Windows (GUI method):

  1. Press Win + R, type sysdm.cpl, press Enter.

  2. Go to Advanced → Environment Variables…

  3. Under User variables, click New…

  4. Variable name: OPENAI_API_KEY — Variable value: sk-...

  5. Click OK, restart the editor.

Linux / macOS:

# Add to ~/.bashrc, ~/.zshrc, or ~/.profile:
export OPENAI_API_KEY="sk-..."

Restart your terminal and editor.
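Before launching the editor, you can confirm the variable is actually visible to newly started processes. Here is a minimal cross-platform sketch; the "sk-" prefix check is a heuristic of ours, not a plugin requirement — the plugin only needs the variable to be set:

```python
import os

def check_api_key(env):
    """Rough check that an OpenAI-style key is visible in the environment.
    The 'sk-' prefix test is a heuristic, not a plugin requirement."""
    key = env.get("OPENAI_API_KEY", "")
    if not key:
        return "missing"
    return "ok" if key.startswith("sk-") else "unexpected format"

print(check_api_key(os.environ))
```

If this prints "missing" in a fresh terminal, the editor will not see the key either — re-run the setx/export step and open a new shell before starting Unreal.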

3. Open the Chat Panel
  • Menu: Tools → AI Copilot

  • Toolbar: Click the AI Copilot button in the main editor toolbar

4. Start Chatting

Type a message and press Enter. The AI responds in real-time.

Configuration

All settings are accessible via Project Settings → Plugins → AI Copilot.

API Key Security

API keys are never written to project config files. They are read at runtime from the environment variable specified in settings. This prevents accidental commits to source control.

Usage Guide

Basic Chat
  1. Open the chat panel (Tools → AI Copilot, or the toolbar button).

  2. Type a message and press Enter (Shift+Enter for newline).

  3. The AI responds in real-time with streaming text.

  4. Click Clear to reset the conversation.

  5. Click Cancel to abort an in-flight request.

Pinned Assets

Right-click any asset in the Content Browser and select "Add to AI Context". Pinned assets appear as chips above the chat input, giving the AI persistent awareness of the assets you're working with.

Viewport Screenshots

If Auto Include Viewport is enabled, a screenshot is attached to every message automatically. Otherwise, the AI captures the viewport on demand when it needs visual context.

Requires a vision-capable model (e.g., GPT-4o, Claude Sonnet 4).

Conversation History

Click the history toggle button to open the side panel. Load or delete saved conversations. History is stored locally in Saved/AICopilot/Conversations/.

I can’t seem to get it to work with GitHub Copilot. I’ve gone through the steps and tried everything I can think of, but I just get a 400 “bad header” error message.

Hello!

That error usually means the API key isn’t being picked up by the environment. Make sure you’ve followed the setup guide to add your key correctly — the documentation includes step‑by‑step instructions here: Documentation — Unreal AI Copilot

There’s also a first‑time setup video on the same page that walks through the entire process, which can help confirm everything is configured properly.

If you still run into issues after that, feel free to share more details and I’ll help you get it sorted.

I am getting an issue with OpenAI: "LLM Error: Failed to parse OpenAI response JSON." None of the models are working, and some return:

  "message": "Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.",
  "type": "invalid_request_error",
  "param": "max_tokens",
  "code": "unsupported_parameter"

Hello, I’ll look into it and get back to you as soon as possible. Thank you for your feedback!

1.5.1

Latest - 2026-04-15

Fixed

  • max_tokens unsupported parameter error — Newer OpenAI models (GPT-4o, GPT-4 Turbo, o-series, GPT-5) reject the max_tokens parameter and require max_completion_tokens instead. The provider now detects the model and sends the correct parameter name. Legacy models (GPT-3.5, base GPT-4) continue to use max_tokens.

  • Reasoning models sending unsupported temperature — o1, o3, and o4 reasoning models do not support the temperature parameter. It is now omitted for these models.

  • HTTP 404 for Responses API models — Models like gpt-5.3-codex and gpt-5.2-codex require the /v1/responses endpoint instead of /v1/chat/completions. The provider now auto-detects the required API type and routes requests to the correct endpoint.

  • “Failed to parse OpenAI response JSON” with GPT-5.4 — GPT-5.4 (and potentially other newer models) return SSE streaming chunks (data: {...}) even when stream: false is explicitly set. The non-streaming response parser now detects SSE format and automatically reassembles the chunks into a complete response, including content concatenation, streamed tool call accumulation (by index), usage extraction, and finish reason capture.

  • stream field not sent when false — The request body only included "stream": true when streaming was enabled but never explicitly sent "stream": false. The field is now always set, to prevent models from defaulting to streaming.
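The parameter fixes above can be sketched roughly as follows. The prefix tables are illustrative guesses based on the model families named in this changelog, not the plugin's actual lookup tables:

```python
# Model families that require max_completion_tokens (per the changelog).
NEW_TOKEN_PARAM_PREFIXES = ("gpt-4o", "gpt-4-turbo", "o1", "o3", "o4", "gpt-5")
# Reasoning families that reject the temperature parameter.
NO_TEMPERATURE_PREFIXES = ("o1", "o3", "o4")

def build_request_params(model_id, limit, temperature):
    """Assemble token-limit and temperature fields for a chat request."""
    params = {"stream": False}  # always sent explicitly, even when false
    if model_id.startswith(NEW_TOKEN_PARAM_PREFIXES):
        params["max_completion_tokens"] = limit
    else:
        params["max_tokens"] = limit
    if not model_id.startswith(NO_TEMPERATURE_PREFIXES):
        params["temperature"] = temperature
    return params
```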

Added

  • Responses API (/v1/responses) support — Full request/response handling for OpenAI models that use the Responses API. Builds the input/instructions/function_call_output request format and parses the output array response format with message and function_call items.

  • SSE response reassembly — When the API returns streaming chunks despite stream: false, the provider automatically detects the data: prefix and reassembles all chunks into a single FAICopilotLLMResponse. Handles incremental content deltas, streamed tool calls with index-based accumulation, usage stats, and finish reason.

  • Model API type detection — New EOpenAIApiType enum (ChatCompletions, Responses, Unsupported) with automatic routing based on model ID.

  • Clear error for completions-only models — Non-chat models (e.g. gpt-5-codex, davinci) now return a descriptive error message suggesting chat-capable alternatives instead of a cryptic HTTP 404.

  • GPT-5 context window support — GPT-5 family models mapped to 200K token context window. Vision support extended to GPT-5 models.

  • Response body diagnostic logging — Parse failure error messages now include body length and the first 500 characters. Verbose-level logging shows full response details for all requests.
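The SSE reassembly described above can be illustrated with a minimal parser. Field names follow OpenAI's public chat-completions chunk format; the logic is a simplified sketch of what the provider does (content concatenation, usage extraction, and finish-reason capture only — index-based tool-call accumulation is omitted for brevity):

```python
import json

def reassemble_sse(body):
    """Merge 'data: {...}' streaming chunks into a single response dict."""
    content, finish_reason, usage = [], None, None
    for line in body.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank lines and SSE comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        if chunk.get("usage"):
            usage = chunk["usage"]
        for choice in chunk.get("choices", []):
            delta = choice.get("delta", {})
            if delta.get("content"):
                content.append(delta["content"])
            if choice.get("finish_reason"):
                finish_reason = choice["finish_reason"]
    return {"content": "".join(content),
            "finish_reason": finish_reason,
            "usage": usage}
```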