The Problem
If you’ve tried using Claude Code, Cursor, or Gemini CLI on a game project, you’ve probably seen this: the AI reads your files one at a time, can’t follow .uasset or Blueprint references, and eventually hallucinates a dependency that doesn’t exist.
I watched Claude spend 40+ messages trying to figure out which classes my CombatManager actually affected. It was basically reading files alphabetically and guessing. Meanwhile I’m sitting there thinking “I could have just grep’d this faster.”
The real pain? Even when the AI finally gives you an answer, you can’t trust it. “CombatCore probably depends on PlayerManager…” — that “probably” cost me an afternoon of debugging.
Why I Built gdep
So I built gdep (Game DEPendency analyzer). It’s a CLI tool, MCP server, and web UI that scans your entire UE5/C++ project in under 0.5 seconds and gives your AI assistant a structural map of everything — class dependencies, call flows across C++→Blueprint boundaries, GAS ability chains, animator states, and unused assets.
Think of it as giving your AI a reconnaissance drone and a tactical map, instead of making it open doors one at a time.
Real-World Comparison: Same Question, Same Project
I tested both approaches on the same Lyra-based UE5 project:
Prompt: “Analyze this project and see how GAS is being used and Blueprint for yourself.”
Without gdep (2 min 10 sec):
- AI launched 2 Explore agents, used 56 tool calls reading files one by one
- Took 2 minutes 10 seconds
- Result: generic overview — “45+ C++ files dedicated to GAS”, vague categorization
- Blueprint analysis: just counted assets by folder (“6 Characters, 5 Game Systems, 13 Tools…”)
- No confidence rating, no asset coverage metrics
With gdep MCP (56 sec):
- AI made 3 MCP calls — `get_project_context`, then `analyze_ue5_gas` + `analyze_ue5_blueprint_mapping` in parallel
- Took 56 seconds (2.3x faster)
- Result: structured analysis with confidence headers — `Confidence: HIGH | 3910/3910 assets scanned (100%)`
- Every ability listed with its role; 35 GA Blueprints + 40 GE Blueprints + 20 AnimBlueprints mapped
- Tag distribution breakdown: Ability.* (30), GameplayCue.* (24), Gameplay.* (7)
- Blueprint→C++ parent mapping with K2 override counts per Blueprint
- Identified project-specific additions (zombie system) vs Lyra base automatically
Same AI, same project, same question. The difference is gdep gives the AI structured tools instead of making it grep through files.
What It Actually Does
Here’s what it answers in seconds:
- “What breaks if I change this class?” — Full impact analysis with reverse-trace across the project. Every result comes with a confidence rating (HIGH/MEDIUM/LOW) so you know what to trust.
- “Where is this ability actually called?” — Call flow tracing that crosses C++→Blueprint boundaries (UE5).
- “Are there assets nobody references?” — Unused asset detection via UE5 binary path scanning.
- “What’s the code smell here?” — 19 engine-specific lint rules. Things like `GetComponent` in `Update()`, `SpawnActor` in `Tick()`, missing `CommitAbility()` in GAS abilities.
- “Give my AI context about the project” — `gdep init` generates an `AGENTS.md` file that any MCP-compatible AI reads automatically on startup.
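The impact-analysis question above (“what breaks if I change this class?”) boils down to a reverse walk over the dependency graph. This is not gdep’s actual implementation, just a minimal Python sketch of the idea with hypothetical class names:

```python
from collections import defaultdict, deque

def impacted_by(deps: dict, target: str) -> set:
    """Reverse-trace: find every class that transitively depends on target.

    deps maps each class to the set of classes it references.
    """
    # Invert the edges so we can ask "who depends on me?" instead.
    rdeps = defaultdict(set)
    for src, referenced in deps.items():
        for t in referenced:
            rdeps[t].add(src)

    impacted, queue = set(), deque([target])
    while queue:
        node = queue.popleft()
        for dependent in rdeps[node]:
            if dependent not in impacted:
                impacted.add(dependent)
                queue.append(dependent)
    return impacted

# Hypothetical project: who breaks if CombatManager changes?
deps = {
    "PlayerController": {"CombatManager"},
    "CombatCore":       {"CombatManager", "PlayerManager"},
    "HUDWidget":        {"PlayerController"},
    "CombatManager":    set(),
}
print(sorted(impacted_by(deps, "CombatManager")))
# → ['CombatCore', 'HUDWidget', 'PlayerController']
```

The hard part gdep actually solves is building `deps` in the first place, across C++ headers and binary `.uasset` references; the traversal itself is the easy bit.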
It works as:
- 26 MCP tools for Claude Desktop, Cursor, Windsurf, or any MCP-compatible agent — `npm install -g gdep-mcp`, add one JSON config, done.
- 17 CLI commands for terminal use
- Web UI with 6 interactive tabs — class browser with inheritance chains, animated flow graph visualization, architecture health dashboard, engine-specific explorers (GAS, BehaviorTree, StateTree, Animator, Blueprint mapping), live file watcher, and an AI chat agent that calls tools against your actual code.
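For the MCP route, the “one JSON config” is a standard MCP server entry. A sketch of what that might look like in Claude Desktop’s `claude_desktop_config.json` (the exact command name is an assumption here; check the gdep-mcp README for the canonical entry):

```json
{
  "mcpServers": {
    "gdep": {
      "command": "gdep-mcp",
      "args": []
    }
  }
}
```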
Measured performance:
- UE5: 0.46 seconds on a 2,800+ asset project (warm scan)
- Unity: 0.49 seconds on 900+ classes
- Peak memory: 28.5 MB
What gdep Is NOT
I want to be upfront about this:
- It’s not a magic wand. AI still can’t do everything, even with a full project map.
- It’s not an engine editor replacement. It gives AI a map and a recon drone — it doesn’t replace your IDE, your debugger, or your brain.
- It has confidence tiers for a reason. Binary asset scanning (like UE5 `.uasset` files) is MEDIUM confidence. Source code analysis is HIGH. gdep tells you this on every single result so you know when to double-check.
- Delegating ALL your work to AI is still not appropriate. gdep helps AI understand most of the project, but “most” is not “all.” You still need to review, test, and think.
Privacy & Links
100% local. Everything runs on your machine. No telemetry, no cloud calls, no accounts, no analytics. If you’re suspicious, scan the entire codebase yourself — honestly there’s nothing to steal, and I really don’t want to go to jail.
Apache 2.0 — fully open source and free for commercial use.
- GitHub: pirua-game/gdep — Game Codebase Analysis AI Agentic Tool
- Install: `pip install gdep` (CLI) / `npm install -g gdep-mcp` (MCP server)
- Supported engines: Unity (C#) · UE5 (C++) · Axmol/Cocos2d-x (C++) · .NET · Generic C++
This tool has been genuinely useful for me, and I hope it helps other game developers who are trying to make AI coding assistants actually work with game projects. Would love to hear your feedback — issues, PRs, and honest criticism are all welcome.
If you want to see it in action, check the README: the Web UI has an interactive flow graph and class browser.
If it’s interesting, feel free to use it.
And if gdep seems useful, please give it a star on GitHub.
Thank you.