Introducing Personica AI: A Cognitive NPC Brain for Unreal Engine


Product Website | Documentation | Community Discord

**SPECIAL OFFER:** Anyone who purchases Personica AI within 30 days of its publishing date (January 3rd) will receive a free, lifetime upgrade to the Pro version of Personica upon its release.

Hi everyone! I’m Chris from SwampRabbit Labs, and I just released Personica AI on Fab.

Personica is a C++ genAI Brain component for Unreal Engine focused on one goal:
using LLM language processing to analyze and update game worlds, not just generate dialog.

This project came out of my own frustration experimenting with generative AI in games: so many products focus on generating assets or characters rather than making the work of creatives and designers easier. Most solutions I tried were great at talking, but fell apart when it came to:

  • Updating character behavior beyond dialog

  • Turning what characters think into actions that characters do

  • Interpreting new events and information in the game world

  • Multiplayer safety and determinism

So I built Personica as a hybrid system:

  • Utility AI handles decisions and actions

  • LLMs provide reasoning, dialogue, and interpretation

  • A ranked memory system decides what NPCs remember (and forget)

  • Built-in LODs and request gating keep things performant and predictable
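To make the hybrid split concrete, here is a minimal, engine-agnostic sketch of how a ranked memory store and a utility-AI action picker might interact. All names here are hypothetical illustrations, not Personica's actual API:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical memory entry: text plus a salience score used for ranking.
struct Memory {
    std::string text;
    float salience;  // higher = more important to keep
};

// Keep only the top `capacity` memories; lower-ranked ones are "forgotten".
void PruneMemories(std::vector<Memory>& memories, size_t capacity) {
    std::sort(memories.begin(), memories.end(),
              [](const Memory& a, const Memory& b) { return a.salience > b.salience; });
    if (memories.size() > capacity) memories.resize(capacity);
}

// Utility AI: each candidate action carries a score; the highest wins,
// deterministically, regardless of what the LLM said in dialogue.
struct Action {
    std::string name;
    float score;
};

std::string ChooseAction(const std::vector<Action>& actions) {
    const Action* best = nullptr;
    for (const auto& a : actions)
        if (!best || a.score > best->score) best = &a;
    return best ? best->name : "idle";
}
```

The key property: the LLM can influence scores and memory salience, but the final decision path is plain, inspectable code.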


What Personica Is & Isn’t

Personica is:

  • A gameplay-first AI architecture that keeps the LLM in the background for seamless, dynamic gameplay instead of showing off AI gimmicks.

  • A “referee” for games: the LLM suggests updates, but Personica lets game developers control what actually reaches the game world.

  • Designed for RPGs, sims, immersive worlds, and systemic NPCs.

  • Built to work with designers, writers, and existing AI systems.

Personica is not:

  • A replacement for writers or voice actors.

  • A “ChatGPT wrapper,” pure character generator, or text-only plugin.

  • A black box that takes control away from your game logic.

The AI doesn’t “run the game” or overwrite your carefully-crafted storylines.
Instead, it interprets, remembers, suggests, and triggers explicit gameplay actions you define.
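The “referee” pattern can be illustrated with a small, hypothetical sketch: the LLM proposes a trait change, and developer-defined rules decide whether (and how much of) it gets applied. None of these names come from Personica's actual API:

```cpp
#include <algorithm>
#include <map>
#include <string>

// Hypothetical LLM suggestion: "change trait X by delta Y".
struct TraitSuggestion {
    std::string trait;
    float delta;
};

// Developer-defined rules: which traits the LLM may touch, and how far
// a single suggestion may move them.
const std::map<std::string, float> kMaxDeltaPerTrait = {
    {"Trust", 0.2f},
    {"Aggression", 0.1f},
    // "QuestComplete" is deliberately absent: the LLM may never set it.
};

// The "referee": reject non-whitelisted traits, clamp oversized deltas,
// then apply the change to the authoritative game state.
bool ApplySuggestion(std::map<std::string, float>& traits, TraitSuggestion s) {
    auto rule = kMaxDeltaPerTrait.find(s.trait);
    if (rule == kMaxDeltaPerTrait.end()) return false;  // not allowed
    float clamped = std::clamp(s.delta, -rule->second, rule->second);
    traits[s.trait] = std::clamp(traits[s.trait] + clamped, 0.0f, 1.0f);
    return true;
}
```

However wild the LLM's suggestion, game state only ever moves within the bounds the developer defined.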

For more information, check out the Documentation link above.

Use Case Examples

  • Guard NPC behavior evolves over time
    • A town guard becomes more suspicious and aggressive toward the player after repeated nighttime trespassing, even if the dialogue stays polite.
  • Faction trust changes without dialog
    • Helping a rival faction silently lowers an NPC merchant’s prices for allies and raises them for enemies, without the NPC ever explaining why.
  • Quest outcomes alter NPC personality
    • Sparing an enemy causes them to become fearful and evasive later, while killing their ally makes them hostile on sight.
  • International relations analysis in grand strategy games
    • After repeated border skirmishes and broken treaties, Personica mutates the diplomatic “Trust” state between two nations, causing future negotiations to start hostile even if the player offers generous terms.
  • Game systems that learn from previous events
    • The game world’s governing council “remembers” which policies historically stabilized the economy and begins favoring similar decisions autonomously.
  • NPC memory affects future gameplay options
    • An NPC refuses to help open a locked gate because they remember you previously betraying them during a side quest.
  • Dynamic quest gating without branching trees
    • A quest becomes unavailable because the NPC’s trust never reached the required threshold, not because the player chose a “wrong” option.
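The last use case, quest gating on accumulated state rather than branching trees, can be sketched in a few lines. Again, a hypothetical illustration rather than Personica's real types:

```cpp
#include <string>
#include <vector>

// Hypothetical quest gate: availability depends on an NPC trait threshold,
// not on which dialog branch the player happened to pick.
struct Quest {
    std::string name;
    float requiredTrust;
};

std::vector<std::string> AvailableQuests(const std::vector<Quest>& quests,
                                         float npcTrust) {
    std::vector<std::string> out;
    for (const auto& q : quests)
        if (npcTrust >= q.requiredTrust) out.push_back(q.name);
    return out;
}
```

Trust itself would be mutated over time by the memory and referee systems, so the gate emerges from play history instead of a hand-authored tree.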

“But I can do all that already without Personica!”

Yes, and you should continue! Personica is designed to work alongside hand-scripted game design, not replace it. The plugin can take over the tedious algorithm and branching-tree construction required in traditional game design. Focus your time and energy on building the main storylines, tense action beats, and key systems, and spend less time tweaking algorithms and trigger rules for minor functionality a player may never see.

Building a complex game world requires both wide and deep design. Personica takes over the “width” of game design so you can focus on the “depth.”

An example of a prompt sent to an LLM mid-conversation, with trait updates, conversation history, and memory context. Profiles and prompts can be customized to include more or less information.

Current State

  • v0.9.1 is live on Fab

  • A working demo showing dialog + utility actions using a local LLM is available for download

  • Designed to be extensible and modular (use only the parts of Personica you need)

This is an early release, but it’s already functional and actively being refined. My goal right now is real-world feedback from Unreal developers to ensure that I am building LLM tools that are useful, practical, and scalable.

I’m very open to feedback, positive or critical, and happy to answer technical questions about how the system works!

Thanks for taking a look,
Chris
SwampRabbit Labs

Announcing the Personica Founding Developer Program

As Personica AI moves toward its v1.0 roadmap, I’m opening up a limited Founding Developer program for a small group of Unreal developers who want to help shape the system during this early stage.

What this is

Selected developers will receive:

  • Free access to the current Base version of Personica

  • A guaranteed free upgrade to Personica Pro when it launches

  • Permanent “Founding Developer” status (recognition + future perks)

This is not a paid program, and there’s no obligation to ship a game using Personica. The goal is collaboration, validation, and real-world feedback.

What I’m looking for

This program is ideal if you:

  • Are actively prototyping or building an Unreal project

  • Want to experiment with AI-driven NPC behavior (dialog, memory, utility actions)

  • Are willing to provide honest feedback, bug reports, or suggestions

  • (Optional but very welcome) Want to create a small demo, video, or write-up showing how you’re using Personica

You do not need a large audience, studio backing, or marketing reach; solo devs and small teams are very welcome!

Why I’m doing this

Personica is designed to be a production-oriented system, not a novelty plugin. I want to get there by working closely with real developers and real use cases before locking in Pro features and long-term pricing.

Founding Developers will directly influence:

  • Pro-level features

  • Workflow and UX decisions

  • Documentation and examples

  • Multiplayer and performance best practices

How to apply

If you’re interested, submit this Google Form or message me with:

  • A brief description of what you’re working on

  • How you’d like to use or test Personica

  • (Optional) Links to past work, prototypes, or demos

I’ll be selecting a small, focused group to keep support manageable and feedback meaningful.

Thanks again for the interest and support; I’m looking forward to seeing what people build with this system.

— Chris
SwampRabbit Labs


v0.9.1 Now Released!

Thank you all for your continued interest and discussion about how to use Personica AI! The first update is now live and includes a model and server for local LLMs, so Personica is truly plug and play.

Changelog:

  • Added prepackaged local model (gemma-3-4b-it.Q4_K_M.gguf)
    • Note: If packaging with this model, you are REQUIRED to include the provided Notice.txt file alongside your packaged product.
  • Added prepackaged versions of llama.cpp (the local server required for local models):
    • 3 Windows versions:
      • CUDA: NVIDIA-specific high performance
      • Vulkan: maximum GPU compatibility (NVIDIA & AMD)
      • CPU: slow or background inference
    • 1 macOS server
    • 1 Linux server (Vulkan)
  • Added automatic detection of nvcuda.dll, with fallback from CUDA to Vulkan if the DLL is not present (e.g., the GPU is AMD rather than NVIDIA).
  • Moved server configuration to Project Settings instead of the LocalLLMConfig data asset.
  • Added an optional Custom Server Executable Path in Project Settings that overrides the global defaults if you want to use a specific server version. Leave blank otherwise!
  • Added support for global cloud LLM settings (global API keys), configurable in Project Settings.
  • Adjusted the layout and labeling of existing Project Settings sections.
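The CUDA-to-Vulkan fallback described above could look roughly like this. This is a simplified, hypothetical sketch: on Windows it probes for nvcuda.dll via LoadLibrary, and on other platforms it simply skips CUDA; Personica's actual implementation may check more than DLL presence:

```cpp
#include <string>
#ifdef _WIN32
#include <windows.h>
#endif

// Returns true if the NVIDIA CUDA driver DLL can be loaded on this machine.
bool CudaDriverPresent() {
#ifdef _WIN32
    HMODULE h = LoadLibraryA("nvcuda.dll");
    if (h) { FreeLibrary(h); return true; }
    return false;
#else
    // nvcuda.dll is Windows-only; non-Windows builds take the fallback path.
    return false;
#endif
}

// Pick a llama.cpp server backend: prefer CUDA, fall back to Vulkan.
std::string ChooseServerBackend() {
    return CudaDriverPresent() ? "CUDA" : "Vulkan";
}
```

Checking for the driver DLL (rather than catching a failed server launch) lets the fallback happen before any process is spawned.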

EDIT: To the applicant ‘Harwood31’ who applied for the Founding Developer program: You accidentally left the contact info field blank! Please DM me or re-submit so I can get the SDK over to you.