Introducing Personica AI: A Cognitive NPC Brain for Unreal Engine


Product Website | Documentation | Community Discord

Hi everyone! I’m Chris from SwampRabbit Labs, and I just released Personica AI on Fab.

Personica is a C++ genAI Brain component for Unreal Engine focused on one goal:
using LLM language processing to analyze and update game worlds, not just generate dialog.

This project came out of my own frustration experimenting with generative AI in games, and from noticing that so many products focus on generating assets or characters rather than making the work of creatives and designers easier. Most solutions I tried were great at talking, but fell apart when it came to:

  • Updating character behavior beyond dialog

  • Turning what characters think into actions that characters do

  • Interpreting new events and information in the game world

  • Multiplayer safety and determinism

So I built Personica as a hybrid system:

  • LLM responses work alongside your traditional game design. Use Personica as an all-in-one system, or modularly with your preferred AI kits.

  • Generative AI’s inherent language processing lets NPCs reason, think, and act.

  • A ranked memory system decides what NPCs remember (and forget)

  • Built-in LODs and request gating keep things performant and predictable
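To make the ranked memory bullet concrete, here is a minimal, engine-free C++ sketch of score-based remember/forget. The names (`Memory`, `MemoryScore`, `PruneMemories`) and the decay formula are illustrative assumptions, not Personica's actual implementation:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical sketch of a ranked memory store: each memory gets a score,
// and only the top-ranked entries survive a prune. Weights are illustrative.
struct Memory {
    std::string text;
    float importance;   // designer- or LLM-assigned, e.g. 0..1
    int    age;         // turns since the memory was formed
};

// Older memories decay; important ones persist.
float MemoryScore(const Memory& m) {
    return m.importance - 0.01f * static_cast<float>(m.age);
}

// Keep the `capacity` highest-scoring memories, forget the rest.
std::vector<Memory> PruneMemories(std::vector<Memory> memories, size_t capacity) {
    std::sort(memories.begin(), memories.end(),
              [](const Memory& a, const Memory& b) {
                  return MemoryScore(a) > MemoryScore(b);
              });
    if (memories.size() > capacity) {
        memories.resize(capacity);
    }
    return memories;
}
```

The point of ranking rather than a FIFO log is that a pivotal event ("the player betrayed me") survives long after small talk has been forgotten.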


What Personica Is & Isn’t

Personica is:

  • A gameplay-first AI architecture that keeps the LLM in the background for seamless, dynamic gameplay, instead of showing off new AI gimmicks.

  • A “referee” for games: an LLM suggests updates, but the Personica system lets game developers control what actually gets through to the game world.

  • Designed for RPGs, sims, immersive worlds, and systemic NPCs.

  • Built to work with designers, writers, and existing AI systems.

Personica is not:

  • A replacement for writers or voice actors.

  • A “ChatGPT wrapper,” pure character generator, or text-only plugin.

  • A black box that takes control away from your game logic.

The AI doesn’t “run the game” or overwrite your carefully crafted storylines.
Instead, it interprets, remembers, suggests, and triggers explicit gameplay actions you define.

For more information, check out the Documentation link above.

Use Case Examples

  • Guard NPC behavior evolves over time
    • A town guard becomes more suspicious and aggressive toward the player after repeated nighttime trespassing, even if the dialogue stays polite.
  • Faction trust changes without dialog
    • Helping a rival faction silently lowers an NPC merchant’s prices for allies and raises them for enemies, without the NPC ever explaining why.
  • Quest outcomes alter NPC personality
    • Sparing an enemy causes them to become fearful and evasive later, while killing their ally makes them hostile on sight.
  • International relations analysis in Grand Strategies
    • After repeated border skirmishes and broken treaties, Personica mutates the diplomatic “Trust” state between two nations, causing future negotiations to start hostile even if the player offers generous terms.
  • Game systems that learn from previous events
    • The game world’s governing council “remembers” which policies historically stabilized the economy and begins favoring similar decisions autonomously.
  • NPC memory affects future gameplay options
    • An NPC refuses to help open a locked gate because they remember you previously betraying them during a side quest.
  • Dynamic quest gating without branching trees
    • A quest becomes unavailable because the NPC’s trust never reached the required threshold, not because the player chose a “wrong” option.
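The last two examples boil down to gating content on a mutated trust value rather than a dialogue branch. A rough sketch, where `TrustLedger` and its thresholds are hypothetical and not part of the plugin's API:

```cpp
#include <map>
#include <string>

// Illustrative sketch of dynamic quest gating: availability depends on a
// trust value mutated over the course of play, not on a branching tree.
struct TrustLedger {
    std::map<std::string, float> trust; // NPC name -> trust in the player

    // Events (betrayals, favors, skirmishes) nudge trust up or down.
    void Adjust(const std::string& npc, float delta) { trust[npc] += delta; }

    // A quest simply never unlocks if trust never reached the threshold.
    bool QuestAvailable(const std::string& npc, float requiredTrust) const {
        auto it = trust.find(npc);
        return it != trust.end() && it->second >= requiredTrust;
    }
};
```

In Personica's model, the LLM proposes the trust adjustments by interpreting events, while the gating check itself stays ordinary, deterministic game logic.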

“But I can do all that already without Personica!”

Yes, and you should continue to! Personica is designed to work alongside hand-scripted game design, not replace it completely. The plugin can take over the tedious algorithm and branching construction required by traditional game design. Focus your time and energy on building the main storylines, tense action points, and key systems, and spend less time tweaking algorithms and trigger rules for minor functionality that a player may never see.

Building a complex game world requires both wide and deep design. Personica takes over the “width” of game design so you can focus on the “depth.”

An example of a prompt sent to an LLM mid-conversation, with trait updates, conversation history, and the memory system. Profiles and prompts can be customized to include more or less information.

Current State

  • v0.9.2 is live on Fab

  • A working demo showing dialog + utility actions using a local LLM is available for download

  • Designed to be extensible and modular (use only what parts of Personica you need)

My goal right now is real-world feedback from Unreal developers to ensure that I am building LLM tools that are useful, practical, and scalable.

I’m very open to feedback, positive or critical, and happy to answer technical questions about how the system works!

Thanks for taking a look,
Chris
SwampRabbit Labs

Announcing the Personica Founding Developer Program

As Personica AI moves toward its v1.0 roadmap, I’m opening up a limited Founding Developer program for a small group of Unreal developers who want to help shape the system during this early stage.

What this is

Selected developers will receive:

  • Free access to the current Base version of Personica

  • A guaranteed free upgrade to Personica Pro when it launches

  • Permanent “Founding Developer” status (recognition + future perks)

This is not a paid program, and there’s no obligation to ship a game using Personica. The goal is collaboration, validation, and real-world feedback.

What I’m looking for

This program is ideal if you:

  • Are actively prototyping or building an Unreal project

  • Want to experiment with AI-driven NPC behavior (dialog, memory, utility actions)

  • Are willing to provide honest feedback, bug reports, or suggestions

  • (Optional but very welcome) Want to create a small demo, video, or write-up showing how you’re using Personica

You do not need a large audience, studio backing, or marketing reach; solo devs and small teams are very welcome!

Why I’m doing this

Personica is designed to be a production-oriented system, not a novelty plugin. I want to get there by working closely with real developers and real use cases before locking in Pro features and long-term pricing.

Founding Developers will directly influence:

  • Pro-level features

  • Workflow and UX decisions

  • Documentation and examples

  • Multiplayer and performance best practices

How to apply

If you’re interested, submit this Google Form or message me with:

  • A brief description of what you’re working on

  • How you’d like to use or test Personica

  • (Optional) Links to past work, prototypes, or demos

I’ll be selecting a small, focused group to keep support manageable and feedback meaningful.

Thanks again for the interest and support; I’m looking forward to seeing what people build with this system.

— Chris
SwampRabbit Labs

v0.9.1 Now Released!

Thank you all for your continued interest and discussion about how to use Personica AI! The first update is now released and includes a model and server for local LLMs, so you can truly plug and play with Personica.

Changelog:

  • Summary

    • Added prepackaged local model (gemma-3-4b-it.Q4_K_M.gguf)

      • Note: If packaging with this model, you are REQUIRED to include the provided Notice.txt file alongside your packaged product.

    • Added prepackaged versions of llama.cpp (the local server required for local models):

      • 3 Windows versions:
        • CUDA: NVIDIA-specific high performance
        • Vulkan: Maximum GPU compatibility (NVIDIA & AMD)
        • CPU: Slow or background inference
      • 1 macOS server
      • 1 Linux server (Vulkan)

    • Added automatic detection of nvcuda.dll, with fallback from CUDA to Vulkan if the DLL is not present (i.e., the card is AMD rather than NVIDIA).

    • Moved server configuration to Project Settings instead of the LocalLLMConfig data asset.

    • Added an optional Custom Server Executable Path in Project Settings, which overrides the global defaults if you want to use a specific server version. Leave blank otherwise!

    • Added support for global cloud LLM settings (global API keys), which can be set in Project Settings.

    • Adjusted the layout and labeling of existing Project Settings sections.
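The CUDA-to-Vulkan fallback above amounts to a small selection step before launching the server. A sketch under stated assumptions: on Windows the real probe would look for nvcuda.dll (e.g. via `LoadLibraryA`); here the probe is injected as a callback so the selection logic itself is testable anywhere, and the backend names are just strings:

```cpp
#include <functional>
#include <string>

// Illustrative backend selection, not Personica's actual code. The caller
// supplies a probe that reports whether nvcuda.dll is loadable.
std::string PickServerBackend(const std::string& requested,
                              const std::function<bool()>& hasNvCudaDll) {
    if (requested == "CUDA" && !hasNvCudaDll()) {
        // nvcuda.dll missing: likely an AMD card, fall back to Vulkan.
        return "Vulkan";
    }
    return requested;
}
```

Injecting the probe also makes it easy to simulate an AMD machine in tests without touching the DLL loader.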

*EDIT: To the applicant ‘Harwood31’ who applied for the Founding Developer program: You accidentally left the contact info field blank! Please DM me or re-submit so I can get the SDK over to you.

v0.9.2 Now Available!

v0.9.2 of Personica AI is now available for update or purchase, featuring additions and updates to the Volition Engine, Prompt Template, and visual Debugger.

  • Changelog
    • Implemented the second part of the Volition Engine, introducing an autonomy system.
      • Personica Brain Component → Details → Autonomy Template
      • Provide a Prompt Template to be sent to the LLM to determine ongoing actions.
        • Use {GOAL} to inject the Current Auto Goal (see below).
        • Bind to OnGetWorldContext to write to {SITUATION}.
        • Add any relevant dynamic tags (see below)
      • Max Action Retries: the number of times that the brain will try to execute a failed action.
      • Enable Auto Loop: If true, the Personica Brain Component automatically calls the LLM to generate a new response once their Action Plan is exhausted.
      • Current Auto Goal: The ongoing goal for the brain to strive for.
      • On Utility Action EventGraphs: add an Advance Plan action to the Blueprint of a Utility Action to enable the action for use in the Autonomy system.
        • Last Action Successful: Use to differentiate between success and fail paths in a Utility Action’s Blueprint.
        • Allow Retry: If checked, the brain will automatically retry the action. References Max Action Retries (see above).
    • Action Plan System
      • Now, instead of only generating a single Utility Action to perform, the LLM can generate a list of up to 10 utility actions to queue and execute in order.
      • In Personica Brain Component → Details → Utility:
      • Brain Mode:
        • Hybrid: LLM makes utility action decisions, but utility score can override if above the Reflex Threshold (see below)
        • Utility Only: Character uses traditional utility AI scoring to determine actions.
        • Passive: Character is passive unless Request Decision or Execute Action are specifically called.
      • Reflex Threshold: The utility score value above which a utility action will interrupt the LLM’s Action Plan and execute.
      • Abort Plan on Failure: When checked, if the current Utility Action fails, the Action Plan is cleared and the character waits for another LLM response to determine a new plan. If unchecked, the failed Utility Action will be skipped and the next Utility Action in the plan will be executed instead.
      • Planning Step Count: The maximum number of Utility Actions in the character’s Action Plan. The absolute max for this is 10. Set to a lower value to reduce response time.
      • Queue Refill Threshold: If the number of remaining Utility Actions in the Action Plan equals this, the Personica Brain Component marks itself as ready for another LLM ping. If set to -1, the Brain Component will not automatically refill (Request Decision has to be explicitly called).
    • Action Set System
      • Assign a tag (Like Actions.Combat) to a Utility Action data asset (Details → Filtering)
      • In a brain component, assign one or more Active Action Sets (Personica Brain Component → Utility)
      • Now, the {AVAILABLE_ACTIONS} array will only list utility actions with the specified Action Set tag(s).
      • This means the LLM will only choose from actions that you make possible (no chance of a character sleeping while on fire due to a bad hallucination).
      • Example: Assign Actions.Combat to a character when entering combat, and if their health score drops below a certain threshold, add Actions.LowHealth to this list. Remove LowHealth if their health score increases.
      • If no Action Sets are implemented on a Personica Brain Component, all Utility Actions assigned to the character will be presented to the LLM as available actions.
    • Sorting actions in LLM prompt by utility score
      • Now, when creating the {AVAILABLE_ACTIONS} array, Personica automatically sorts the actions by utility score (highest to lowest). The LLM is more likely to choose actions high on a list, so it will be more likely to select actions with high utility scores, while not being locked into any one option.
    • Spatial sum consideration for Utility Actions (Useful for AoE utility decisions)
      • Use to modify the utility score of a Utility Action based on the congestion in a given area.
      • Examples: Use to boost the value of throwing a health potion at a group of allies. Or, invert to have a stealthy rogue avoid a crowded room when sneaking.
    • Dynamic tag injection: use custom JSON tags in prompts and bind to the event “On Resolve Context Tag” to inject dynamic information into your prompts.
      • Brain Component → Class Settings → Implemented Interfaces → Add “Personica Context Interface”
      • In the Interfaces panel on the left, open “GetContextForTag”
      • Drag out from the “Tag” pin and create a Switch on String
      • Add pin(s) containing dynamic tags (i.e., SHOP_STATUS, FACTION_RELATION, etc.)
      • Connect each tag to the value that should replace it in the LLM prompt.
      • Example: a Faction Relation score is lowered when the player is caught stealing from that faction, which in turn updates the {FACTION_RELATION} tag sent to the faction leader’s Personica Brain Component.
    • The provided key presets on the Prompt Template (for reasoning, memory, dialog, and utility) can be deleted/left blank to remove that key from being sent to the LLM. For instance, remove the memory and dialog keys for a character that just needs to decide actions. Reduces latency (LLM spends less time generating information you do not need).
    • Allowed editing the Prompt Template JSON Format Definition field, previously read-only. Use this to add dynamic tags or modify the help text for the key presets.
    • Debugger update: overhauled UI, improved text wrapping, access to both prompts AND responses for each Personica Brain.
      • Debugger is constantly being re-evaluated to provide useful Personica-related information. Please reach out directly if you have certain data that you would really like to see when debugging!
    • Added a helper to force llama server executable permissions on Mac and Linux.
    • Updated provided llama.cpp servers to their latest release (b7941).
      • Note: There appears to be a known performance issue with Qwen 3 models in recent releases. We will patch the server files as updates come in to fix this.
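To make the Action Plan settings above concrete, here is a minimal, engine-free C++ sketch of a plan queue with a reflex interrupt and a refill threshold. All names are illustrative assumptions; the real component works on Utility Action data assets and events, not strings:

```cpp
#include <deque>
#include <string>
#include <vector>

// Illustrative sketch (not Personica's actual code) of an Action Plan queue:
// the LLM fills up to planningStepCount actions, a high-utility "reflex"
// action can interrupt the plan, and a refill threshold marks when a new
// LLM request is needed.
struct ActionPlan {
    std::deque<std::string> queue;
    int planningStepCount    = 10;  // absolute max per the changelog
    int queueRefillThreshold = 1;   // -1 disables automatic refill
    float reflexThreshold    = 0.8f;

    void Fill(const std::vector<std::string>& llmActions) {
        for (const auto& a : llmActions) {
            if (static_cast<int>(queue.size()) >= planningStepCount) break;
            queue.push_back(a);
        }
    }

    // A reflex action above the threshold pre-empts the queued plan.
    std::string NextAction(const std::string& reflexAction, float reflexScore) {
        if (reflexScore > reflexThreshold) return reflexAction;
        if (queue.empty()) return "";
        std::string a = queue.front();
        queue.pop_front();
        return a;
    }

    bool NeedsRefill() const {
        return queueRefillThreshold >= 0 &&
               static_cast<int>(queue.size()) <= queueRefillThreshold;
    }
};
```

Setting the refill threshold above zero lets the next LLM request overlap with the tail of the current plan, hiding inference latency behind actions already in flight.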

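Similarly, the Action Set filtering and utility-score sorting described in the changelog reduce to a filter-then-sort pass over the character's actions before the {AVAILABLE_ACTIONS} list is built. The types below are hypothetical stand-ins for the plugin's data assets:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Sketch of Action Set filtering plus utility-score sorting: only actions
// whose tag is in the active sets reach {AVAILABLE_ACTIONS}, and survivors
// are ordered highest score first (LLMs favor items near the top of a list).
struct UtilityAction {
    std::string name;
    std::string actionSetTag; // e.g. "Actions.Combat"
    float utilityScore;
};

std::vector<UtilityAction> BuildAvailableActions(
        std::vector<UtilityAction> all,
        const std::vector<std::string>& activeSets) {
    // Per the changelog: no active sets means every assigned action is offered.
    if (!activeSets.empty()) {
        all.erase(std::remove_if(all.begin(), all.end(),
                      [&](const UtilityAction& a) {
                          return std::find(activeSets.begin(), activeSets.end(),
                                           a.actionSetTag) == activeSets.end();
                      }),
                  all.end());
    }
    std::sort(all.begin(), all.end(),
              [](const UtilityAction& a, const UtilityAction& b) {
                  return a.utilityScore > b.utilityScore;
              });
    return all;
}
```

Filtering before the prompt is built is what prevents the "sleeping while on fire" hallucination: the LLM never sees an action that the current game state rules out.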
What To Change/Not Change

  • Existing utility actions should work with this update.
  • The previous method using ProcessPlaceholders to insert dynamic tags into prompts can still work, but using the Personica Context Interface is highly recommended and will be supported more in the future.
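For readers migrating from ProcessPlaceholders, the Context Interface flow amounts to resolving each tag to a value and substituting it into the prompt. A rough, non-Unreal sketch, using the tag names from the post's own examples; the map stands in for the Blueprint "Switch on String" in GetContextForTag:

```cpp
#include <map>
#include <string>

// Illustrative tag resolution: each dynamic tag maps to a current game value.
std::string ResolveContextTag(const std::string& tag) {
    static const std::map<std::string, std::string> context = {
        {"SHOP_STATUS",      "open"},
        {"FACTION_RELATION", "hostile"},
    };
    auto it = context.find(tag);
    return it != context.end() ? it->second : "";
}

// Replace every {TAG} occurrence in a prompt with its resolved value.
std::string InjectTags(std::string prompt) {
    size_t open;
    while ((open = prompt.find('{')) != std::string::npos) {
        size_t close = prompt.find('}', open);
        if (close == std::string::npos) break;
        prompt.replace(open, close - open + 1,
                       ResolveContextTag(prompt.substr(open + 1, close - open - 1)));
    }
    return prompt;
}
```

The interface approach keeps tag resolution with the actor that owns the data, instead of pre-processing the whole prompt string up front.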

Renamings:

  • “Prompt Template” on the Brain Component was renamed to Dialog Prompt Template, in order to differentiate from the Autonomy Template, which uses the same Data Asset Type. Dialog Prompt Templates are called with RequestLLMThought, while Autonomy Templates are called with RequestDecision.
  • “Live Dialog Updates” boolean on the Brain Component was renamed to “Execute Utility and Dialog Quickly” to better reflect its function. If this flag is checked, dialog or utility action keys (like {dialog_line}, or whatever you specified in your prompt template) will be passed through any configured safety filters as soon as they are received and applied to the game world. If unchecked, the dialog and utility keys wait until the full response is received to be passed into the game world.


Announcing the Pioneer Sale: Get Personica AI for 50% Off Through March 1st!

Interested in exploring Personica AI? For a limited time, you can purchase Personica AI for 50% off on Fab! Get a dedicated LLM integration framework, powerful configuration options, and a hybrid genAI/traditional-weighted utility AI system for half its usual price, and for a quarter of the cost of other leading utility AI systems.

Help Shape Meaningful LLM Integration in Gaming

By joining the Personica community at this early stage, users can help shape the direction of the plugin and ensure that Personica fulfills its goal of working across genres, formats, and operating systems. We consider and evaluate every point of feedback we receive!

Individualized Support & Service

Becoming a Personica user is different from other AI applications you may be familiar with; enjoy personalized support & service instead of megalithic corporate processes. We strive to answer any and all support questions, using human customer service and human knowledge. Our products may be AI-based, but our support services are not!

Save Time and Bandwidth

Save weeks of development time setting up your own custom LLM system, and gain unlimited access to an existing development kit that is being tested and improved by over two dozen users, for just $40 USD. Built with organization & ease of use in mind, Personica comes with a robust and growing tooltip and documentation system that brings users up to speed quickly on the specific configuration options that Personica provides. Once up to speed, use the time previously spent building tedious dialog branches or behavior trees to instead craft deeper storylines, fuller mechanics, and a higher-quality game world overall.

Interested in Becoming a Pioneer?

Purchase Personica AI on Fab through March 1st, 2026 to join our growing product community!