Introducing Personica AI: A Cognitive NPC Brain for Unreal Engine

Personica_LogoWide

Product Website | Documentation | Community Discord

Hi everyone! I’m Chris from SwampRabbit Labs, and I just released Personica AI on Fab.

Personica is a C++ genAI Brain component for Unreal Engine focused on one goal:
using LLM language processing to analyze and update game worlds, not just generate dialog.

This project came out of my own frustration experimenting with generative AI in games, and from noticing that so many products focus on generating assets or characters rather than on making the work of creatives and designers easier. Most solutions I tried were great at talking, but fell apart when it came to:

  • Updating character behavior beyond dialog

  • Turning what characters think into actions that characters do

  • Interpreting new events and information in the game world

  • Multiplayer safety and determinism

So I built Personica as a hybrid system:

  • LLM responses work alongside your traditional game design. Use Personica as an all-in-one system, or modularly with your preferred AI kits.

  • Generative AI’s inherent language processing is used to reason, think, and act.

  • A ranked memory system decides what NPCs remember (and forget).

  • Built-in LODs and request gating keep things performant and predictable.

BtbScreenshot2

What Personica Is & Isn’t

Personica is:

  • A gameplay-first AI architecture that focuses on keeping the LLM in the background for seamless, dynamic gaming, instead of trying to show off new AI gimmicks.

  • A “referee” for games: an LLM suggests updates, but the Personica system lets game developers control what actually gets through to the game world.

  • Designed for RPGs, sims, immersive worlds, and systemic NPCs.

  • Built to work with designers, writers, and existing AI systems.

Personica is not:

  • A replacement for writers or voice actors.

  • A “ChatGPT wrapper,” pure character generator, or text-only plugin.

  • A black box that takes control away from your game logic.

The AI doesn’t “run the game” or overwrite your carefully-crafted storylines.
Instead, it interprets, remembers, suggests, and triggers explicit gameplay actions you define.

For more information, check out the Documentation link above.

Use Case Examples

  • Guard NPC behavior evolves over time
    • A town guard becomes more suspicious and aggressive toward the player after repeated nighttime trespassing, even if the dialogue stays polite.
  • Faction trust changes without dialog
    • Helping a rival faction silently lowers an NPC merchant’s prices for allies and raises them for enemies, without the NPC ever explaining why.
  • Quest outcomes alter NPC personality
    • Sparing an enemy causes them to become fearful and evasive later, while killing their ally makes them hostile on sight.
  • International relations analysis in grand strategy games
    • After repeated border skirmishes and broken treaties, Personica mutates the diplomatic “Trust” state between two nations, causing future negotiations to start hostile even if the player offers generous terms.
  • Game systems that learn from previous events
    • The game world’s governing council “remembers” which policies historically stabilized the economy and begins favoring similar decisions autonomously.
  • NPC memory affects future gameplay options
    • An NPC refuses to help open a locked gate because they remember you previously betraying them during a side quest.
  • Dynamic quest gating without branching trees
    • A quest becomes unavailable because the NPC’s trust never reached the required threshold, not because the player chose a “wrong” option.

“But I can do all that already without Personica!”

Yes, and you should continue to! Personica is designed to work alongside hand-scripted game design, not replace it completely. The plugin can take over the tedious algorithm and branch construction required in traditional game design. Focus your time and energy on building the main storylines, tense action points, and key systems, and spend less time tweaking algorithms and trigger rules for minor functionality that a player may never see.

Building a complex game world requires both wide and deep design. Personica takes over the “width” of game design so you can focus on the “depth.”

An example of a prompt sent to an LLM mid-conversation, showing trait updates, conversation history, and the memory system. Profiles and prompts can be customized to include more or less information.

Current State

  • v0.9.3 is live on Fab

  • Free demos showing dialog + utility actions using a local LLM are available for download

  • Designed to be extensible and modular (use only what parts of Personica you need)

My goal right now is real-world feedback from Unreal developers to ensure that I am building LLM tools that are useful, practical, and scalable.

I’m very open to feedback, positive or critical, and happy to answer technical questions about how the system works!

Thanks for taking a look,
Chris
SwampRabbit Labs

Announcing the Personica Founding Developer Program

As Personica AI moves toward its v1.0 roadmap, I’m opening up a limited Founding Developer program for a small group of Unreal developers who want to help shape the system during this early stage.

What this is

Selected developers will receive:

  • Free access to the current Base version of Personica

  • A guaranteed free upgrade to Personica Pro when it launches

  • Permanent “Founding Developer” status (recognition + future perks)

This is not a paid program, and there’s no obligation to ship a game using Personica. The goal is collaboration, validation, and real-world feedback.

What I’m looking for

This program is ideal if you:

  • Are actively prototyping or building an Unreal project

  • Want to experiment with AI-driven NPC behavior (dialog, memory, utility actions)

  • Are willing to provide honest feedback, bug reports, or suggestions

  • (Optional but very welcome) Want to create a small demo, video, or write-up showing how you’re using Personica

You do not need a large audience, studio backing, or marketing reach; solo devs and small teams are very welcome!

Why I’m doing this

Personica is designed to be a production-oriented system, not a novelty plugin. I want to get there by working closely with real developers and real use cases before locking in Pro features and long-term pricing.

Founding Developers will directly influence:

  • Pro-level features

  • Workflow and UX decisions

  • Documentation and examples

  • Multiplayer and performance best practices

How to apply

If you’re interested, submit this Google Form or message me with:

  • A brief description of what you’re working on

  • How you’d like to use or test Personica

  • (Optional) Links to past work, prototypes, or demos

I’ll be selecting a small, focused group to keep support manageable and feedback meaningful.

Thanks again for the interest and support; I’m looking forward to seeing what people build with this system.

— Chris
SwampRabbit Labs

v0.9.1 Now Released!

Thank you all for your continued interest and your discussions about how to use Personica AI! The first update is now released and includes a model and server for local LLMs, so you can truly plug and play with Personica.

Changelog:

  • Added a prepackaged local model (gemma-3-4b-it.Q4_K_M.gguf)
    • Note: If packaging with this model, you are REQUIRED to include the provided Notice.txt file alongside your packaged product.
  • Added prepackaged versions of llama.cpp (the local server required for local models):
    • 3 Windows versions:
      • CUDA: NVIDIA-specific high performance
      • Vulkan: maximum GPU compatibility (NVIDIA & AMD)
      • CPU: slow or background inference
    • 1 macOS server
    • 1 Linux server (Vulkan)
  • Added automatic detection of nvcuda.dll, with fallback from CUDA to Vulkan if this DLL is not present (i.e., the card is AMD, not NVIDIA).
  • Moved server configuration to Project Settings instead of the LocalLLMConfig data asset.
  • Added an optional Custom Server Executable Path in Project Settings, which overrides the global defaults if you want to use a specific server version. Leave blank otherwise!
  • Added support for global cloud LLM settings (global API keys), which can be set in Project Settings.
  • Adjusted the layout and labeling of existing Project Settings sections.

*EDIT: To the applicant ‘Harwood31’ who applied for the Founding Developer program: You accidentally left the contact info field blank! Please DM me or re-submit so I can get the SDK over to you.

v0.9.2 Now Available!

v0.9.2 of Personica AI is now available for update or purchase, featuring additions and updates to the Volition Engine, Prompt Template, and visual Debugger.

  • Changelog
    • Implemented the second part of the Volition Engine, introducing an autonomy system.
      • Personica Brain Component → Details → Autonomy Template
      • Provide a Prompt Template to be sent to the LLM to determine ongoing actions.
        • Use {GOAL} to inject the Current Auto Goal (see below).
        • Bind to OnGetWorldContext to write to {SITUATION}
        • Add any relevant dynamic tags (see below)
      • Max Action Retries: the number of times that the brain will try to execute a failed action.
      • Enable Auto Loop: If true, the Personica Brain Component automatically calls the LLM to generate a new response once their Action Plan is exhausted.
      • Current Auto Goal: The ongoing goal for the brain to strive for.
      • On Utility Action EventGraphs: add an Advance Plan action to the Blueprint of a Utility Action to enable the action for use in the Autonomy system.
        • Last Action Successful: Use to differentiate between success and fail paths in a Utility Action’s Blueprint.
        • Allow Retry: If checked, the brain will automatically retry the action. References Max Action Retries (see above).
    • Action Plan System
      • Now, instead of only generating a single Utility Action to perform, the LLM can generate a list of up to 10 utility actions to queue and execute in order.
      • In Personica Brain Component → Details → Utility:
      • Brain Mode:
        • Hybrid: LLM makes utility action decisions, but utility score can override if above the Reflex Threshold (see below)
        • Utility Only: Character uses traditional utility AI scoring to determine actions.
        • Passive: Character is passive unless Request Decision or Execute Action are specifically called.
      • Reflex Threshold: The utility score value above which a utility action will interrupt the LLM’s Action Plan and execute.
      • Abort Plan on Failure: When checked, if the current Utility Action fails, the Action Plan is cleared and the character waits for another LLM response to determine a new plan. If unchecked, the failed Utility Action will be skipped and the next Utility Action in the plan will be executed instead.
      • Planning Step Count: The maximum number of Utility Actions in the character’s Action Plan. The absolute max for this is 10. Set to a lower value to reduce response time.
      • Queue Refill Threshold: If the number of remaining Utility Actions in the Action Plan equals this, the Personica Brain Component marks itself as ready for another LLM ping. If set to -1, the Brain Component will not automatically refill (Request Decision has to be explicitly called).
    • Action Set System
      • Assign a tag (Like Actions.Combat) to a Utility Action data asset (Details → Filtering)
      • In a brain component, assign one or more Active Action Sets (Personica Brain Component → Utility)
      • Now, the {AVAILABLE_ACTIONS} array will only list utility actions with the specified Action Set tag(s), meaning the LLM will only choose from actions that you make possible (no chance of a character sleeping while on fire due to a bad hallucination).
      • Example: Assign Actions.Combat to a character when entering combat, and if their health score drops below a certain threshold, add Actions.LowHealth to this list. Remove LowHealth if their health score increases.
      • If no Action Sets are implemented on a Personica Brain Component, all Utility Actions assigned to the character will be presented to the LLM as available actions.
    • Sorting actions in LLM prompt by utility score
      • Now, when creating the {AVAILABLE_ACTIONS} array, Personica automatically sorts the actions by utility score (highest to lowest). The LLM is more likely to choose actions high on a list, so it will be more likely to select actions with high utility scores, while not being locked into any one option.
    • Spatial sum consideration for Utility Actions (Useful for AoE utility decisions)
      • Use to modify the utility score of a Utility Action based on the congestion in a given area.
      • Examples: Use to boost the value of throwing a health potion at a group of allies. Or, invert to have a stealthy rogue avoid a crowded room when sneaking.
    • Dynamic tag injection: use custom JSON tags in prompts and bind to the event “On Resolve Context Tag” to inject dynamic information into your prompts.
      • Brain Component → Class Settings → Implemented Interfaces → Add “Personica Context Interface”
      • In the Interfaces panel on the left, open “GetContextForTag”
      • Drag out from the “Tag” pin and create a Switch on String
      • Add pin(s) containing dynamic tags (i.e., SHOP_STATUS, FACTION_RELATION, etc.)
      • Connect each tag to the value that should replace it in the LLM prompt.
      • Example: a Faction Relation score is lowered when the player is caught stealing from that faction, which in turn updates the {FACTION_RELATION} tag sent to the faction leader’s Personica Brain Component.
    • The provided key presets on the Prompt Template (for reasoning, memory, dialog, and utility) can be deleted/left blank to remove that key from being sent to the LLM. For instance, remove the memory and dialog keys for a character that just needs to decide actions. Reduces latency (LLM spends less time generating information you do not need).
    • Allowed editing the Prompt Template JSON Format Definition field, previously read-only. Use this to add dynamic tags or modify the help text for the key presets.
    • Debugger update: overhauled UI, improved text wrapping, access to both prompts AND responses for each Personica Brain.
      • Debugger is constantly being re-evaluated to provide useful Personica-related information. Please reach out directly if you have certain data that you would really like to see when debugging!
    • Added a helper to force llama server executable permissions on Mac and Linux.
    • Updated provided llama.cpp servers to their latest release (b7941).
      • Note: There appears to be a known performance issue with Qwen 3 models in recent releases. We will patch the server files as updates come in to fix this.
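To make the Action Plan mechanics above concrete, here is a minimal plain-C++ sketch of the Queue Refill Threshold check. ActionPlan and NeedsRefill are illustrative names of mine, not the plugin's actual API:

```cpp
#include <deque>
#include <string>

// Conceptual sketch of the Action Plan refill check described above.
struct ActionPlan {
    std::deque<std::string> Queue;  // pending Utility Actions, executed in order
    int QueueRefillThreshold = 2;   // -1 disables automatic refill

    // True when the Brain should mark itself ready for another LLM ping.
    bool NeedsRefill() const {
        if (QueueRefillThreshold < 0)
            return false;  // Request Decision must be called explicitly
        return static_cast<int>(Queue.size()) == QueueRefillThreshold;
    }
};
```

In other words, the brain only pings the LLM for a new plan once the queue has drained to the threshold, and never does so automatically when the threshold is -1.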

What To Change/Not Change

  • Existing utility actions should work with this update.
  • The previous method using ProcessPlaceholders to insert dynamic tags into prompts can still work, but using the Personica Context Interface is highly recommended and will be supported more in the future.

Renamings:

  • “Prompt Template” on the Brain Component was renamed to Dialog Prompt Template, in order to differentiate from the Autonomy Template, which uses the same Data Asset Type. Dialog Prompt Templates are called with RequestLLMThought, while Autonomy Templates are called with RequestDecision.
  • “Live Dialog Updates” boolean on the Brain Component was renamed to “Execute Utility and Dialog Quickly” to better reflect its function. If this flag is checked, dialog or utility action keys (like {dialog_line}, or whatever you specified in your prompt template) will be passed through any configured safety filters as soon as they are received and applied to the game world. If unchecked, the dialog and utility keys wait until the full response is received to be passed into the game world.

Announcing the Pioneer Sale: Get Personica AI for 50% Off Through March 1st!

Interested in exploring Personica AI? For a limited time, you can purchase Personica AI for 50% off on Fab! Get a dedicated LLM integration framework, powerful configuration options, and a hybrid genAI/traditional-weighted utility AI system for half its usual price, and for a quarter of the cost of other leading utility AI systems.

Help Shape Meaningful LLM Integration in Gaming

By joining the Personica community at this early stage, users can help shape the direction of the plugin and ensure that Personica fulfills its goal of working across genres, formats, and operating systems. We consider and evaluate every point of feedback we receive!

Individualized Support & Service

Becoming a Personica user is different from other AI applications you may be familiar with; enjoy personalized support & service instead of megalithic corporate processes. We strive to answer any and all support questions, using human customer service and human knowledge. Our products may be AI-based, but our support services are not!

Save Time and Bandwidth

Save weeks of development time setting up your own custom LLM system, and gain unlimited access to an existing development kit that is being tested and improved by over two dozen users for just $40 USD. Built with organization & ease of use in mind, Personica comes with a robust and growing tooltip and documentation system that brings users up to speed quickly on the specific configuration options that Personica provides. Once up to speed, use the time previously spent building tedious dialog branches or behavior trees to instead craft deeper story lines, fuller mechanics, and a higher-quality game world overall.

Interested in Becoming a Pioneer?

Purchase Personica AI on Fab through March 1st, 2026 to join our growing product community!

Just Released: YummyBurger: LLM-Based NPC Autonomy Demo

We have just released a proof-of-concept demo of Personica’s ability to connect an LLM to NPC actions and game world changes.

Download for free on itch.io (Windows only, GPU recommended)

Without any player input, watch three Personica-controlled fast food workers maneuver between different stations, complete different tasks, and handle an increasing queue of customers.

This demo shows how Personica allows game developers to create characters beyond chatbots; characters can be given values relevant to them in their LLM prompt so they respond in real time to updated world conditions.

The characters’ actions are connected to Personica’s Utility Action data asset, which is essentially a vessel for EventGraph functions that create some change in the game world. While YummyBurger utilizes characters moving throughout a space and executing actions, this functionality can also be used on a non-character system, like updating UI or background conditions of a game world.

Interested in learning more?

Check out our documentation on the autonomy and looped-LLM-prompting capabilities of Personica AI here.

Version 0.9.3 is Released!

The latest version update includes agentic capabilities, improved semantic analysis, and improvements to background handling for multiple NPCs, among other improvements:

Intelligent Memory Retrieval (TF-IDF Scoring)

Memory retrieval has been completely overhauled. The Brain now uses TF-IDF (Term Frequency–Inverse Document Frequency) scoring to find the most contextually relevant memories for each conversation, replacing the previous keyword matching system.

What this means for your game: When a player mentions “the merchant who cheated me,” the Brain now correctly surfaces the memory about that specific event, even if the memory uses different words like “shopkeeper” or “swindled.”* Common words like “the” and “was” are automatically deprioritized, while rare, meaningful words carry more weight.

How it works: Every time the Brain retrieves memories for the LLM prompt, it builds a statistical vocabulary from the NPC’s full memory set and scores each memory against the current conversation using cosine similarity. This runs entirely on CPU in microseconds.

What changed: GetRankedMemories now blends two signals: 40% decay/importance (how recent and significant the memory is) and 60% TF-IDF relevance (how semantically related it is to the current conversation). The result is memories that are both timely and topically relevant.

Built-in stemming automatically handles word variants. “Running,” “ran,” and “runs” all match against each other. “Betrayed,” “betrayal,” and “betraying” all resolve to the same root. This works out of the box with no configuration required. English currently supported, with additional configuration in the future.

No changes needed to existing projects. The upgrade is fully backwards-compatible.

*When combined with a Concept Library
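For the technically curious, the scoring above can be pictured in miniature. This is a conceptual plain-C++ sketch of TF-IDF cosine ranking plus the 40/60 blend, not the plugin's actual implementation; function names are mine, and tokenization, stemming, and caching are omitted:

```cpp
#include <cmath>
#include <map>
#include <string>
#include <vector>

using Doc = std::vector<std::string>;  // one tokenized memory or query

// Weight each term: frequent in this doc, rare across the corpus.
std::map<std::string, double> TfIdfVector(const Doc& Words,
                                          const std::vector<Doc>& Corpus) {
    std::map<std::string, double> V;
    for (const auto& W : Words) V[W] += 1.0;  // term frequency
    for (auto& [W, Tf] : V) {
        int DocsWith = 0;
        for (const auto& D : Corpus)
            for (const auto& X : D)
                if (X == W) { ++DocsWith; break; }
        // Common words ("the", "was") get near-zero weight; rare words dominate.
        Tf *= std::log((1.0 + Corpus.size()) / (1.0 + DocsWith));
    }
    return V;
}

// Cosine similarity between two sparse term-weight vectors.
double Cosine(const std::map<std::string, double>& A,
              const std::map<std::string, double>& B) {
    double Dot = 0, Na = 0, Nb = 0;
    for (const auto& [W, X] : A) {
        Na += X * X;
        auto It = B.find(W);
        if (It != B.end()) Dot += X * It->second;
    }
    for (const auto& [W, X] : B) Nb += X * X;
    return (Na > 0 && Nb > 0) ? Dot / (std::sqrt(Na) * std::sqrt(Nb)) : 0.0;
}

// Final rank: 40% decay/importance, 60% semantic relevance.
double BlendedScore(double DecayImportance, double Relevance) {
    return 0.4 * DecayImportance + 0.6 * Relevance;
}
```

In practice, each memory would be ranked by its BlendedScore against the current conversation, so results are both timely and topically relevant.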


Concept Library (Synonym-Aware Matching)

A new optional data asset, UPersonicaConceptLibrary, lets you define synonym groups for your game’s vocabulary. This gives the memory system awareness of domain-specific relationships that pure word matching cannot capture.

Example: Map “angry,” “furious,” “enraged,” and “wrathful” to CONCEPT_ANGER. Map “merchant,” “shopkeeper,” “vendor,” and “trader” to CONCEPT_TRADE. Now a memory about an “angry merchant” matches a conversation about a “furious vendor” because both resolve to CONCEPT_ANGER + CONCEPT_TRADE.

How to use:

  1. Create a Data Asset of type PersonicaConceptLibrary in the Content Browser

  2. Add entries to the WordToConceptMap (word → concept tag)

  3. Assign the library to your Brain Component’s new Concept Library field

The system works without a concept library (pure TF-IDF + stemming). Adding one improves matching quality for your specific game’s vocabulary. We recommend starting with 50–100 entries covering your game’s key nouns, emotions, and roles.
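Under the hood, synonym awareness amounts to mapping words onto shared concept tags before matching. A minimal sketch, assuming a simple word-to-concept map (these names are illustrative, not the plugin's API):

```cpp
#include <map>
#include <set>
#include <string>
#include <vector>

// Resolve each word to its concept tag; unmapped words pass through as-is.
std::set<std::string> ToConcepts(const std::vector<std::string>& Words,
                                 const std::map<std::string, std::string>& Lib) {
    std::set<std::string> Out;
    for (const auto& W : Words) {
        auto It = Lib.find(W);
        Out.insert(It != Lib.end() ? It->second : W);
    }
    return Out;
}

// Two phrases "match" if they share at least one resolved concept.
bool SharesConcept(const std::set<std::string>& A, const std::set<std::string>& B) {
    for (const auto& C : A)
        if (B.count(C)) return true;
    return false;
}
```

With the anger/trade groups from the example above, “angry merchant” and “furious vendor” both resolve to {CONCEPT_ANGER, CONCEPT_TRADE} and therefore match.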


General-Purpose Text Scorer (UPersonicaTextScorer)

The TF-IDF engine powering memory retrieval is exposed as a standalone Blueprint-callable utility class. Use it anywhere you need to rank text by relevance.

Available methods:

  • BuildCorpus — Index an array of strings (memories, lore entries, quest descriptions, dialogue lines)

  • ScoreQuery — Rank all indexed strings against a search query, returns sorted results

  • ScorePair — Quick one-off comparison between two strings (no corpus needed)

Use cases beyond memory retrieval:

  • Data table selection: Score rows against NPC context to dynamically pick the most relevant scenario, quest, or dialogue

  • Lore injection: Index your lore database, then pull the most relevant entries into the LLM prompt based on the current conversation

  • Dialogue fallback for LOD-Far NPCs: Select the best pre-written line for NPCs not using the LLM, based on conversation context

  • Item relevance: Find inventory items related to the current topic so NPCs can reference them naturally

All scoring supports optional stemming and concept library expansion.


Context Actuator (LLM Write Path)

The Context Interface now supports bidirectional data flow. In addition to reading game state into prompts, NPCs can now write data back to the game world through LLM-generated context mutations.

New interface methods:

  • SetContextForTag(Tag, Value) — Write a value to a context tag

  • GetWritableTags() — Return which tags this actor allows the LLM to modify (empty = all writable)

How it works: When an LLM response includes a context_mutations array in its JSON output, the Brain processes each entry and routes it to the appropriate context source. The target actor validates the write against its writable tags and either accepts or rejects it.

{
  "npc_opening_statement": "The grain prices must rise...",
  "context_mutations": [
    { "tag": "GRAIN_PRICE", "value": "15" },
    { "tag": "RUMOR_MARKET", "value": "Prices expected to climb further" }
  ]
}

New Blueprint events:

  • OnContextMutated(Tag, Value) — Fires on the Brain
    Component whenever a context mutation is successfully applied. Use this
    to trigger UI updates, game logic, or chain reactions.

Registering context sources:

  • RegisterContextSource(Source) — Register any actor implementing IPersonicaContextInterface as both a read and write target

  • UnregisterContextSource(Source) — Remove a context source

InventorySource Deprecation: The InventorySource property on the Brain Component is now deprecated. Existing connections are automatically migrated to the unified ContextSources system on BeginPlay — your project will continue to work without changes, but you’ll see a log warning encouraging you to switch to RegisterContextSource(). New projects should use RegisterContextSource() exclusively.

Safety: The GetWritableTags method gives developers full control over what the LLM can modify. A lore manager might allow writes to biography tags but protect world history. An economy actor might allow price adjustments but prevent direct currency manipulation. Return an empty array to make all tags writable, or return specific tag names to restrict access.
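The gating rule above is simple to picture. A minimal sketch of the accept/reject check, using an illustrative ContextSource struct rather than the plugin's actual types:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Sketch of write gating: an empty list means every tag is writable;
// otherwise only listed tags accept LLM context mutations.
struct ContextSource {
    std::vector<std::string> WritableTags;  // empty => everything writable

    bool AcceptsWrite(const std::string& Tag) const {
        if (WritableTags.empty()) return true;
        return std::find(WritableTags.begin(), WritableTags.end(), Tag)
               != WritableTags.end();
    }
};
```

So an economy actor exposing only {"GRAIN_PRICE"} would accept the GRAIN_PRICE mutation from the JSON example above while rejecting, say, a direct write to player currency.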

Dynamic Tag Resolver: Context Sources Now Support Read Path

Previously, actors registered via RegisterContextSource() could only receive writes from LLM context mutations. Custom prompt tags like {SHOP_STATUS} would only resolve if the Brain’s owner actor implemented IPersonicaContextInterface directly, or if a Blueprint delegate was bound to OnResolveContextTag.

Registered context sources are now queried during prompt tag resolution. Any actor registered via RegisterContextSource() can provide values for custom tags through its GetContextForTag implementation. The resolution order is: Blueprint delegate → registered context sources → owner actor.

This means you can now register a shop actor, economy manager, or any other context provider and use its tags in prompt templates without the owner actor needing to know about them.
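The resolution order can be pictured as a first-match-wins fallback chain. A hedged plain-C++ sketch (Resolver and ResolveTag are illustrative names, not the plugin's internals):

```cpp
#include <functional>
#include <optional>
#include <string>
#include <vector>

// A provider returns a value for a tag, or nullopt if it doesn't know it.
using Resolver = std::function<std::optional<std::string>(const std::string&)>;

// Walk the chain (delegate -> registered sources -> owner); first hit wins.
std::optional<std::string> ResolveTag(const std::string& Tag,
                                      const std::vector<Resolver>& Chain) {
    for (const auto& R : Chain)
        if (auto V = R(Tag)) return V;
    return std::nullopt;  // tag left unresolved
}
```

A registered shop actor would simply be one entry in this chain, answering for {SHOP_STATUS} without the owner actor knowing the tag exists.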


Flexible Profile Data Dictionaries

The UPersonicaProfile now natively implements the Context Interface and includes generic data dictionaries, allowing developers to expand NPC profiles without writing custom C++ structs.

  • Genre-Agnostic Variables: Add custom data to your NPCs (e.g., job
    titles, ages, or status effects) using the new StringMetadata,
    IntegerStats, and StatusTags properties directly in the Profile data
    asset.

  • Native Actuator Integration: Because the Profile acts as a
    registered Context Source, the LLM can seamlessly read these custom
    variables into its prompt and dynamically update them via
    context_mutations.

  • Granular Write Protection: A new WritableDataTags array gives
    developers strict control over which specific variables the LLM is
    allowed to overwrite, protecting read-only lore while allowing safe,
    dynamic status updates.


Flash Attention Toggle

Flash Attention (-fa) for the local inference server is now configurable through Project Settings and at runtime. Disabled by default.

Project Settings: Global LLM Configuration → Local Server → Enable Flash Attention

Runtime (for player-facing settings menus):

  • UPersonicaSettings::SetFlashAttentionEnabled(WorldContext, bEnabled) — Toggle flash attention and restart the server

  • UPersonicaSettings::IsFlashAttentionEnabled() — Query the current state to initialize UI toggles

The setting persists across sessions via SaveConfig.


JSON Resilience & Error Recovery

The Brain’s response parsing pipeline is now more resilient to model hallucinations or truncation errors that previously caused NPCs to hang in a “Thinking” state.

JSON Sanitizer/Soft Parse:

  • The Brain now includes an automated sanitizer that attempts to repair malformed JSON before it reaches the parser.

  • Automatically closes dangling braces ({}), terminates unclosed
    quotes, and strips out illegal Markdown code fences (like ```json) that
    models sometimes include despite instructions.

  • What this means: If an LLM hits its token limit and cuts off
    mid-sentence, the Sanitizer will “soft close” the JSON, allowing the
    game to still extract the partial dialogue instead of rejecting the
    entire response. Likewise, if an LLM outputs an incorrect closing to its
    JSON, the Sanitizer repairs the output so the response can still be
    parsed.

  • To toggle: select a Personica Brain Component and select Enable JSON Sanitizer.
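To illustrate the idea (this is not the plugin's actual sanitizer, and it ignores braces inside strings), a minimal “soft close” repair in plain C++ might strip Markdown fences, terminate an unclosed string, then balance braces:

```cpp
#include <string>

// Best-effort repair of truncated or fenced LLM JSON output.
std::string SoftCloseJson(std::string S) {
    // Strip a leading ```json fence and a trailing ``` fence if present.
    if (S.rfind("```json", 0) == 0) S.erase(0, 7);
    if (auto P = S.rfind("```"); P != std::string::npos) S.erase(P);

    // Close an unterminated quoted string (ignoring escaped quotes).
    int Quotes = 0;
    for (size_t I = 0; I < S.size(); ++I)
        if (S[I] == '"' && (I == 0 || S[I - 1] != '\\')) ++Quotes;
    if (Quotes % 2 != 0) S += '"';

    // Append a closing brace for every dangling opener.
    int Depth = 0;
    for (char C : S) { if (C == '{') ++Depth; else if (C == '}') --Depth; }
    for (; Depth > 0; --Depth) S += '}';
    return S;
}
```

A response cut off mid-value still parses after repair, so the partial dialogue line can be extracted instead of the whole response being discarded.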

Brute Force Regex Fallback:

As a last resort, the Brain can now bypass the JSON parser entirely using a regex-based extraction method.

  • If the JSON is so mangled that the Sanitizer cannot fix it, the
    Brain will scan the raw text for the dialog_line key (or your custom
    template key) and attempt to “brute force” the text out of the mess.

  • What this means: Even if the model outputs 500 words of garbage, if
    there is a valid dialogue string anywhere in the response, the NPC will
    still speak it. Disabled by default.

  • To toggle: Select a Personica Brain Component and select Enable Regex Fallback.
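Conceptually, the fallback is a single regex pulling the dialog value out of raw text. A sketch of the extraction, assuming the default dialog_line key (not the plugin's actual implementation):

```cpp
#include <regex>
#include <string>

// Scan raw text for  "key": "value"  and return the captured value,
// even if the closing quote or surrounding JSON never arrives.
std::string ExtractLine(const std::string& Raw,
                        const std::string& Key = "dialog_line") {
    // Value = any run of non-quote/non-backslash chars or escaped pairs.
    std::regex Pattern("\"" + Key + "\"\\s*:\\s*\"((?:[^\"\\\\]|\\\\.)*)");
    std::smatch M;
    if (std::regex_search(Raw, M, Pattern)) return M[1].str();
    return {};  // nothing salvageable
}
```

Because the pattern does not require a closing quote, it also recovers lines from responses truncated mid-string.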


Multi-NPC Stability Improvements

Significant improvements to the streaming pipeline when multiple NPCs generate responses simultaneously:

  • Global Generation IDs: All Brain Components now
    share a single monotonic ID counter, preventing ID collisions when
    multiple NPCs dispatch requests in the same frame

  • Sequence ID Guards: The LocalLLMManager’s full
    streaming pipeline (polling, chunk processing, completion) uses sequence
    IDs to prevent cross-contamination between NPC requests

  • Buffer Flush on Completion: When an HTTP stream
    completes, remaining buffered tokens are flushed immediately to the
    Brain’s DialogBuffer before the JSON parser runs, preventing
    partial-buffer parse failures

  • Spawn Stagger: Brains that auto-start on BeginPlay
    now stagger their initial requests using a sequential index, reducing
    the burst load on the local server

  • Cancel Safety: CancelGeneration now
    guards against clobbering a job that has already completed, preventing
    state corruption when snip timing overlaps with natural HTTP completion
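The generation/sequence ID mechanics above can be sketched as a global atomic counter plus a per-brain guard (illustrative names, not the plugin's internals):

```cpp
#include <atomic>
#include <cstdint>

// One monotonic counter shared by all brains: IDs never collide,
// even when multiple NPCs dispatch requests in the same frame.
std::atomic<uint64_t> GGenerationCounter{0};

struct BrainRequestGuard {
    uint64_t ActiveId = 0;

    uint64_t BeginRequest() {
        ActiveId = ++GGenerationCounter;  // globally unique request ID
        return ActiveId;
    }
    // Drop chunks/completions from cancelled or superseded requests,
    // so streams from other NPCs (or stale jobs) never cross-contaminate.
    bool ShouldAccept(uint64_t ResponseId) const { return ResponseId == ActiveId; }
};
```

Every streamed chunk carries the ID it was issued under; anything that no longer matches the brain's active ID is silently discarded.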


Additional Changes

  • Live Monitor Enhancements: The Personica Debugger (SPersonicaDebugger) has been updated to track and display real-time, comma-separated readouts of the new StatusTags, IntegerStats, and StringMetadata dictionaries within the Traits foldout.

  • Llama.cpp updated to a more recent version (b8646).


Upgrade Notes

  • No breaking changes. All new features are additive. Existing projects compile and run without modification.

  • Concept Library is optional. Memory retrieval improves immediately from TF-IDF + stemming alone. Add a concept library when you want synonym awareness.

  • Context Actuator is opt-in. The context_mutations JSON field is only parsed if present. Existing prompt templates that don’t include it are unaffected.

  • InventorySource is deprecated. Existing connections auto-migrate
    at runtime. Switch to RegisterContextSource() at your convenience. The
    {INVENTORY} tag continues to work through the dynamic tag resolver.