Conversations is now available as an Experimental feature in v40.20. We release tools as Experimental to give you the chance to try out new functionality that is still under development, and to help us improve it based on your feedback.
Generative AI does not belong in Fortnite’s ecosystem: it removes the creativity players are willing to put in, in favor of a path of least resistance that steals art from creators who did put the work in. At the same time, the technology is underdeveloped, yet many believe it is hyperaware despite being little more than a text-to-speech generation algorithm.
Generative AI in its current state is inherently unethical; right now it causes far more harm than good, and implementing it in any capacity in Fortnite will continue to normalize it. Even setting aside the devastating cultural and environmental impacts AI has had, people don’t understand what these tools do, and children especially can’t comprehend what they are talking to. Gemini isn’t ‘creating’ new voice lines for new personas; it is drawing on previously existing audio and dialogue and simply attempting to predict what word comes next. This will lead to less creativity, as players invest less effort into actually crafting what characters say, and to voice actors being replaced with AI-slop text-to-speech dialogue. This technology might also confuse young players, who may assume there is a genuine intelligence behind these characters, since they can react in real time. It could also be an invasion of privacy if Epic Games, or Google behind the scenes, records player dialogue for future AI use.
Is the end goal for this technology that players can create whole games at the press of a button? If so, millions of people are going to spam those buttons to create their attempts at paradise, leaving Fortnite’s ecosystem full of millions of vaguely similar experiences that vastly outnumber the people who actually play them.
It would be nice if we could use parts of the system independently. For example, the text-to-speech part, powered by ElevenLabs, would be very useful on its own.
It could work like this:
Define the character to control the voice.
Run a function to make the character read the lines. This function takes the text to read and the emotion to convey.
In this use case, there is no need to define the persona or give it context on the world.
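As a thought experiment, the proposed flow above could look something like the Verse sketch below. To be clear, every name in it (`narrator_device`, `tts_emotion`, `Speak`, the `Voice` field) is hypothetical — no such text-to-speech API exists in UEFN today; this is only one way the requested feature might be shaped.

```verse
using { /Fortnite.com/Devices }
using { /Verse.org/Simulation }
using { /UnrealEngine.com/Temporary/Diagnostics }

# Hypothetical emotion options for a spoken line.
tts_emotion := enum{Neutral, Happy, Sad, Angry}

narrator_device := class(creative_device):
    # Hypothetical: a voice definition chosen in the editor, with no
    # persona or world context attached.
    @editable
    Voice : string = "DefaultVoice"

    # Hypothetical: read a line aloud with a given emotion.
    # Here it only prints, as a stand-in for real audio playback.
    Speak(Text : string, Emotion : tts_emotion)<suspends> : void =
        Print("[{Voice}] says: {Text}")

    OnBegin<override>()<suspends> : void =
        Speak("The storm is closing in.", tts_emotion.Neutral)
```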
Edit:
One of the biggest benefits of generating audio at run time is that audio files won’t count against project size, which also reduces memory usage. This should make narration-heavy experiences possible. Live generation typically takes time, but this can be addressed by starting the generation a few seconds before the audio is needed.
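The "start a few seconds early" idea maps naturally onto Verse's `spawn` expression: kick off generation in the background while gameplay continues. In this sketch, `spawn`, `Sleep`, and `Print` are real Verse constructs, but `GenerateNarration` is a hypothetical placeholder for whatever audio-generation call the feature would expose.

```verse
using { /Fortnite.com/Devices }
using { /Verse.org/Simulation }
using { /UnrealEngine.com/Temporary/Diagnostics }

prefetch_example := class(creative_device):
    OnBegin<override>()<suspends> : void =
        # Start generating the line now, in the background.
        spawn{GenerateNarration("Welcome, traveler.")}
        # Gameplay continues; generation finishes while we wait.
        Sleep(5.0)
        # By the time the line is needed, the audio should be ready to play.

    # Hypothetical stand-in for a real audio-generation API.
    GenerateNarration(Text : string)<suspends> : void =
        Print("Generating narration for: {Text}")
```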
Very excited about this feature and its potential. I’m wondering what the plans are moving forward, specifically whether there is a plan to incorporate variable filters into the NPC prompts. I would love for my NPCs to be able to ‘recognize’ facts about the player without the player having to state them directly, by having the prompt access the player’s data along with their speech input.
Thank you for this update. In my opinion this is a “game changer”, like moving from 2D graphics to 3D, and now from predefined conversations to live conversation. This is the future.

I would just like to ask for an improvement in censoring. Currently, if the conversation touches on psychological themes or self-help, the AI begins a sentence and then cuts off after realizing it is inappropriate, telling the player to go see a specialist. I understand that censorship is probably not going away, but maybe it is possible to have the AI formulate the sentence, reread it, and then respond in an in-game manner that fits the gaming atmosphere, instead of breaking the immersive experience by saying something that is not game related. I think people are mindful enough to understand that this is just a game and that nothing in it should be carried into life outside the game; they should not be reminded of it in a way that breaks immersion. For example, if I ask where the exit from a place is, it responds “you are cared for, go see a specialist”, and if I say “do you need any help?” (meaning help with a mission or something), it again says to seek a specialist. That is a game breaker for sure, because while playing you should be able to assume that everything in the game is about the game and not about your life. When you are shooting a zombie, the game does not interrupt you with statements like “if you want to shoot something, you should seek a specialist” or “if you think zombies are real, you should seek a specialist”.

Playing with a persona in its current state feels like it is pretending to be a “psychologist” by referring the player to a “specialist”, which as I understand it is the complete opposite of what Epic was trying to achieve. In fact, by referring the player to a “specialist”, the AI conversation is violating its own rule against giving psychological advice, because recommending a specialist is itself psychological advice.
I think that AI Conversations should always assume that everything the player says is game related, and not jump into a “psychological” analysis of the player, giving “psychological” advice to seek a specialist whenever it does not like how the player phrased something; that is not part of the game. If nothing else, at least remove the recommendation to seek assistance and just cut the sentence off; then to the player it will at least look like an AI glitch instead of a psychological recommendation to see a specialist. I hope you understand what I mean. I want this product to be the best on the market, and I want to be a long-term creator. Thank you.
Addition. Good news: if you state in the personality field that the AI should always interpret everything as part of the game and follow Epic’s UEFN guidelines, as in never give psychological advice, it improves and makes the character much more immersive. Good job; I hope it keeps improving with time.
We need the ability to communicate in game with AI characters using text input, just as easily as pushing a button to turn on the microphone, because not everybody is able to communicate by microphone. Also, everything the player says should be transcribed as text, to show that the player’s input was registered correctly; otherwise the player does not know what the AI heard and whether some words were misheard. Likewise, everything the AI says should also be transcribed, so the player can reread what was said if they missed the audio or do not have the ability to listen. Thank you
I’ve already been toying with this and I’m ecstatic about the results. Having experimented with a variety of AI-powered characters in Unreal and beyond, this is a super clean, streamlined way to realize my ideas.
I’m eager to keep playing around with this and hope I’ll be able to release these experiences to the public sooner rather than later!
To have a second persona, you just make a copy of your PersonaNPCdefinition, add “2” or something to the name, and fill in all the details inside. Everything seems to be fine, but I do not understand how to choose which persona I am communicating with. Sometimes I get one, sometimes the other. It does seem to be proximity based and picked automatically, but by what distance and by what criteria? Can we have some kind of way to choose which persona we communicate with? Maybe someone already knows how this works? Thank you
Before we start focusing on adding LLMs to the game, can y’all please polish up the existing devices? I mean, the stat creator device still has join-in-progress issues, you can’t simulate the pregame lobby in a UEFN session, and the VFX spawner never works properly. Heck, a UEFN session’s behavior is different from when you actually play the published island. Don’t get me wrong, this stuff is cool and all, but there are countless bugs with devices that have been here forever (which I hope y’all didn’t delete the reports for, like Tim Sweeney said y’all do).
Here is a bug report from last January that still has not been properly fixed. It seems like the UEFN team consists of 3 people.
Gemini wrote code for 2m proximity-based interaction, so now it is easy to interact with multiple personas based on proximity. Note that it is optimized for only one player. If anyone is interested, here is the code:
using { /Fortnite.com/AI }
using { /Fortnite.com/Playspaces }
using { /Fortnite.com/Characters }
using { /UnrealEngine.com/Conversations }
using { /UnrealEngine.com/Temporary/Diagnostics }
using { /UnrealEngine.com/Temporary/SpatialMath }
using { /Verse.org/Chat }
using { /Verse.org/SceneGraph }
using { /Verse.org/Simulation }