
Social AI behavior in games: Why are there so few of them, and are they a possibility in UE4?

Just a heads up before I start with the topic at hand, I’m not a programmer, far from it, so take everything with a grain of salt.

Ok, so what is this all about? You might or might not remember a little game called Façade, released in 2005. That game took the internet by storm. Why?

It was the first game to feature interactive AI in a social setting. You, as the player, type responses to the characters’ questions, and depending on the answer you provide, the game moves to a new Beat (event). Inside a Beat everything is dynamic, and in the background the next Beat is prepared based on the outcome.
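As a rough illustration of the Beat structure described above, here is a minimal Python sketch. All names are invented for the example; this is not Façade’s actual architecture, just the general idea of a drama manager picking the next eligible Beat from accumulated story state:

```python
# A minimal sketch of Façade-style Beat sequencing, with invented names
# (this is not Façade's actual architecture). Each Beat declares when it
# may fire; a small drama manager picks the next eligible Beat based on
# the story state accumulated from previous outcomes.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Beat:
    name: str
    can_fire: Callable[[dict], bool]   # precondition over story state
    run: Callable[[dict], None]        # plays the beat, mutates state

def next_beat(beats, state):
    """Return the first Beat whose precondition holds, or None."""
    for beat in beats:
        if beat.can_fire(state):
            return beat
    return None

# Example: "argument" only becomes eligible once tension has built up.
state = {"tension": 0}
beats = [
    Beat("argument",   lambda s: s["tension"] >= 2,
                       lambda s: s.update(tension=s["tension"] + 1)),
    Beat("small_talk", lambda s: True,
                       lambda s: s.update(tension=s["tension"] + 1)),
]

for _ in range(3):
    next_beat(beats, state).run(state)
# Two rounds of small talk raise tension; the third step fires "argument".
```

The point of the sketch is that no Beat hard-codes its successor: which Beat comes next falls out of the state the player’s choices produced.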

https://youtube.com/watch?v=GmuLV9eMTkg

https://youtube.com/watch?v=BSoB19aE9RM

Now that we are on the same page, my question is: why are there so few games like this? When we have Alexas responding to all our inquiries, clearly the technology is there. Façade was released in 2005, in the PS2 era, and since then no game has really tried to compete with it.

I want to understand a bit more about the difficulties of implementing these systems, and the limitations within Unreal Engine 4, because I would really love for these games to become a reality, not just an obscure genre from the internet.

Thank you for reading.

Novelty is a powerful, but short-lived, beast.

As you write, Façade came out in the PS2 era - the teenage years of videogames. Everything was about shooting, being cool, and being rebellious. God of War wasn’t a thoughtful daddy but a vengeful, nothing-but-angry “dude-bro”. Being on par with contemporary art was a glittering star toward which videogames could travel. Conceptual art pieces, though, don’t sell well as a mainstream product, and that is one of the major reasons these simulators are not more popular.

From a technical perspective, any branching you add to your game grows the production scope immensely. This problem is partly solved by using neural networks and deep learning (your Alexa reference), but the results would be unpredictable (hard to design around). If an experience is to be cohesive and coherent, all paths and options must be thought through. That unpredictability could also give life to some wonderful, non-designed moments (a.k.a. emergent game design). At the same time, all of this is a big lottery that costs a lot of money, time, and effort to put together, with very risky returns.

Still, some try. Mike Bithell’s studio released Subsurface Circular. Heaven’s Vault, only a few months old, also comes to mind. Both do very smart, new things with language. There are plenty of ‘horny’ games on Steam where you might find the experience similar to Façade. You can obviously also look to the past of classic text adventures, or later Ultima, Albion and the like, all of which heavily utilized text parsers.

All of that is just language. Add to it another variable layer of simulation, such as ‘social hierarchy’, and you might finally get an answer to your question. It’s a tough, risky, and technically challenging endeavour. It might work out thanks to the novelty, which also means you need to be first.

Thank you for taking the time to respond, I really appreciate it. I agree with everything you said, especially that a branching storyline grows production scope - I’ve experienced that myself. I can’t help but feel that games nowadays are really uninspired, or at least that’s the feeling I get from playing them. I really want to give this a try, so thanks for all the suggestions, I’m going to check them out :slight_smile:

Oh, and by the way, I made this game. It’s the first game where I did all the coding myself, so it’s nothing amazing, but you can see what I’m trying to do, should I implement the social system.

Hope you like it.

The limitation isn’t really a UE4 issue. UE4 could quite reasonably deliver amazing social interactions; the limitation is far more about giving someone enough time and money to do something like this at a quality level that makes a compelling case for it to be “the standard”. There was an EA game called Project LMNO where they were trying to do more of the social interaction thing within a more action-based game. But it turned out to be too hard, and management felt the prototype didn’t offer enough to sell the concept.

There are plenty of things we can do to make better games in this regard. But the cost of things like animation limits what the more experimental of us can achieve. Honestly, if I had a budget of around 100k I could make something pretty amazing (that budget would mostly go to character art and mocap equipment). But there’s the rub: having a spare 100k isn’t something I’m likely to have any time soon. The biggest issue is that the industry has largely sidestepped the need for this by employing things like cutscenes, so that any form of character exposition is simply a replay of mocap or sequenced animation.

There was an interesting attempt at something useful in this area in the VR space when Oculus studios (now closed) made Henry the Hedgehog.

Research-wise, there is a lot of work around this aspect of characters in the interactive storytelling space, but it is again limited by budgets for art and animation. My own research is trying to resolve part of that problem by applying computer vision techniques and deep learning to extract animation and behavior from movies. Although I don’t think that is the most useful route if you had a budget big enough to pay for art and mocap, I hope it will eventually at least allow for better evaluation of character performances.

So, there’s not a clear answer other than “it’s hard”, “nobody has the budget to really address it”, and “commercially, nobody really cares”.

Just look at how little effort Epic put into AI compared to rendering. You’ll understand.

Personally I DO care about it, so I’ve devoted all my research time to working on it. But I’d love to have even a small AA budget to try to address the art-and-animation side of things.

Social behaviour is immensely detailed. Our brains evolved to read more into tiny cues than an objective non-human observer might. If the cue is slightly wrong we notice it, strongly. If the cue is right it becomes part of a bigger flow of communication. For that reason realistic AI social behaviour can only come from a detailed neural network that, in order to be complete, would basically be us.

I remember Façade; although the game was short, it was a very impressive achievement. I’ll need to watch it and read the accompanying research some time.

I agree with you, today’s games are impressive in scale but lacking in interactivity. I have thought a lot about the challenge of making games with what you call social AI. It comes with some challenges that are not impossible to overcome, but they need to happen in order, so it’s going to take a while… Somewhat of a thought dump incoming.

First of all, interesting moments in stories come from careful planning. Achieving a simulation according to some specification of social dynamics is probably quite doable, but making the game experience interesting and engrossing will require a smart combination of authored content and an AI simulation, so that the AI does the planning a writer normally does. For example, relatively small dialogue trees are written by hand, and then the AI decides how to jump from one dialogue tree to another, or alters the dialect or tone of the sentences to match the character or the current context in the game. That technology doesn’t exist yet, but I think it may appear in the future. There’s no shortage of people who want to achieve social AI in games or other virtual experiences.

Secondly, finding writers to work with that new technology would be difficult. Currently, almost all games with choice use large, written-out dialogue trees. That approach, as others mentioned and as you noticed, exponentially increases development workload as you add more options and want those options to have an effect. But the big advantage of dialogue trees is that it’s easy for an experienced game writer to imagine the conditions and outcomes. How can we add more options but keep complexity down? As I mentioned: prepare small pieces of content that the AI can link together. But AI-driven dialogue with more flexibility in traveling between pieces of content also makes it harder to imagine the outcome at the time of writing. A good game writer wouldn’t necessarily make a good writer for this new approach. People making the content need to be a mix of programmer and writer: highly creative, to create a unique experience, and highly precise, to create content that will adapt to any circumstance. These people exist, but I think it’s a somewhat rare combination of characteristics, so the pool of people who would R&D such AI technology is small.
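To make the contrast concrete, here is a minimal Python sketch of the “small authored pieces plus an AI linker” idea. The fragment texts and tags are invented for illustration, not taken from any shipped system: each hand-written fragment declares the world-state facts it needs, and a selector picks the most specific eligible fragment, so writers author N fragments instead of a tree with every combination spelled out:

```python
# Sketch of the "small authored pieces + AI linker" idea (fragment
# texts and tags are invented for illustration). Each hand-written
# fragment declares the world-state facts it requires; a selector
# picks the most specific eligible fragment, so N fragments cover
# many situations without authoring every branch combination.

fragments = [
    {"text": "Thanks for saving my farm.", "requires": {"saved_farm"}},
    {"text": "Stay away from my cows!",    "requires": {"insulted_cow"}},
    {"text": "Nice weather today.",        "requires": set()},
]

def pick_fragment(fragments, context):
    """Choose the fragment whose requirements are met and most
    specific; the tag-free fragment acts as a fallback."""
    eligible = [f for f in fragments if f["requires"] <= context]
    return max(eligible, key=lambda f: len(f["requires"]))
```

With a context of `{"saved_farm"}` the farm line wins; with an empty context the selector falls back to small talk. The authoring difficulty described above shows up here too: the writer of each fragment can no longer see exactly which fragment came before it.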

Third, human language is very peculiar, so even if we reach the point where the AI can decide what topic to talk about and what answers to give to your input (in a programmatic form), it will still need to sound human. I’m not even talking about voice synthesis: the sentences we form are affected by the sentence we said before, or earlier in the conversation, or we reuse verbiage that the other person used. That’ll be an interesting challenge, but in the meantime games could simply describe what topics the character is thinking about. “The NPC appears appreciative of the fact that you saved his farm. He thanks you for saving his cows. However, he didn’t like that you called his favorite cow fat.”
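That stopgap of narrating attitudes instead of generating speech can be sketched very simply; the event names and template strings below are invented for the example:

```python
# Sketch of the "narrate the NPC's attitude instead of synthesizing
# speech" idea (event names and templates invented). Symbolic gameplay
# events are rendered as third-person descriptions, sidestepping the
# natural-language generation problem entirely.

TEMPLATES = {
    ("saved", "farm"):   "The NPC appears appreciative that you saved his farm.",
    ("saved", "cows"):   "He thanks you for saving his cows.",
    ("insulted", "cow"): "However, he didn't like that you called his favorite cow fat.",
}

def describe_reactions(events):
    """Turn (verb, object) gameplay events into narrated reactions."""
    return " ".join(TEMPLATES[e] for e in events if e in TEMPLATES)
```

The AI side only has to decide *which* symbolic events matter; the hand-written templates carry all the language, so nothing can come out sounding inhuman.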

Then a side note: the funding for the R&D to achieve that technology. I don’t think there are incentives for big tech companies to invest in social AI for a game. Alexa has a clear, real-world incentive to be developed: Alexa makes it easy for people to buy things from Amazon from their living room. Siri and the like generate incredibly useful big data about people’s behavior as a population (and perhaps the companies learn about you personally as well). Games, on the other hand, are most of the time enclosed sandboxes, not linked to your wallet or your email account. Perhaps a bit of a cynical take, but that’s the reason big companies don’t invest in social AI for games. Just a side note, because I do expect the technology to come, but it’ll be either from universities or from someone’s passion project.

Actually, this technology has been around, and has had lots of research put into it for decades, under “Interactive Digital Narrative (IDN)” and “Interactive Storytelling (IS)”. In fact, Façade itself was an IS experiment. There have been a lot of those over the years.

I’m not so convinced about that. Have a look at Elan Ruskin from Valve’s talk about their experimental system that produced “Two robots one wrench” (I know, groan). Their system seemed pretty easy to use, for their writers at least.

The point is that for the most part the hurdles are 1) No commercial interest and 2) See hurdle 1 >:)

Ok, that and having a decent animation budget etc.

Weirdly, I was expecting Ken Levine to have produced something in this area by now, given that he took his company off to look at this sort of thing. But I haven’t heard anything recently, which is a bit sad.

So, TL;DR: this isn’t really a technical problem, more a commercial one.

Awesome, I’ll check out IDN and Elan Ruskin, thanks for the recommendations! Did Ken Levine ever hint that he was going to experiment with this kind of subject? I only know him by name from his work on BioShock, and while I loved BioShock: Infinite’s story (spoilers for that game incoming) and Elizabeth’s gameplay AI, [SPOILER]the main plot was rather linear - kind of a letdown when I played it a second time[/SPOILER].

It was a bit of a blanket statement that the tech doesn’t exist.

When I was doing computer science at university we had to learn a language called Prolog, which I didn’t enjoy at the time, until I realized its amazing potential years later. For those who don’t know it: it’s a flexible query-solving language where you provide symbolic knowledge and symbolic rules, and can then query what other knowledge can be derived from what is provided. What is interesting about Prolog is that a query can have multiple free variables, and Prolog will find and return combinations of values that fill those free variables so that the query is satisfied. That feature set is something social AI needs: the ability to solve broad queries (how am I feeling today?) and return detailed information from which dialogue can be generated (I’m feeling great because I came across a cat and got to pet it <= this being an actual gameplay event).
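The free-variable querying described above can be mimicked in a few lines of Python. This is a toy sketch with invented predicates, nowhere near a real Prolog engine (no rules, no recursion), but it shows the shape of the feature: ask a pattern containing ‘?’-prefixed variables and get back every binding that satisfies some known fact:

```python
# A toy, Prolog-flavored fact store (all predicates invented). A query
# is a tuple that may contain '?'-prefixed free variables; matching it
# against the fact set yields every satisfying variable binding.

def is_var(term):
    return isinstance(term, str) and term.startswith("?")

def match(pattern, fact, bindings):
    """Match a pattern tuple against a fact tuple, extending the
    variable bindings; return None if they are incompatible."""
    b = dict(bindings)
    for p, f in zip(pattern, fact):
        p = b.get(p, p)          # substitute an already-bound variable
        if is_var(p):
            b[p] = f             # bind a free variable to the fact's value
        elif p != f:
            return None          # constant mismatch
    return b

def query(pattern, facts):
    """Yield every binding of the pattern's free variables."""
    for fact in facts:
        if len(fact) == len(pattern):
            b = match(pattern, fact, {})
            if b is not None:
                yield b

# Gameplay events recorded as symbolic facts.
facts = {
    ("did", "player", "pet", "cat"),
    ("did", "player", "save", "farm"),
    ("did", "player", "insult", "cow"),
}

# "What did the player do, and to what?" -- two free variables.
answers = {(b["?verb"], b["?thing"])
           for b in query(("did", "player", "?verb", "?thing"), facts)}
# answers == {("pet", "cat"), ("save", "farm"), ("insult", "cow")}
```

A real Prolog adds rules and backtracking on top of this, which is what lets a broad query like “how am I feeling today?” bottom out in concrete gameplay events like the cat example.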

I haven’t found anything that approaches my ideal tech yet: something that uses logic programming but can also be integrated into a UE4 game. Last year I started prototyping my own logic programming solver for the purpose of eventually experimenting with this idea in UE4. That part is done, but unfortunately I lost my ambition when the main challenge became understanding UE4’s graph editor code. :stuck_out_tongue: I would love to continue someday and contribute something to this field in a UE4-ready way. I love game-ready social tech, like this plugin I made for Garry’s Mod a while back (not AI though): http://www.flexposer.com/

Months later, I see you’ve replied. Thanks, Epic, for the great forum experience lol.

Anyway, I really enjoyed the playthrough you have on itch.io.

A solid Twin Peaks-inspired experience!