Training Stream - Advanced AI - May 12th, 2015

WHAT
Training Content Creator is joined by Lead AI Programmer Zielinski as they build on the Basics of AI stream and delve into more advanced AI! They will show you how to create complex AI behavior using Behavior Trees and the Environment Query System (EQS).

WHEN
Tuesday, May 12th @ 2:00PM-3:00PM ET - Countdown

WHERE

WHO

  • Training Content Creator
  • Zielinski - Lead AI Programmer

Feel free to ask any questions on the topic in the thread below, and remember, while we try to give attention to all inquiries, it’s not always possible to answer questions as they come up. This is especially true for off-topic requests, as it’s rather likely that we don’t have the appropriate person around to answer. Thanks for understanding!

Edit: The YouTube archive is now available here: Setting Up Advanced AI | Live Training | Unreal Engine - YouTube

I would be really interested to see some details about the new perception system. Particularly how to extend it in C++ with more advanced senses.

I’d like to hear how Mieszko views the long-term AI roadmap for the engine: what features he intends to work on himself in the near term, what features the community should produce, etc. Also, I’d like to hear about his experience with regard to cover systems and the like.

Awesome, can’t wait to hear more about the AI roadmap, so +1 from zoombapup.

Also, if you could share more info on the AI work done for the deer in the Kite demo, particularly the crowd mentality, that would be stellar! =)

Hey guys, I just want to remind everyone that this is a training stream, so the focus will be teaching you how to use the current tools. We may not have time for discussion about the roadmap, but we’ll do our best.

Hi, I think a lot of people watching the stream will be interested in the roadmap, especially if there are going to be significant changes or updates in the future that could affect how/when people implement their AI.

Really looking forward to this one, if only I wasn’t in a time zone that makes it so I’d have to get up at 4am to watch it…

I totally understand, and we will try to answer all questions. We’re just limited on time and there is a lot to go over. It really just depends on how much time is left at the end to answer questions.

Maybe we can get him on the regular stream to talk more about future plans before he heads back, if we run out of time on the training stream. :slight_smile:

I look forward to a more advanced discussion covering EQS.

If possible it would be nice to cover the area of spatial LODing of AI. Basically handling AI at scale with persistence, over large distances. Maybe general thoughts on higher level system organisation and delegation from manager classes.
Anything like that would be awesome.

Will there be functionality for running an EQS query from a task, or as a decorator or something?

An EQS query can be run as just a Task, or it can be called from anywhere via the Blueprint node “Run EQS Query”. Both are available in 4.8; in 4.7 only the Task can be used.
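If you’re working in C++ rather than Blueprint, the same sort of request can be made through FEnvQueryRequest. A minimal sketch, assuming an AI controller with a UEnvQuery* asset assigned (AMyAIController, CoverQuery, and OnCoverQueryFinished are hypothetical names, and include paths vary a bit between engine versions):

```cpp
#include "EnvironmentQuery/EnvQuery.h"
#include "EnvironmentQuery/EnvQueryManager.h"

// Kick off the query; the callback fires when the query finishes.
void AMyAIController::RunCoverQuery()
{
	if (CoverQuery) // UEnvQuery* asset assigned in the editor (hypothetical member)
	{
		FEnvQueryRequest Request(CoverQuery, GetPawn());
		Request.Execute(EEnvQueryRunMode::SingleResult, this,
		                &AMyAIController::OnCoverQueryFinished);
	}
}

void AMyAIController::OnCoverQueryFinished(TSharedPtr<FEnvQueryResult> Result)
{
	// SingleResult returns only the best-scoring item.
	if (Result.IsValid() && Result->Items.Num() > 0)
	{
		MoveToLocation(Result->GetItemAsLocation(0));
	}
}
```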

If anyone has any more questions about the AI systems (or how to use them) after the stream, I’ll be around and can make some tutorial material to cover those areas. I figured it’d only be fair to do that AFTER the training stream so that everyone can get a grasp of the new tools. Maybe we can have a Twitch chat after the stream or something, I dunno. Point being that there’s some cool new stuff in there that’s very useful, and it’d be great if we could get people up to speed with it.

See you Tuesday!

Looking forward to this one!

Q1. In the stream demonstrating techniques used for the kite demo, there was a short explanation about a dynamic nav-mesh system that built tiles on the fly as needed. How far off is this from being integrated and ready to use?

Q2. Something I asked about on the last stream, but just wanted to ask again: any plans to implement a BT node/construct to allow choosing of tasks based on some weighted probability function? Currently it seems really hard to work around the priority-based structure when you want to introduce some randomness into the behaviour. At the moment I have a sequence which starts with a task used to make the randomized choice, followed by a selector which decorates each child with a test to see if that particular task was the one chosen. It’s very ugly!
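For what it’s worth, one way to tidy up that workaround today is a small custom task that rolls the weighted choice and writes the winning index into an int Blackboard key, with each child branch gated by an “is equal to N” style decorator. A rough sketch, not engine-provided functionality (UBTTask_WeightedChoice, Weights, and ChoiceKey are made-up names, and the ExecuteTask signature may differ slightly between engine versions):

```cpp
#include "BehaviorTree/BTTaskNode.h"
#include "BehaviorTree/BlackboardComponent.h"
#include "BTTask_WeightedChoice.generated.h"

// Hypothetical task: picks an index with probability proportional to Weights
// and stores it in an int Blackboard key; downstream branches check that key.
UCLASS()
class UBTTask_WeightedChoice : public UBTTaskNode
{
	GENERATED_BODY()

public:
	// Relative weight per choice, edited on the node in the Behavior Tree editor.
	UPROPERTY(EditAnywhere, Category = "Choice")
	TArray<float> Weights;

	// Int Blackboard key that receives the chosen index.
	UPROPERTY(EditAnywhere, Category = "Choice")
	FBlackboardKeySelector ChoiceKey;

	virtual EBTNodeResult::Type ExecuteTask(UBehaviorTreeComponent& OwnerComp, uint8* NodeMemory) override
	{
		float Total = 0.f;
		for (float W : Weights)
		{
			Total += FMath::Max(W, 0.f);
		}
		if (Total <= 0.f)
		{
			return EBTNodeResult::Failed;
		}

		// Roll once, then walk the weights until the roll is used up.
		float Roll = FMath::FRandRange(0.f, Total);
		int32 Chosen = 0;
		for (int32 Index = 0; Index < Weights.Num(); ++Index)
		{
			Roll -= FMath::Max(Weights[Index], 0.f);
			if (Roll <= 0.f)
			{
				Chosen = Index;
				break;
			}
		}

		if (UBlackboardComponent* Blackboard = OwnerComp.GetBlackboardComponent())
		{
			Blackboard->SetValueAsInt(ChoiceKey.SelectedKeyName, Chosen);
			return EBTNodeResult::Succeeded;
		}
		return EBTNodeResult::Failed;
	}
};
```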

[QUESTION] How can we draw the debug for the AIPerceptionComponent? For example, I want to draw the LOS cone inside PIE or in Simulate mode.
There is a Debug option under the Dominant Sense in AIPerceptionComponent. How do we use that?

[QUESTION] How do I set a bool to false once the AI has lost sight of the player? It always stays true once the player is detected.

My workaround: get the distance to the player and check it against the LoseSightRadius value, then cast a ray to detect whether the player is visible or not. Not efficient, I think, but it does the job.
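Another option, assuming your engine build exposes the OnTargetPerceptionUpdated delegate on UAIPerceptionComponent (it may not be present in every version), is to flip the bool from the perception callback instead of polling: FAIStimulus::WasSuccessfullySensed() reports false when the sight stimulus expires. A sketch with hypothetical names (AMyAIController, Perception, PlayerPawn, bCanSeePlayer):

```cpp
#include "Perception/AIPerceptionComponent.h"

// During setup (e.g. BeginPlay); the handler must be a UFUNCTION() in the header.
// Perception is the UAIPerceptionComponent you created on the controller.
Perception->OnTargetPerceptionUpdated.AddDynamic(this, &AMyAIController::HandlePerceptionUpdated);

// WasSuccessfullySensed() is true when sight is gained and false when the
// stimulus expires, so the bool goes back to false once the player is lost.
void AMyAIController::HandlePerceptionUpdated(AActor* Actor, FAIStimulus Stimulus)
{
	if (Actor == PlayerPawn) // hypothetical cached pointer to the player pawn
	{
		bCanSeePlayer = Stimulus.WasSuccessfullySensed();
	}
}
```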

[QUESTION]

I know there is a state-machine-style system within Unreal Engine called Pawn Actions. It makes sense, but my question is: what do you think about using it to create a task/planner system? Have you attempted to expand on this, or can you give us an example of how you’d use it? Thanks!

PS:
If anyone is interested in EQS & AI Perception, I ran a stream on it a few weeks back. Of course, after this video and asking questions, I plan to create a new video with additional information. Hopefully they talk about how to properly view/debug the Perception tool. :smiley:

[QUESTION] How would I use the pathing system of UE4 with procedurally generated content? Is there a way to generate a navmesh at runtime, or to generate a navmesh on my own and pass it to the navigation system?

The blueprint node is exactly what I need :slight_smile: Looks like I’ll be porting to 4.8 when it gets a bit more stable. Thanks :slight_smile:

I think 4.8 has some features for doing a navmesh on streaming terrain for instance. I think it can also be re-generated at runtime already if that’s possible for what you need.

Press ’ at runtime while looking at an AI, then press 4 on the number pad to see the debug display of the AI Perception component. The ’ key enables the AI gameplay debugging component.

Just to quickly touch on the questions that have not been answered.

This is a topic for a long conversation. But the way I’d see implementing a system like that in UE4 would be to use HotSpots (which are not currently available) that would define each cover as a hotspot slot, in combination with EQS to tell good covers from bad covers. This could be used for both static and dynamic covers, but if someone did static covers (like in Gears of War) I’d also do a fair amount of precomputing, to cache cover-to-cover visibility data for example.

We don’t support AI LOD-ing out of the box, and this is another topic for a long conversation. I’d look through all the AI Wisdom and Game AI Pro books (lots of good stuff there!).

It ships with 4.8; it works great in the 4.8 Preview 2 build, and we’ll record a short video showing how to use it.

Utility- or probability-based selectors are on the to-do list, and I’d very much like to get them in this year.

Not sure… can’t find it in 4.8 so we probably removed it! :smiley:

That’s actually one of the goals. The whole idea for pawn actions (which I’m currently redoing and will refer to as Gameplay Tasks, or in the case of AI, AI Tasks) is that they’re an interface between the AI’s body and all sorts of sources of “logic”. Logic in this case could be Behavior Trees, level scripting, AI “reactions” (like hit reactions), or any kind of externally implemented AI “brain” (HTN, FSM, GOAP, etc.).

There’s no way to have UE4’s navigation system use a hand-created navmesh. However, with some C++ skill, you could implement your own navmesh class (derived from RecastNavMesh) that could do that. We do support custom navigation types.

In terms of just having a navmesh with procedurally created content, all you need to do is configure your navmesh to be runtime-dynamic, which means the navmesh will regenerate at runtime when new navmesh-relevant actors get introduced or old actors are modified (transformed, destroyed).
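In case it helps to see where that flag lives, here is a sketch of flipping it from code, assuming you have the RecastNavMesh instance in hand. Normally you would simply set Runtime Generation to Dynamic on the navmesh actor or in the Navigation Mesh project settings; the property, enum, and header names below are from 4.8-era source, so double-check them against your engine version.

```cpp
#include "AI/Navigation/RecastNavMesh.h" // 4.8-era header path

// Equivalent to setting Runtime Generation = Dynamic in the editor: tiles
// rebuild at runtime as navmesh-relevant actors spawn, move, or are destroyed.
void MakeNavMeshRuntimeDynamic(ARecastNavMesh* NavData)
{
	if (NavData)
	{
		NavData->RuntimeGeneration = ERuntimeGenerationType::Dynamic;
	}
}
```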

A test’s scores get normalized across all items tested, and then the normalized score of each item is multiplied by the test’s weight. There are ways to specify the values to normalize against via the UI.
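As a rough, standalone illustration of that scoring math (toy numbers, not engine code): raw scores are normalized to 0..1 across the tested items using the observed (or UI-specified) min/max, then multiplied by the test’s weight before being summed per item.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

int main()
{
	// Toy example: raw distance scores for three EQS items and a test weight of 2.
	std::vector<float> RawScores = {200.f, 450.f, 700.f};
	const float TestWeight = 2.f;

	const float MinScore = *std::min_element(RawScores.begin(), RawScores.end());
	const float MaxScore = *std::max_element(RawScores.begin(), RawScores.end());
	const float Range = std::max(MaxScore - MinScore, 1e-6f); // avoid divide-by-zero

	for (float Raw : RawScores)
	{
		const float Normalized = (Raw - MinScore) / Range; // 0..1 within this test
		const float Weighted = Normalized * TestWeight;    // contribution to the item's total
		std::printf("raw=%.0f  normalized=%.2f  weighted=%.2f\n", Raw, Normalized, Weighted);
	}
	return 0;
}
```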

That streaming time is totally jank.

[QUESTION] What’s the best way to run different tasks in the behavior tree besides using bools? Any future plans to make it easier so we don’t have to have 500 bools?

[QUESTION] Is there any way to directly tell a behavior tree to run a specific composite and/or its task(s)? If not, any future plans on making that possible? Because the current way with bools is pretty stupid, no offense.

The way I did it was to set a bool to true when the player is seen, add a Retriggerable Delay of X time behind that node, and then add another Set bool behind that which sets the “see player” bool back to false.