Training Livestream - Getting Started with AI - Jan 31 - Live from Epic HQ

Normally they upload it to their Unreal Engine channel on YouTube.

Any chance we’ll see character interaction with AI (running from the player, running toward the player, taking cover when it sees the player, etc.) in this stream? So excited you guys are doing this!

will mostly be there for the Q&A portion of the stream. We’ll try to keep him on as long as we can.

Yep, that will be covered :slight_smile:
Edit: forgot, we won’t be covering avoidance, because that is a whole stream itself.

Fantastic read! I won’t be covering adaptive learning in this stream. That would take a bit to get to and I just want to focus this one on beginners. I will discuss how and why to “design for your game” as opposed to just making a “really smart” AI.

Sweet, I’d like to see how that works in Unreal Engine 4 for the app games and VR games I’m trying to design.

As has been in Fortnite/Paragon game support land forever, my question is: when are these features going to be ported to mainline? :open_mouth:

More specifically, if/when/where for the below:
- Hotspots
- Physical movement model
- Navmesh on moving platforms
- RVO 2.0 and/or Detour crowd behaviors ported
- AI messages refactor
- Gameplay tasks brought out of experimental to replace pawn actions
- Different agents on a single navmesh
- Comment support in the BT editor beyond copying comments from the BT graph into the editor
- More advanced navlink features: filtering agents, built-in events, bigger-than-edge connections

…to name a few.

Hi, will there be at least some Paper2D tips for AI?

  1. What would it take to get navmeshes on moving platforms?
  2. What’s the status on Gameplay Tasks & Pawn Actions?
  3. What are the plans for AI in UE4 going forward?

This is really fast, I can’t understand anything! :confused:

Utility AI is a newer approach; any idea if UE4 is working to implement this approach with, or in place of, Behavior Trees? [Example: weights are assigned to a series of values or states, the math compares the resulting scores, and the preferred action is returned based on the given weights.]
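For anyone unfamiliar with the idea, here is a tiny, engine-agnostic C++ illustration of the utility-scoring approach described in the question; the action names, inputs, and weights are all made up for the example.

```cpp
// Minimal utility-AI scoring sketch: each candidate action gets a weighted
// score from normalized inputs, and the highest-scoring action is chosen.
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct UtilityAction
{
    std::string Name;
    float Score = 0.f;
};

UtilityAction PickAction(float Health01, float Ammo01, float DistanceToEnemy01)
{
    // Hand-tuned example weights; a real system would data-drive these.
    std::vector<UtilityAction> Actions = {
        {"Attack", 0.6f * Ammo01 + 0.4f * (1.f - DistanceToEnemy01)},
        {"Flee",   0.8f * (1.f - Health01)},
        {"Reload", 0.9f * (1.f - Ammo01)},
    };

    return *std::max_element(Actions.begin(), Actions.end(),
        [](const UtilityAction& A, const UtilityAction& B) { return A.Score < B.Score; });
}

int main()
{
    // Low health, half ammo, enemy fairly close -> "Flee" scores highest.
    std::cout << PickAction(0.2f, 0.5f, 0.3f).Name << "\n";
}
```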

Following up on question about nav edges.

When I use navmesh->GetPolyEdges(polys[v], ed); for each poly, it gets only the inside edges (edges that have a poly on both sides), not the outside edges of the navmesh.
I only get the blue edges, not the pink ones (I had to find those myself).

http://wlosok.cz/files/images/Capture.PNG

Am I doing something wrong?

I guess this function is misnamed - it should be “GetPortalEdges” since it retrieves only the edges that are traversable. I’ll make a note of it.

Is there a function to get all edges then? I already got what I wanted, but it’s super messy. I was just wondering if there was an easier way to get all edges (or preferably just the outside ones).
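There doesn’t seem to be a single built-in call for this, so here is a rough C++ sketch of the kind of workaround discussed above, using ARecastNavMesh functions (GetNavMeshTilesCount, GetPolysInTile, GetPolyVerts, GetPolyEdges). The midpoint-matching tolerance and the overall boundary-detection approach are assumptions, not an official API, and include paths vary by engine version.

```cpp
// Rough sketch: collect boundary (outside) edges of a recast navmesh by taking
// each poly's vertex ring and dropping the edges that GetPolyEdges() reports as
// portals (i.e. shared with a neighbour).
#include "NavMesh/RecastNavMesh.h"

void CollectBoundaryEdges(const ARecastNavMesh* NavMesh, TArray<FVector>& OutEdgePoints)
{
    if (NavMesh == nullptr)
    {
        return;
    }

    const int32 NumTiles = NavMesh->GetNavMeshTilesCount();
    for (int32 TileIndex = 0; TileIndex < NumTiles; ++TileIndex)
    {
        TArray<FNavPoly> Polys;
        NavMesh->GetPolysInTile(TileIndex, Polys);

        for (const FNavPoly& Poly : Polys)
        {
            TArray<FVector> Verts;
            NavMesh->GetPolyVerts(Poly.Ref, Verts);

            TArray<FNavigationPortalEdge> PortalEdges;
            NavMesh->GetPolyEdges(Poly.Ref, PortalEdges);

            // Walk the vertex ring; an edge whose midpoint matches no portal
            // midpoint has no neighbouring poly, i.e. it lies on the boundary.
            for (int32 i = 0; i < Verts.Num(); ++i)
            {
                const FVector A = Verts[i];
                const FVector B = Verts[(i + 1) % Verts.Num()];
                const FVector Mid = (A + B) * 0.5f;

                const bool bIsPortal = PortalEdges.ContainsByPredicate(
                    [&Mid](const FNavigationPortalEdge& Edge)
                    {
                        const FVector PortalMid = (Edge.Left + Edge.Right) * 0.5f;
                        return FVector::DistSquared(PortalMid, Mid) < 25.f; // tolerance is a guess
                    });

                if (!bIsPortal)
                {
                    OutEdgePoints.Add(A);
                    OutEdgePoints.Add(B);
                }
            }
        }
    }
}
```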

When is this going to be available on YouTube?

Are project files going to be released?

Thanks

Responses to more questions from the livestream from :

I’d definitely try to base such a solution on navmesh, preferably a static one so that information can be prebuilt with the level.

All of the AI roadmap is currently on hold due to other tasks assigned to the (dispersed) AI team.

Our navmesh implementation simply doesn’t support it.

The easiest approach is to manually mark up the navmesh with navlinks (for example using the NavLinkProxy), create a jump area for it, and have the PathFollowingComponent handle that. There’s a tutorial in the docs on how to do it.
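For reference, here is a minimal C++ sketch of that NavLinkProxy approach. AMyJumpLink is an assumed subclass name, the smart link on the placed actor is assumed to be enabled/relevant, and the launch math is purely illustrative rather than a recommended jump implementation.

```cpp
// MyJumpLink.h (hypothetical file): when a path-following agent reaches the
// smart link, launch it toward the link destination, then resume the path.
#include "CoreMinimal.h"
#include "Navigation/NavLinkProxy.h"
#include "GameFramework/Character.h"
#include "MyJumpLink.generated.h"

UCLASS()
class AMyJumpLink : public ANavLinkProxy
{
    GENERATED_BODY()

public:
    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        // Fires when a path-following agent reaches this smart link.
        OnSmartLinkReached.AddDynamic(this, &AMyJumpLink::HandleSmartLinkReached);
    }

    UFUNCTION()
    void HandleSmartLinkReached(AActor* MovingActor, const FVector& DestinationPoint)
    {
        if (ACharacter* Character = Cast<ACharacter>(MovingActor))
        {
            // Illustrative "jump": push the agent toward the link's destination.
            FVector LaunchVelocity = DestinationPoint - Character->GetActorLocation();
            LaunchVelocity.Z = 600.f; // assumed jump impulse, tune per game
            Character->LaunchCharacter(LaunchVelocity, true, true);

            // Let the PathFollowingComponent continue with the rest of the path.
            ResumePathFollowing(MovingActor);
        }
    }
};
```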

You can do most of this stuff in pure BP, but at some level of complexity it will become a maintenance nightmare.

It works :smiley: check out Paragon bots

Multiple inheritance requires more caution, special-case handling, conflict resolution, etc., and practice shows you can usually do without it. Even UE4 is using single inheritance! (not counting interfaces)

It’s there, just set bUseAccelerationForPaths in your movement component to true. You might need to play with other params as well.
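If it helps, a minimal sketch of that tip, assuming you route your character through a custom movement component subclass so the flag (it lives on UNavMovementComponent and may not be publicly settable in every engine version) can be set in the constructor; the class and file names are made up.

```cpp
// MyCharacterMovementComponent.h (hypothetical file): enable acceleration-driven
// path following by default for this movement component.
#include "CoreMinimal.h"
#include "GameFramework/CharacterMovementComponent.h"
#include "MyCharacterMovementComponent.generated.h"

UCLASS()
class UMyCharacterMovementComponent : public UCharacterMovementComponent
{
    GENERATED_BODY()

public:
    UMyCharacterMovementComponent()
    {
        // Path following drives the pawn via acceleration instead of setting
        // velocity directly, so braking/acceleration settings are respected.
        bUseAccelerationForPaths = true;
    }
};
```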

Nope.

Just saw the stream and learned a lot from you guys, and had a lot of fun too… It was fun to see you getting mad at each other =P… Sometimes things would go off-script, but I liked that; there were some useful unscripted tips, and I didn’t mind that you took more time, as it offers an insight into how the experts do things and keep everything neat… Thank you guys.

Thanks for watching! and I have a long history of giving each other a hard time for laughs. We have a lot of fun! He’s always super informative.

Could I nudge in a question real quick? How can I change my AIPerception at runtime, let’s say change the line-of-sight radius or angle? Is it possible in Blueprints? If so, how?
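Not an official answer, but here is one way to do it from C++. The helper function below is hypothetical; it assumes the controller already has a UAISenseConfig_Sight configured, and the exact function names should be checked against your engine version.

```cpp
// Sketch: tweak the sight sense of an AIController's perception component at
// runtime, then ask the perception system to pick up the new values.
#include "AIController.h"
#include "Perception/AIPerceptionComponent.h"
#include "Perception/AISense_Sight.h"
#include "Perception/AISenseConfig_Sight.h"

void UpdateSightConfig(AAIController* Controller, float NewRadius, float NewAngleDegrees)
{
    UAIPerceptionComponent* Perception = Controller ? Controller->GetPerceptionComponent() : nullptr;
    if (Perception == nullptr)
    {
        return;
    }

    // Find the existing sight config on the perception component.
    UAISenseConfig* Config = Perception->GetSenseConfig(UAISense::GetSenseID<UAISense_Sight>());
    if (UAISenseConfig_Sight* SightConfig = Cast<UAISenseConfig_Sight>(Config))
    {
        SightConfig->SightRadius = NewRadius;
        SightConfig->LoseSightRadius = NewRadius + 200.f; // arbitrary buffer
        SightConfig->PeripheralVisionAngleDegrees = NewAngleDegrees;

        // Re-register the listener so the sense picks up the changed config.
        Perception->RequestStimuliListenerUpdate();
    }
}
```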

Thank you so much, this really helped me in my project. I’m fairly new to Unreal.

How can I specify different animations according to different usable objects?

What a great tutorial! Where can we get more on the debug features? Also, I think there is a gap in the tutorials: I need basic information on how to save game data. Say, can I save and reload the Blackboard data for each individual AI? Perhaps one AI character could pass on knowledge to the player, say the AI could hide or find things as the game goes along. So would it make sense to have an inventory-like structure for objects/places/containers, with information that can be “compiled” and exchanged over time?

But in general you would want to save the full range of game-state information, say where all Actors should spawn at reload, and how we select and initialize values at load time. I think the Game Framework tutorial (https://www.youtube.com/watch?v=0LG4hiisflg) touches upon this, but there should be some basic/general UE4 design to handle such challenges?

Good question; you can do this in a number of ways. In general, the “to be used” object would need to hold some values that the player/AI can use… That is, all objects (actors) can hold a move-to-location… and possibly a list of possible use-actions.

Say you have a door, a chair, and a bed: all will hold a “move-to-location” where you can play an open/unlock, sit-down, or lie-down animation (and the reverse)… In general your special animation would be ‘implemented’ there: when you press ‘E’, your AI/player will need to get some parameters, and from those it will play the correct animation. (All actors hold a move-to-location, but if that is zero… you would not want/need to play any move-to animation.)

Now add to this that the lock/unlock action could have a target location, so when your AI reaches out to this actor, the hand will actually hit some point in space. While the move-to-location holds your natural operating position, the operate-target-point will be the lock position (say doors are different; having this lets you operate a range of other targets using the same animation, given that you can actually make an operate-target animation, probably a blend-space animation that takes the operate-target-point as a parameter?).

From this you can derive a range of targeted animations that will invite you to develop “actor objects” that can be “used” in different ways. Say you could invent a new set of objects that can be “pushed”… and for these you would play the push animation rather than the unlock animation (that is, if the AI/player does actually implement that action/animation). A clever AI would be able to play these use-actions randomly… say your NPC AI could randomly sit/open/lie-down/get-up etc., and find food/eat when hungry!

When designing that, you would like to plan a bit… say, what value can actor objects provide: “rest”, “energy”, “access”… and from that you will enable a range of simple options that will make your NPC look clever (say, can the NPC find a key to unlock the door?). However, if you go that way you would also want the player to actually notice the AI difference; often random or even pre-planned behavior will be enough. (If you can order the NPC to search for a key, rest when sleepy, or find food when hungry… that would be cool?)
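As a loose C++ sketch of the “usable object” idea described above: each interactable actor could expose a move-to location, an operate-target point, and the use-actions it supports, so AI/player code can pick the right animation. All of the names here (IUsableObject, EUseAction, and the header file) are invented for illustration.

```cpp
// UsableObject.h (hypothetical): interface that interactable actors implement so
// AI/player code can query where to stand, where to reach, and what to play.
#include "CoreMinimal.h"
#include "UObject/Interface.h"
#include "UsableObject.generated.h"

UENUM(BlueprintType)
enum class EUseAction : uint8
{
    None,
    OpenUnlock,  // doors, chests
    SitDown,     // chairs
    LieDown,     // beds
    Push         // pushable objects
};

UINTERFACE()
class UUsableObject : public UInterface
{
    GENERATED_BODY()
};

class IUsableObject
{
    GENERATED_BODY()

public:
    // Where the agent should stand before the use animation. A zero vector can
    // mean "no move-to needed", as suggested above.
    virtual FVector GetMoveToLocation() const = 0;

    // World-space point the hand should reach (e.g. a door lock), used to drive
    // a targeted animation / blend space.
    virtual FVector GetOperateTargetPoint() const = 0;

    // Actions this object supports; the AI/player picks one and plays the
    // matching animation.
    virtual TArray<EUseAction> GetSupportedActions() const = 0;
};
```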