Use navmesh as observation - Learning Agents 5.4

Hi!

I’m working on implementing agents that can navigate from point A to point B while avoiding obstacles (including non-static ones like crowds of people). So far, I’ve implemented a system capable of obstacle avoidance using Raycast. I’d like to enhance the system by incorporating the navmesh as an observation. Is this possible?

Thank you! :slight_smile:


We haven’t tried that or looked into it yet. What were you hoping to accomplish with it?

Our project’s ultimate objective is to develop an agent capable of navigating urban environments and reaching a specified destination while circumventing obstacles and crowded areas. My idea was that using the nav mesh as an observation could improve performance on this task in complex scenarios.
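To make that idea concrete, here is a rough, untested sketch (in C++ for readability; my project is in Blueprints, where the equivalent "Find Path to Location Synchronously" node exists) of how the next few navmesh path points towards the goal could be gathered and fed in as extra location observations. The function name and the number of points are just placeholders:

```cpp
// Untested sketch: sample the next few navmesh path points towards the goal so they
// can be added as location observations. GatherNavPathObservations and
// NumPointsToObserve are placeholder names, not part of the Learning Agents API.
#include "NavigationSystem.h"
#include "NavigationPath.h"
#include "GameFramework/Pawn.h"

TArray<FVector> GatherNavPathObservations(APawn* Pawn, const FVector& GoalLocation, int32 NumPointsToObserve)
{
    TArray<FVector> Points;

    // Ask the navigation system for a navmesh path from the pawn to the goal.
    UNavigationPath* Path = UNavigationSystemV1::FindPathToLocationSynchronously(
        Pawn->GetWorld(), Pawn->GetActorLocation(), GoalLocation, Pawn);

    if (Path == nullptr || !Path->IsValid())
    {
        return Points;
    }

    // Take the next few path points, expressed relative to the pawn so the
    // observation does not depend on the absolute world position.
    const int32 NumPoints = FMath::Min(NumPointsToObserve, Path->PathPoints.Num());
    for (int32 Index = 0; Index < NumPoints; ++Index)
    {
        Points.Add(Path->PathPoints[Index] - Pawn->GetActorLocation());
    }

    return Points;
}
```

The relative points would then sit alongside the existing goal direction and ray observations.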

Now I am using the following observations and rewards:

Observations: https://blueprintue.com/blueprint/0jzdgbud/

Rewards: https://blueprintue.com/blueprint/olyxk-7z/

Observations:

  1. Goal position, distance, and direction (position and distance are scaled by 5000)
  2. Ray collision distance and location, plus a boolean collision flag
  3. Time and pawn velocity

Rewards (a rough sketch of how these terms could be combined follows the list):

  1. Time penalty
  2. Goal distance reward
  3. Ray collision penalty (based on distance from walls)
  4. Collision penalty
  5. Direction penalty (if there is a large angle between the last input and the forward vector, I apply a penalty because I don’t want the pawn to swing too far left or right)
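For illustration, here is how those five terms might be combined into a single per-step reward. All weights, thresholds, and parameter names below are made up and would need tuning; this is not the exact logic in the blueprint linked above:

```cpp
// Illustrative sketch of combining the five reward terms into one per-step reward.
// All weights, thresholds, and parameter names are made up and would need tuning.
float ComputeStepReward(
    float PrevGoalDistance, float GoalDistance,    // distance to the goal last step / this step (cm)
    float MinRayHitDistance, float MaxRayDistance, // closest ray hit and the ray length (cm)
    bool bCollided,                                // did the pawn physically collide this step
    float TurnAngleDegrees,                        // angle between the last input and the forward vector
    float DeltaTime)
{
    float Reward = 0.0f;

    // 1. Time penalty: small constant cost per second to encourage reaching the goal quickly.
    Reward -= 0.1f * DeltaTime;

    // 2. Goal distance reward: positive when the pawn moved closer to the goal this step.
    Reward += 0.001f * (PrevGoalDistance - GoalDistance);

    // 3. Ray collision penalty: the closer a ray hit, the larger the penalty.
    if (MinRayHitDistance < MaxRayDistance)
    {
        Reward -= 0.05f * (1.0f - MinRayHitDistance / MaxRayDistance);
    }

    // 4. Collision penalty: large cost for actually hitting something.
    if (bCollided)
    {
        Reward -= 1.0f;
    }

    // 5. Direction penalty: discourage sharp swings to the left or right.
    if (TurnAngleDegrees > 45.0f)
    {
        Reward -= 0.01f * (TurnAngleDegrees - 45.0f);
    }

    return Reward;
}
```

Rewarding the change in goal distance rather than the absolute distance gives a per-step progress signal.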

I am also interested in knowing how we can do 3D navigation?

Did you consider using a behavior tree combined with Learning Agents?

Have you considered using the navmesh and normal AI navigation until you reach a point where you actually need the ML, and then flipping your neural network on?
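As a rough, untested sketch of that idea (UpdateHybridNavigation, SwitchRadius, and the RunLearnedPolicy callback are placeholder names, not Learning Agents API): drive with regular navmesh pathfinding while far from the destination, then stop pathing and let the trained policy take over once within range.

```cpp
// Untested sketch: regular navmesh movement while far from the destination, then hand
// control to the learned policy once within range. SwitchRadius and the RunLearnedPolicy
// callback are placeholder names, not Learning Agents API.
#include "AIController.h"
#include "GameFramework/Pawn.h"
#include "Templates/Function.h"

void UpdateHybridNavigation(AAIController* Controller, const FVector& GoalLocation,
                            TFunctionRef<void()> RunLearnedPolicy)
{
    const float SwitchRadius = 2000.0f; // cm; tune per level

    const float DistanceToGoal =
        FVector::Dist(Controller->GetPawn()->GetActorLocation(), GoalLocation);

    if (DistanceToGoal > SwitchRadius)
    {
        // Far away: let standard navmesh pathfinding drive the pawn.
        Controller->MoveToLocation(GoalLocation);
    }
    else
    {
        // Close in: stop regular pathing and let the trained agent take over,
        // e.g. by adding the pawn to the Learning Agents manager.
        Controller->StopMovement();
        RunLearnedPolicy();
    }
}
```

That way the network only has to learn the local avoidance behaviour, while long-range routing stays with the navmesh.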

This is my 2 cents without doing a deep dive into your project.

Thanks!