A brief intro to Learning Agents: a machine learning plugin for AI bots. Learning Agents allows you to train your NPCs via reinforcement & imitation learning. It aims to be useful in the creation of game-playing agents, physics-based animations, automated QA bots, and much more!
https://dev.epicgames.com/community/learning/tutorials/bZnJ/unreal-engine-learning-agents-5-5
Woohoo! 5.5! Can’t wait to dive in!
I’m editing right now. Give it a couple hours lol.
I decided to publish even though it’s not fully updated, to work around some wonky tutorial editor issues.
The new bring-your-own-algorithm support is about to be clutch!!!
Yes, it’s definitely a good step forward, but it’s not perfect yet. We have some plans to make it better - but your feedback might be even more important. Take a look and let me know what you think.
Maybe we can get a tutorial on it eventually or someone in the community could make one.
I got the Learning to Drive 5.5 fixed up. Still need to test the other ones. Let me know if you find any issues.
Gonna try it on my Mac in the next few days
Just FYI you might have to fiddle with the timeout settings on Mac. Some other users were reporting issues here: Course: Learning Agents (5.5) - #6 by FlameTheory
So in this talk, what is the relevance of the training time stats? The presenter skips over them, but training likely takes hours… Am I missing something?
These are just the time costs of running the training. If the training runs faster per frame, then you are getting more training iterations completed per time, meaning faster convergence.
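As a rough back-of-the-envelope illustration of why per-iteration cost matters (the numbers below are made up, not from the talk):

```python
# Hypothetical illustration: if each training iteration costs less
# wall-clock time, more iterations fit into the same training budget,
# so the policy converges sooner.

def iterations_completed(budget_seconds: float, seconds_per_iteration: float) -> int:
    """How many training iterations fit into a fixed wall-clock budget."""
    return int(budget_seconds // seconds_per_iteration)

budget = 2 * 60 * 60  # a 2-hour training session

slow = iterations_completed(budget, seconds_per_iteration=0.50)
fast = iterations_completed(budget, seconds_per_iteration=0.25)

print(slow)  # 14400 iterations
print(fast)  # 28800 iterations
```

Halving the per-frame training cost doubles the iterations you get out of the same session, which is why those stats matter even when the absolute training run is hours long.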
Excited to try this out! However, the link at the bottom of this page (Learning Agents (5.5) | Tutorial) is broken: Learning Agents (5.5) | Course - has there been any further work in 5.6 or 5.7?
Hey, thanks for pointing out the issue with the links. It looks like there is some issue with the website because opening the links in a new tab works correctly, but left-click opening them has an issue. I reported this to the appropriate team so hopefully they will have a look soon.
Currently the 5.5 tutorial is the most up-to-date. There have been additions to the learning agents API since then, but most of the core functionality has remained the same. In 2024/2025, my job shifted from developing learning agents 100% of the time, to using learning agents on projects at Epic, which has been great because the library has gotten a lot of “battlefield” testing and works quite well. The downside is that I haven’t had as much time to migrate the tutorial with every patch. It’s quite the burdensome process.
EDIT: The issue with links has been fixed. Thanks for pointing that out!
Hey!
What could cause IL to perform worse than RL? I use custom car physics instead of the Chaos system, and the agents struggle big time to adjust to the volatility of the physics. They genuinely don’t seem to grasp how to get out of a donut once they’ve managed to get themselves into that position.
Is it a matter of letting them train for a way longer time or are there settings I can change to combat the sledgehammer approach of pressing buttons when finesse is needed?
Appreciate the help!
IL can only learn to do things you have given it demonstrations of. So you would need example data showing how to get out of a donut. It could also be an issue with the observations, but if RL is working with the same observations, that’s much less likely to be the problem.
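To make the demonstration-coverage point concrete, here’s a minimal behavioral-cloning sketch in plain Python (this is not the Learning Agents API, and the states/actions are invented): the cloned policy can only echo actions for states resembling something in the demo data.

```python
# Minimal behavioral-cloning sketch: imitate the action of the nearest
# demonstrated state. States and demos are hypothetical.

from math import dist

# Demos: (state, action), with state = (speed, yaw_rate) and
# action = steering in [-1, 1]. All demos show normal driving.
demos = [
    ((10.0, 0.0), 0.0),   # cruising straight
    ((10.0, 0.2), -0.3),  # gentle correction for left drift
    ((10.0, -0.2), 0.3),  # gentle correction for right drift
]

def cloned_policy(state):
    """Return the action from the demo whose state is closest."""
    _, action = min(demos, key=lambda d: dist(d[0], state))
    return action

# A state well covered by the demos: the policy gives a sensible answer.
print(cloned_policy((10.0, 0.15)))  # -0.3

# A "stuck in a donut" state (low speed, huge yaw rate) appears in no
# demo, so the policy just returns whatever demo happens to be least
# far away; it has never seen a recovery maneuver and cannot invent one.
print(cloned_policy((1.0, 3.0)))
```

RL, by contrast, gets reward signal in those off-distribution states and can discover a recovery behavior on its own, which is one reason it can outperform IL when the demo data doesn’t cover failure states.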
