UE5.1 ML-Adapter: can we have a Hello World?

I am very interested in the push to use Deep Learning architectures in UE. Clearly, the ML Adapter plugin is meant to use UE as an environment (in Reinforcement Learning terms), so that standard RL algorithms can be trained against it.

I would like to know if we could have a very basic program where this is put into action. I believe this would consist of a UE minigame like CartPole with a single agent, a few actuators, and some function to output the game state as a vector of data and/or an image; plus a Jupyter/Python script that uses OpenAI Gym (or Gymnasium) to train an RL algorithm on it. That implies getting the game state, getting the possible actions the agent can take, and being able to “push” an action into UE.
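To make the ask concrete, here is a minimal sketch of the gym-style interface such a UE wrapper would expose. This is not the ML Adapter API; the class name, the toy 1-D dynamics, and the RPC comments are all hypothetical stand-ins for what reset/step would do against a real UE session:

```python
import random

class UnrealEnvSketch:
    """Hypothetical gym-style wrapper around a UE minigame.

    In a real setup, reset()/step() would talk to UE (e.g. over RPC);
    here a toy 1-D CartPole-like simulation stands in for the game.
    """
    ACTIONS = (0, 1)  # push left / push right

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.pos = 0.0

    def reset(self):
        # In UE terms: restart the level, read back the initial game state.
        self.pos = self.rng.uniform(-0.05, 0.05)
        obs = [self.pos]  # "game state as some vector of data"
        return obs, {}

    def step(self, action):
        # In UE terms: "push" the action into the agent's actuator,
        # tick the simulation, then read the new state vector.
        self.pos += -0.1 if action == 0 else 0.1
        obs = [self.pos]
        reward = 1.0
        terminated = abs(self.pos) > 1.0  # agent out of bounds: episode over
        return obs, reward, terminated, False, {}

def run_episode(env, max_steps=100):
    """Roll out one episode with a random policy; returns total reward."""
    obs, _ = env.reset()
    total = 0.0
    for _ in range(max_steps):
        action = env.rng.choice(UnrealEnvSketch.ACTIONS)
        obs, reward, terminated, truncated, _ = env.step(action)
        total += reward
        if terminated or truncated:
            break
    return total
```

The reset/step signatures follow the Gymnasium convention (`reset` returning `(obs, info)`, `step` returning `(obs, reward, terminated, truncated, info)`), so any standard RL library could drive the loop once the toy dynamics are replaced by actual calls into UE.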

This seems to be extremely promising, and I can’t wait to build on it, but I think the documentation is currently too hard to parse for an outsider to get a toy project running.


I was looking around and found this repo: GitHub - xmario3/UE5_ReinforcedLearning

and the associated video: Reinforcement Learning with Unreal Engine 5 and OpenAI Gym (ur10) - YouTube

It seems this is very close to what ML Adapter should do, am I right? With a bit of luck, the plugin should remove a lot of boilerplate code.



Thanks for your interest in ML Adapter. It’s great to hear that people are interested in deep learning in UE!

I haven’t written documentation or provided examples yet because I’m currently exploring a completely different approach to using ML with UE. In the alternative, UE controls the ML training by setting it up and calling it directly, as opposed to ML Adapter, where the external training process drives the training and the interaction with the environment. This newer design feels a lot more natural for actually using ML as a game dev and has better performance, whereas the current approach is probably more natural for ML researchers.

I'm still exploring the design, so no promises about the future.