I want to train AI players with reinforcement learning. I found two plugins in UE5, ML Adapter and the Python Editor Script Plugin, but neither of them has any documentation or demo.
For ML Adapter, I can only see that it uses an RPC interface to connect to Python. I have no idea how to use RPC; what should I do in the UE5 editor and in Python?
For the Python Editor Script Plugin, is it the same as linking the libtorch library? I have experience with PyTorch, but I don’t know how to use this plugin in UE5 C++ programming.
Reinforcement learning in Unreal Engine 5 (UE5) can be a challenging and complex task, especially when it comes to sparsely documented tools such as the ML Adapter and Python Editor Script plugins. I’ll try to provide some insights and potential approaches for getting started with both.
1. ML Adapter Plugin
The ML Adapter Plugin in UE5 is a new feature designed to integrate machine learning into Unreal Engine workflows, often used to train AI players through reinforcement learning by connecting the engine to external learning environments.
Currently, documentation for ML Adapter is quite sparse, but I can give you some directions based on the purpose of the plugin:
RPC Communication:
RPC (Remote Procedure Call) is a mechanism that lets code on one system invoke functions on another; here, it lets external Python scripts call into UE5.
This means that UE5 acts as the environment and Python acts as the learning side, controlling the agents (AI players) via commands sent over RPC.
You will need to set up both UE5 and Python to communicate with each other via an RPC server. Here’s what you need to do:
In UE5 Editor:
Ensure that the ML Adapter Plugin is enabled via the Plugin Manager.
Once enabled, you will need to create some blueprint classes (or use C++) to set up the RPC interactions.
Typically, ML Adapter will need you to create an interface for agent actions and observations that can be queried from an external agent.
Python Side:
The Python script acts as the reinforcement learning agent, which uses RPC to communicate with UE5.
You can use Python’s socket library to create an RPC-like mechanism or use a framework like ZeroMQ to handle message passing between the Python script and the UE5 editor.
You will essentially be querying the game state, then applying actions back in UE5 and repeating the loop to train the agent.
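That query/act loop can be sketched end to end with Python’s standard-library `xmlrpc` standing in for the actual transport. Everything here is illustrative: the stub server plays the UE5 role over a one-dimensional “world”, and the method names `get_observation`/`apply_action` are invented, not taken from the MLAdapter API.

```python
# Sketch of the observe -> act loop. The stub server below stands in
# for UE5, and the RPC method names are made up for illustration.
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# --- Stand-in for the UE5 side: a tiny RPC server over a 1-D "world" ---
state = {"pos": 0.0}

def get_observation():
    return state["pos"]

def apply_action(delta):
    state["pos"] += delta
    return state["pos"]

server = SimpleXMLRPCServer(("127.0.0.1", 18861), logRequests=False)
server.register_function(get_observation)
server.register_function(apply_action)
threading.Thread(target=server.serve_forever, daemon=True).start()

# --- Agent side: query the state, decide an action, send it back ---
ue5 = ServerProxy("http://127.0.0.1:18861")
for _ in range(5):
    obs = ue5.get_observation()
    action = 1.0 if obs < 3.0 else 0.0   # trivial hand-written "policy"
    ue5.apply_action(action)

print(ue5.get_observation())
```

In a real setup the agent side would feed `obs` into a policy network and the server side would be the engine itself, but the shape of the loop stays the same.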
Since the ML Adapter relies on an RPC mechanism, it’s beneficial to have experience with RPC frameworks or networking in Python. Some popular choices for this are:
ZeroMQ: A messaging library that makes setting up client-server communication relatively easy.
gRPC: More advanced, designed for efficient communication between services. It has both Python and C++ implementations.
You can train the model using frameworks like PyTorch and use RPC calls to apply the trained model back into the UE5 environment.
2. Python Editor Script Plugin
The Python Editor Script Plugin allows for automation and scripting within UE5 using Python, but it serves a different purpose compared to reinforcement learning directly. This plugin is primarily for controlling editor-level tasks, modifying assets, etc., rather than directly integrating reinforcement learning models. Here’s some clarification:
Editor Automation: This plugin is more about automating repetitive tasks in the editor, such as batch renaming assets, procedural generation, and automating setup.
Not Directly Libtorch: The Python Editor Script Plugin does not provide direct functionality akin to linking to the libtorch library. It cannot inherently run PyTorch models in real time during gameplay. You might be able to script asset preparation, set up environments, or configure certain behaviors using Python, but it’s not the same as running PyTorch code during gameplay.
If you want to use PyTorch to train agents in a similar way to libtorch:
You’ll create your training script in Python using PyTorch, and when you’re ready to integrate, export the trained model into a format that UE5 can load.
Libtorch Integration: If you’re experienced in PyTorch, using libtorch is a reasonable approach to run neural networks inside C++ code in UE5. This involves including libtorch in your UE5 C++ project, and then you can use your trained PyTorch models for inference. It requires proper linking of libtorch to your C++ project, which can be somewhat complex, but it provides a direct way to run neural network inference within Unreal.
Suggested Approach to Get Started:
Simplified Testing:
If you are just starting with reinforcement learning in UE5, you might want to consider using OpenAI’s Gym to create a simplified version of your learning environment.
This allows you to test reinforcement learning algorithms in Python, where you have more flexibility.
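Before wiring anything into UE5, it helps to verify the learning code on a toy problem. The sketch below is tabular Q-learning on a five-state chain, standing in for a Gym-style environment; in a real setup, `reset()` and `step()` would call into the engine (or a Gym env) instead of this hand-written stub.

```python
# Tabular Q-learning on a toy 5-state chain (reach the right end for
# reward 1). Purely illustrative; reset()/step() stand in for a Gym
# environment or for RPC calls into UE5.
import random

N_STATES = 5
ACTIONS = (0, 1)        # 0 = move left, 1 = move right

def reset():
    return 0

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1
random.seed(0)

def greedy(s):
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

for _ in range(200):                      # episodes
    s, done = reset(), False
    while not done:
        a = random.choice(ACTIONS) if random.random() < eps else greedy(s)
        s2, r, done = step(s, a)
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The learned greedy policy should move right in every non-terminal state.
print([greedy(s) for s in range(N_STATES - 1)])
```

Once something like this converges on the toy problem, the same agent code can be pointed at the real environment by swapping in the RPC-backed `reset()`/`step()`.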
RPC and ML Adapter Setup:
Explore some basic RPC tutorials to understand how remote calls work.
Create a basic RPC server in Python to interact with Unreal Engine. This means Python will need to request information from Unreal (state of the world) and then send actions back.
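As a first step, a request/response channel can be built with nothing but the standard library. The sketch below sends one JSON message per line over a plain TCP socket, with a stub thread playing the UE5 role; the message fields (`cmd`, `pos`, `move`) are invented for illustration, and ZeroMQ or gRPC would replace this transport in practice.

```python
# Minimal JSON-lines-over-TCP round trip using only the stdlib. The
# server thread below is a stub for UE5; the message fields are made up.
import json
import socket
import threading

HOST, PORT = "127.0.0.1", 18862
srv = socket.create_server((HOST, PORT))   # listen before the client connects

def serve_one_client():
    """Stub UE5 side: answer a state query, acknowledge one action."""
    conn, _ = srv.accept()
    with conn, conn.makefile("rw") as f:
        for line in f:
            msg = json.loads(line)
            if msg["cmd"] == "state":
                f.write(json.dumps({"pos": [1.0, 2.0]}) + "\n")
                f.flush()
            elif msg["cmd"] == "action":
                f.write(json.dumps({"ok": True}) + "\n")
                f.flush()
                break

t = threading.Thread(target=serve_one_client)
t.start()

# Agent side: one request, one reply, per message.
with socket.create_connection((HOST, PORT)) as sock:
    with sock.makefile("rw") as f:
        f.write(json.dumps({"cmd": "state"}) + "\n")
        f.flush()
        obs = json.loads(f.readline())
        f.write(json.dumps({"cmd": "action", "move": [0.5, 0.0]}) + "\n")
        f.flush()
        ack = json.loads(f.readline())

t.join()
srv.close()
print(obs, ack)
```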
Using ML Adapter:
Define your agent in UE5, which should be controlled externally.
You’ll need to create a suitable “action space” and “observation space” in UE5 to represent what the Python agent will use.
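As a sketch of what such spaces might look like on the Python side, a bounded box space can be as simple as the class below. The class and field names are invented for illustration, not an MLAdapter API; they mirror the low/high/shape convention used by Gym spaces.

```python
# Illustrative action/observation space description (invented names,
# loosely following the Gym Box convention of low/high bounds + shape).
from dataclasses import dataclass

@dataclass
class BoxSpace:
    low: float
    high: float
    shape: tuple

    def contains(self, values):
        return len(values) == self.shape[0] and all(self.low <= v <= self.high for v in values)

# e.g. an agent that observes its 3-D position and steers with 2 axes
observation_space = BoxSpace(low=-1000.0, high=1000.0, shape=(3,))
action_space = BoxSpace(low=-1.0, high=1.0, shape=(2,))

print(observation_space.contains([0.0, 12.5, -3.0]))  # True: within bounds
print(action_space.contains([0.5, 2.0]))              # False: 2.0 > high
```

Whatever you expose from UE5 (positions, ray traces, health values, movement axes) should be flattened into this kind of fixed-size, bounded description so the Python side knows what to expect every step.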
Python Editor Script Plugin:
Use this plugin for procedural generation and simplifying repetitive tasks in the editor.
Consider using it to dynamically generate levels, reset environments, or change parameters in UE5 based on Python scripts, which is useful for reinforcement learning environments that require changes per episode.
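A typical editor-automation snippet looks like the following. The `unreal` module exists only inside the editor’s embedded Python interpreter, so the import is guarded here; `unreal.EditorAssetLibrary.list_assets` is a real editor API, but treat the snippet as a sketch rather than a tested script.

```python
# Editor automation sketch. The `unreal` module is only importable inside
# the UE5 editor's embedded Python interpreter, so guard the import.
try:
    import unreal
except ImportError:
    unreal = None

if unreal is not None:
    # Inside the editor: enumerate every asset under /Game, e.g. as the
    # first step of a batch rename or per-episode setup script.
    for asset_path in unreal.EditorAssetLibrary.list_assets("/Game", recursive=True):
        unreal.log(asset_path)
else:
    print("The `unreal` module is only available inside the UE5 editor.")
```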
Libtorch for Real-time Integration:
Once the training in Python yields a good model, convert your PyTorch model into a format suitable for libtorch.
Use C++ in UE5, link the libtorch library, and load your model for inference purposes.
Learning Resources:
Unfortunately, ML Adapter does not have much official documentation, but you can still use general RPC and Python-to-C++ interaction guides to understand how you might interact with UE5.
The Unreal Slackers Discord and Unreal Engine forums can be invaluable resources for connecting with others who have explored similar setups.
You can find examples and demos created by the community where they train agents using UE5 environments, even if they are not directly using ML Adapter or the Python Plugin.
Additional Considerations:
Model Update and Training Loop: Reinforcement learning typically involves frequent updates to the model. Make sure UE5 provides efficient ways to read observations and apply actions, since this exchange happens every step of every episode.
Custom C++ Plugin: It could be worthwhile to consider creating your own C++ plugin if you need more control over the communication between UE5 and Python.
This is a complex setup that involves multiple pieces working in tandem, but starting with simple Python-to-UE5 communication, followed by incrementally adding reinforcement learning logic, is likely the most practical way forward.
(The above is an AI I trained on Unreal Engine APIs - use and refer to at your discretion)