[UE5] Unreal Engine Support for Machine Learning

I’m wondering if there is documentation for using the “Unreal Engine Support for Machine Learning” plugin inside Unreal 5.

1 Like

I haven’t seen any documentation yet, only this snippet from the release notes. I’m really curious to see how it could be used, as it opens up a whole lot of new possibilities.

3 Likes

Very cool. Hopefully it can make use of matrix accelerators such as tensor cores.

Is this NNI tool system compatible with building a neural network in C++ and then referencing it from Unreal to pull data, like the variables I use on my player character to track things such as movement speed? Could I then train that model, feed the data back to the neural network, and have it learn my player’s movements? Is this something NNI can help with? And would it be more performant on tensor cores?

2 Likes

Is this experimental machine learning plugin the only option at the moment for creating ML-based AI? I would very much love to have some way to utilize ML models for AI behavior, is that even something that currently exists inside Unreal Engine? I just started using Unreal Engine a few months ago, and I am still learning the ins and outs. I have experience using ML models and I would really like to combine ML models with AI behavior (like having ML models inside the behavior tree).

Is that even possible at the moment? Thank you!

Hello everyone,

“Unreal Support for Machine Learning” has become “ML Adapter” (MLAdapter | Unreal Engine Documentation) in UE5.1. You should be able to find it in the plugins browser by searching for “ML”. ML Adapter has support for NNI which will allow you to use previously trained models at game-time for inference.

ML Adapter is still a nascent plugin. It likely contains a myriad of bugs, and its API could change drastically in future versions. That aside, ML Adapter works, and I have been able to create some cool projects using it. You can define agents/sensors/actuators and train them via the Python API. Once you are happy with the results, you can export the model via ONNX, import it into the editor, and run it via NNI.

If you have more questions about the code, feel free to reach out in more threads here or at ml-adapter@epicgames.com and I will try to get back to you soon.

Thanks!

1 Like

Any chance we can get a sample project, pretty pretty please ?

EDIT: As soon as I posted this question I had a nagging feeling like “why are you being lazy, just read the source code, there’s definitely some helpful comments in there, there always are” and yes, it’s well explained here:
MLAdapter github
Python examples
In the plugin source code there are also examples of how to set up the server

1 Like

So weird thing:

  1. I set the whole thing up
  2. Trained my model
  3. Saved it from Python, like so:

```python
import os

import onnx
import tensorflow as tf
import tf2onnx

input_signature = [tf.TensorSpec([1, 3], tf.float32, name='input')]
onnx_model, _ = tf2onnx.convert.from_keras(agent.actor, input_signature, opset=13)
onnx.save(onnx_model, os.path.join(base_path(), "tmp\\ddpg\\ue_model.onnx"))
```

  4. Imported the ue_model.onnx in Unreal 5.1; it works fine, and the model behaves as it was trained
  5. Then I close the editor, and when I try to reopen it I get a crash with the attached call stack:
    ONNX_Callstack.txt (5.8 KB)
  6. If I remove the .onnx file from the Content folder, the editor works fine and I can repeat steps 4–5

I do not know yet if it works in a packaged build, or on all platforms, hopefully. :slight_smile:
Very interesting and promising so far, @Deathcalibur

Thanks for reporting the issue and awesome work figuring out how to use the plugin from the minimal documentation.

That error is very strange, as I have not seen anything like it and the call stack isn’t particularly enlightening. I have always used PyTorch in my testing, which shouldn’t make a difference AFAIK, but maybe there is something strange in the ONNX file (?). Weird that it would work once though… hmm.

FWIW a lot of things are in-flight right now and it is increasingly likely ML Adapter will be sunset later this year in favor of a new plugin we are intending to release. I intend to do a blog post or something once that new plugin is ready.

2 Likes

Could you please share the source code that you have made?

Hey @AAbdelkader92,
This is the C++ code: MLAgentDebug.rar (7.2 KB)

You need to enable the MLAdapter and the NeuralNetworkInference plugins and add them to public dependency modules:
PublicDependencyModuleNames.AddRange(new string[] { "Core", "CoreUObject", "Engine", "InputCore" , "MLAdapter", "NeuralNetworkInference" });

In the .rar, I added the GameMode, Agent, Sensor and Actuator classes. The basic idea is that an Agent has Sensors, which it uses to get information about the environment, and Actuators, which it uses to apply actions to the avatar.
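To make the idea concrete, here is a minimal Python sketch of that agent/sensor/actuator pattern. All class and method names here are illustrative inventions, not the actual MLAdapter API (which is C++ plus its own Python bindings):

```python
class Sensor:
    """Reads some aspect of the environment into a flat list of floats."""
    def observe(self, world):
        raise NotImplementedError

class PositionSensor(Sensor):
    def observe(self, world):
        return [world["x"], world["y"]]

class Actuator:
    """Applies an action vector to the avatar."""
    def apply(self, world, action):
        raise NotImplementedError

class MoveActuator(Actuator):
    def apply(self, world, action):
        world["x"] += action[0]
        world["y"] += action[1]

class Agent:
    """Owns sensors (to build observations) and actuators (to apply actions)."""
    def __init__(self, sensors, actuators):
        self.sensors = sensors
        self.actuators = actuators

    def observations(self, world):
        # Concatenate every sensor's reading into one observation vector
        obs = []
        for s in self.sensors:
            obs.extend(s.observe(world))
        return obs

    def act(self, world, action):
        for a in self.actuators:
            a.apply(world, action)
```

The training loop then only ever talks to the Agent: it reads `observations(...)`, feeds them to the policy, and passes the policy’s output to `act(...)`.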

In the GameMode, the function we need is:
virtual void ResetLevel() override;
The Python environment will use it to reset the simulation. Python calls to Unreal will instantiate the Agent, Sensor and Actuator classes, but they will not spawn the avatar, so I spawn the avatar inside that function. In my case, the avatar is a Blueprint actor with two static meshes, one kinematic and one dynamic, connected by a UPhysicsConstraintComponent: a pendulum.

This is the python project (I used pyCharm): RLLecture.rar (11.8 MB)

The MLAdapter plugin comes with a python class called ActionRPG, located here (in my case):
D:\EpicGames\UE_5.1\Engine\Plugins\AI\MLAdapter\Source\python\unreal\mladapter\envs\action_rpg.py
I used that as a basis, duplicated it, changed some variables for my test, created this:
ue_debug.py (1.5 KB)
Notice the def default_agent_config() function inside this class.
Then, you need to edit D:\EpicGames\UE_5.1\Engine\Plugins\AI\MLAdapter\Source\python\unreal\mladapter\envs\__init__.py to add the new class. Mine looks like this:
__init__.py (2.3 KB)

To start training, hit Play in the editor, and after that, run main_ue.py.

After training, import the .onnx file into Unreal, update the path in the UMLDebugAgent::PostInitProperties() function, and keep in mind the bug I described in my previous post. Set the AddAgents bool in the GameMode and hit Play. The agent should behave as it was trained.

I didn’t spend too much time training it; this is what I got:
UE_Pendulum
For some reason, when it reached the top, it would swing right and repeat. :smiley:

My conclusion is that this is fine for small agents, but after the pendulum test I wanted to create a more complex convolutional neural net to use as a car driver. The problem is that Python has got to be the slowest of them all, and 2 FPS while training something like that is just not practical. @Deathcalibur mentioned, I think, something about moving the whole thing to C++ in the next iteration. No more Python, which would obviously improve training time. It also sounds like a lot of work to make the same functionality available, because there are a ton of third-party libraries available for Python right now. :slight_smile:

2 Likes

I encountered the same problem as you when attempting to import a pretrained .onnx model from the ONNX Model Zoo.

After much investigation, I discovered that some of the data types and basic functions are not yet implemented in the NNI plugin, which is the source of the problem.

This is my opinion.

@AAbdelkader92 I decided to try with PyTorch, like Deathcalibur suggested, and I have no problems. That bug is not present when I save my model from PyTorch as described here.

1 Like

@Deathcalibur I just discovered that the NNI plugin is Windows, Linux and Mac only. I was working on a Nintendo Switch demo… :frowning:
Do you know if and when other platforms will be added? The replacement for MLAdapter that you are working on will probably also be using NNI, right? So is it safe to assume that other platforms are definitely coming to NNI…?

EDIT: After reading more, I realize my question might not make sense… MLAdapter is just for training and is only supposed to work in the editor, but we need NNI for packaged builds.

1 Like

@Titirez

Hello,

Thanks for your patience! I am happy to share that we have pushed out the very first version of a new plugin, Learning Agents! This plugin is similar to ML Adapter but flips the design on its head a bit: instead of the python training process “controlling” Unreal, the Unreal process is in charge and controls the python one. You can learn a little more about it here: Learning Agents Introduction

We believe this design is much better along several axes, including runtime performance and flexibility for future growth, and it fits many more use cases. For example, it’s much easier to replace a module in a traditional AI behavior tree with a model created with Learning Agents, whereas ML Adapter would encourage you to replace your whole AI altogether.

The current design’s foundations are hopefully relatively stable, but the plugin is still experimental, so some breaking changes may be needed. We mostly intend to add new observations and actions to hit common use cases, but AddFloatObservation and AddFloatAction can be used for almost anything if you’re willing to do more preprocessing work yourself. We are also currently working on adding comments throughout the codebase, as well as a developer course which will get you familiar with the plugin and walk you through an example of building and training your first agent.
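To illustrate the kind of preprocessing a generic float observation can cover, here is a small sketch in plain Python. The helper names are hypothetical (this is not the Learning Agents API): the idea is just to flatten a mixed game state, normalizing continuous values and one-hot encoding discrete ones, into the float vector you would hand to a generic float observation:

```python
def one_hot(index, size):
    """Encode a discrete value (e.g. an enemy type) as a one-hot float vector."""
    vec = [0.0] * size
    vec[index] = 1.0
    return vec

def encode_observation(speed, max_speed, enemy_type, num_enemy_types):
    """Flatten a mixed game state into a list of floats suitable for a
    generic float-observation channel (illustrative, not the actual API)."""
    obs = [speed / max_speed]                 # normalize continuous value to [0, 1]
    obs += one_hot(enemy_type, num_enemy_types)  # encode discrete value
    return obs
```

The inverse applies on the action side: a generic float action can carry, say, a steering value in [-1, 1] that your own code maps back onto game input.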

One other note is that the current training and neural network support is relatively limited. We currently are supporting “vanilla” feed forward models and a PPO RL implementation, as well as a basic behavior cloning imitation model. We intend to expand these options greatly while the plugin is in the experimental state.

Please check it out if you are so inclined and feel free to contact us here on the forums or you can reach out to learning-agents@epicgames.com. Any feedback is welcome and you can have a huge impact on the future of ML in Unreal.

Thanks!

2 Likes

@Deathcalibur, I will definitely try Learning Agents, thank you.
Also, I would like to take back what I said about Python being too slow with MLAdapter to train a convolutional neural network to drive a car. My algorithm was inefficient, but I kept at it, and after approximately 3 hours of training I got this:

6 Likes

Hi @Deathcalibur, the Learning Agents plugin looks very promising! Do you have any plans to enable native C++ training in games, since PyTorch/TensorFlow already have C++ libraries? If this is possible, then we could train AIs while playing the game, instead of training AIs offline for later gameplay or game development.

As @Obiwahn89 said, neural network inference already opens up a lot of new possibilities, and NN training in gameplay can take it further.

1 Like

Hello NeuralNotwork,

Yes, online training is definitely something we would like to support in the future. It’s actually part of machine learning that I am most excited about for games; however, we don’t currently have a timeline around when we will be able to get it added.

If you want to play around with game design involving online ML, it actually works right now in the editor for both RL & IL (but you’ll be training in a separate Python process running concurrently with Unreal). You just won’t be able to package this up into a shipping build yet.

Brendan

Thanks, @Deathcalibur, for the quick response! Yeah, I understand that online training with a separate Python process is possible, but I’m really looking forward to the integration of in-game model training.

I saw there are already people trying to use libtorch (the PyTorch C++ version) with UE, e.g., this post and this post, but without success. Can you estimate the amount of work needed to get this working?

Although it may sound expensive to train neural models while rendering the graphics, let’s think about it this way: what will happen if we create an AI companion with very simple neural networks, but allow them to be trained whenever we play the game?

Another solution is to implement gradient descent manually, which looks most applicable to me. However, as a beginner in game dev with still a lot to learn in this new field, it would be too much work for me to implement in-game model training myself.
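For what it’s worth, the “manual gradient descent” idea can be very small for a very simple model. A sketch in plain Python (everything here is illustrative): one online gradient-descent step for a one-weight linear model under squared-error loss, with the gradients derived by hand:

```python
def sgd_step(w, b, x, target, lr=0.1):
    """One online gradient-descent step for y = w*x + b with squared-error
    loss L = (y - target)^2. Hand-derived gradients:
    dL/dw = 2*(y - target)*x,  dL/db = 2*(y - target)."""
    y = w * x + b
    error = y - target
    w -= lr * 2.0 * error * x
    b -= lr * 2.0 * error
    return w, b

# Train toward y = 2x online, one sample at a time, the way a game loop
# could feed in observations as it runs
w, b = 0.0, 0.0
for _ in range(200):
    for x, t in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]:
        w, b = sgd_step(w, b, x, t, lr=0.05)
```

A per-frame update like this costs almost nothing for tiny models; the real engineering work in an engine integration is elsewhere (batching, checkpointing, stability across sessions).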

Hey NeuralNotwork,

Although it may sound expensive to train neural models while rendering the graphics, let’s think about it this way: what will happen if we create an AI companion with very simple neural networks, but allow them to be trained whenever we play the game?

I am familiar with online training and the cool things we might be able to accomplish with it. Prior to coming to Epic, I started an indie studio and we made a game engine which integrated Libtorch (Pytorch C++) and shipped a game with real-time, online deep learning running on the end user’s device: Human-Like on Steam

Online training is definitely something we would like to get added to Learning Agents, but no timeline on that.

Brendan