What is the underlying neural network library used in the plugin of Learning Agents?

The Learning Agents plugin is a great tool for UE users to create AI bots. I would like to know more about the underlying neural network library used in the plugin.

In our team, we use TensorFlow to build neural networks. How should I integrate TF into Learning Agents?


Learning Agents uses PyTorch during training and a custom CPU inference engine (NNERuntimeBasicCpu) at runtime.

The way Learning Agents works, you don’t really need to touch the Python training portion in terms of coding. You mainly build up observations and actions, then configure settings and run training from within the Unreal Editor. The plugin takes care of the Python code for you.

That said, training in UE 5.3 / 5.4 is currently limited to PPO for RL and behavior cloning for IL. You can change these things if you’re willing to edit the Python, but this will make it harder to upgrade when new releases come out in the future.

Are you a game developer or a researcher?


Thanks. I am a researcher working on RL. I am not familiar with UE and I do not have any experience in game development.

I found the related Python files in the Learning Agents directory after compiling the UE source code.

I am going to customize this plugin for my research. After reading the source code of the Learning Agents, I have some questions:

  1. Since there is an AdamOptimizer defined in C++ in the source code, why do you need a Python training file to run the training?
  2. When I click the run button, how does the UE editor launch the Python training code?
  3. In the Learning to Drive example, can I build a .exe client of the game and run the Python code to train the agents?
  4. In the intermediate directory of the Learning to Drive example, I found some trained models saved as .bin files. What is the exact format of these trained models? How can I load them with PyTorch?

Having an optimizer written is just a small part of training. We don’t have automatic differentiation, gradient computation, or any of that stuff working in C++ currently. We could get it working one day but we have a lot of higher priority stuff to work on!

The PyTorch process is launched the first time you call RunTraining(). You can also manually call BeginTraining() if you aren’t using the higher-level RunTraining() function (we have a multi-level API, if that makes sense).

You just cook the executable and can run it headless like so:

YourGame.exe RaceTrack -nullrhi -nosound -log -log=car_learning.log

Where YourGame is the name of the executable, and “RaceTrack” is the name of your level/umap. You probably have to fix some of the paths in “Trainer Path Settings” to match the cooking output folder’s structure.

I’ve done this before and it works well, but it’s unfortunately not very elegant in the 5.3 release.

You can’t realistically/easily load the models into PyTorch. If you want to know the structure, you can find the serialization code in LearningNeuralNetwork.cpp. Your better bet would be to change the PyTorch code and stick a model save call somewhere appropriate, since you’re already going to be hacking on it.
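If you go the route of editing the Python side, a minimal sketch of what that save call could look like follows. Note the module here is a stand-in: the actual policy network lives in the plugin’s training script and will have a different architecture and variable name, so treat every identifier below as an assumption to check against the real code.

```python
import torch
import torch.nn as nn

# Stand-in for the policy network the training script maintains; the real
# nn.Module in the Learning Agents Python code will look different.
policy_network = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))

# Saving the state_dict produces a checkpoint in PyTorch's native format,
# unlike the plugin's own .bin snapshot serialization.
torch.save(policy_network.state_dict(), "policy_checkpoint.pt")

# Reloading later into a module with the same architecture:
restored = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))
restored.load_state_dict(torch.load("policy_checkpoint.pt"))
```

This sidesteps the .bin format entirely: you export from the same process that owns the weights, in a format PyTorch already understands.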

Thanks for the great questions!


Hi, Brendan. Thank you for the helpful responses.

I was trying to compile the demo into a .exe file. The Trainer Path Settings are shown below:

The built .exe file is in F:\Unreal Projects\LearningToDrive2\WindowsPackage0.

However, python.exe is not found.

The command used to launch it:

How can I fix this issue?

I have more questions about the training.

In the Learning to Drive demo, I found that the Python code uses shared memory to gather the training data from the UE client/editor. My questions are:

Q1: How does the policy run inference? Could you please tell me which part of the code I should read? It seems there is a model maintained in the editor, and it is updated from the Python side from time to time.

Q2: The editor or client sends the data to the Python trainer via some API or protocol. Could you tell me where I can find it?

Q3: We could also use a socket to send data from the client to the Python code. How can I set that up in the UE editor?

You need to do something like this:

  • Editor Engine Relative Path: ../../../Engine/
  • Non Editor Engine Relative Path: ../../../../../../../../../Engine/
  • Intermediate Relative Path: ../../../../../../Intermediate/

I’m not sure if full paths will work or not. These intermediate paths work for my particular setup when I cook using the quick launch in the editor and then run the executable from the StagedBuilds folder. If you want to build and package to a folder on, say, your desktop, you will need to adjust these paths. (Eventually these should probably be command line arguments, but they are not right now, sorry.)

Inference runs inside the Unreal process; you can simply call Policy->RunInference(). Training also does its rollouts inside UE to gather experience from the game environment.

Take a look at LearningAgentsTrainer.cpp: ULearningAgentsTrainer::ProcessExperience for more details. Basically when we gather enough experience, this “triggers” the synchronization.

Same as above, look at how ProcessExperience works and dig from there and I think you’ll be able to find what you’re looking for.

Sockets are currently only available in C++ derived classes (unless you expose them to Blueprints yourself). The reason for this is that it’s not a “fully baked” feature yet and something we’ve put together mainly for our own internal usage. At a later point in time, we may support it better.
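If you do end up rolling your own socket transport rather than using the plugin’s internal one, the pattern on the Python trainer side is just a TCP server receiving packed experience data. The sketch below runs both ends in one process for illustration; the message format (four float32 values) is an assumption, not the plugin’s protocol.

```python
import socket
import struct
import threading

received = []

def trainer_server(server):
    # Accept one connection and read a fixed-size observation message.
    conn, _ = server.accept()
    with conn:
        data = b""
        while len(data) < 16:
            chunk = conn.recv(16 - len(data))
            if not chunk:
                break
            data += chunk
        received.append(struct.unpack("4f", data))

# Trainer side: listen on an ephemeral loopback port.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=trainer_server, args=(server,))
t.start()

# "Game client" side: connect and send one observation vector.
client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(struct.pack("4f", 1.0, 2.0, 3.0, 4.0))
client.close()

t.join()
server.close()
```

A real setup would frame variable-length messages (e.g. a length prefix) rather than assuming a fixed 16-byte payload.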

Thanks,
Brendan

Hi, Brendan,

I have been reading the source code and learning UE 5 materials for a few days. Your answers helped me to understand more about UE 5 and the Learning Agents. Sorry for the late response.

I am now moving forward with learning more about UE 5 and the Learning Agents. I may come back for your help if I encounter any problems.

Thank you!

Cheers!


Hi again, Brendan. I have been reading the source code of Learning Agents for a while. Do you know how I can load a pre-trained model into Learning Agents? I did not find the entry point. Could you please help me?

Hi, Brendan, I managed to solve this problem by defining a new function called SetupPolicyWithFile:

	UFUNCTION(BlueprintCallable, Category = "LearningAgents")
	void SetupPolicyWithFile(ULearningAgentsInteractor* InInteractor,
		const FLearningAgentsPolicySettings& PolicySettings,
		ULearningAgentsNeuralNetwork* NeuralNetworkAsset,
		const FFilePath& File);

Then, I can use it in the Blueprint.

You should be able to use the snapshots we have.


Thank you!!!