Learning Agents: PyTorch Flexibility and Non-Policy ONNX Support

Hello, my name is Josh. I am beginning to develop Agents in Unreal.

  1. I have yet to find the Python implementations of the various RL frameworks on GitHub or in UE5 itself. I want to see the conventions, and I want to be able to add my own implementations in PyTorch.

  2. I intend to implement Adversarial Motion Priors in PyTorch, using standard PPO with a reward interpolated with the score of a discriminator trained on an animation dataset to impart a style. I noticed in a different section of the forum that support for discriminators is not explicit yet, but I assumed I could just throw away the discriminator at the end of training for now. I will want to save it later for use in other Torch implementations and in my overall workflow, so any suggestions on how that might be achieved now would be welcome and appreciated.

  3. The animation dataset needs to contain velocity observations as well as positional ones, all relative to the root. I figure I have two ways to achieve this: either use the standard recording implementation, animate the character manually, and then use the recorded observations for training; or extract the joint motion from an animation file and calculate any other needed information, such as linear and angular velocity, in a separate file. I am not sure which approach is more appropriate, nor whether the first one is feasible and leads to accurate observations. I had trouble with Unity's ML-Agents demonstration recorder.

  4. Is it advisable to use the NNE plugin with Learning Agents to save non-policy trained neural networks for use in engine and in game?

  5. Would it be advisable to use the Python/PyTorch plugin as part of the training process for non-fully-connected networks such as CNNs or Transformers?

I am interested almost exclusively in applying ML algorithms to games, so this will be my entire focus. Any advice or other recommendations will be well received and greatly appreciated.

Thank you to anyone who responds.

  1. I have yet to find the Python implementations of the various RL frameworks on GitHub or in UE5 itself. I want to see the conventions, and I want to be able to add my own implementations in PyTorch.

Due to a quirk with how Python is distributed, you have to download the engine (or build from source), and the Python files will be under {InstallDir}\Engine\Plugins\Experimental\LearningAgents\Content\Python. You can't see them on GitHub, which is annoying.

  2. I intend to implement Adversarial Motion Priors in PyTorch, using standard PPO with a reward interpolated with the score of a discriminator trained on an animation dataset to impart a style. I noticed in a different section of the forum that support for discriminators is not explicit yet, but I assumed I could just throw away the discriminator at the end of training for now. I will want to save it later for use in other Torch implementations and in my overall workflow, so any suggestions on how that might be achieved now would be welcome and appreciated.

You’re off the Golden Path with this use case (for now at least). You can probably get it to work by editing the existing Python files that come with the plugin.
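
For reference, here is a minimal PyTorch sketch of the kind of AMP-style reward mixing described above: a least-squares discriminator over state transitions, with the style reward blended into the task reward before PPO sees it. The class and parameter names are illustrative only and are not part of the Learning Agents plugin.

```python
# Minimal AMP-style sketch, independent of Learning Agents. The discriminator
# scores (state, next_state) transitions: reference-animation transitions vs.
# policy-generated ones. All names and shapes here are illustrative assumptions.
import torch
import torch.nn as nn

class TransitionDiscriminator(nn.Module):
    def __init__(self, obs_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s: torch.Tensor, s_next: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([s, s_next], dim=-1)).squeeze(-1)

def style_reward(disc: TransitionDiscriminator,
                 s: torch.Tensor, s_next: torch.Tensor) -> torch.Tensor:
    # Least-squares AMP formulation: map the discriminator score to a bounded reward.
    with torch.no_grad():
        d = disc(s, s_next)
    return torch.clamp(1.0 - 0.25 * (d - 1.0) ** 2, min=0.0)

def combined_reward(task_r: torch.Tensor, style_r: torch.Tensor,
                    w_style: float = 0.5) -> torch.Tensor:
    # Interpolate the task and style rewards before handing them to PPO.
    return (1.0 - w_style) * task_r + w_style * style_r
```

At the end of training you can simply stop using the discriminator, or serialize it separately (e.g. with torch.save, or an ONNX export like the sketch further down) if you want to reuse it in other Torch workflows.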

  3. The animation dataset needs to contain velocity observations as well as positional ones, all relative to the root. I figure I have two ways to achieve this: either use the standard recording implementation, animate the character manually, and then use the recorded observations for training; or extract the joint motion from an animation file and calculate any other needed information, such as linear and angular velocity, in a separate file. I am not sure which approach is more appropriate, nor whether the first one is feasible and leads to accurate observations. I had trouble with Unity's ML-Agents demonstration recorder.

I haven't used Unity's plugin in many years, so I can't really compare how their stuff works to ours. You could probably spawn the actors you care about and record their joints in a relatively empty level; I have done something similar using Learning Agents already. You can use the Array observations in LA 5.4 (available from GitHub for now). The new Interactor API in 5.4 should make this a really clean implementation.
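
As a rough illustration of the second approach (extracting joint motion from an animation file yourself), here is a NumPy sketch that turns sampled world-space joint positions into root-relative positions and finite-difference linear velocities. The array layouts are assumptions, not a plugin API, and angular velocities (which would need the joint rotations as well) are omitted.

```python
# Sketch of deriving root-relative observations from sampled animation data.
# Assumes you have already extracted per-frame world-space joint positions and
# root transforms from the animation; the layouts below are assumptions.
import numpy as np

def root_relative_observations(joint_pos: np.ndarray,  # (frames, joints, 3) world-space positions
                               root_pos: np.ndarray,   # (frames, 3) root position
                               root_rot: np.ndarray,   # (frames, 3, 3) root rotation matrices
                               dt: float):
    # Express each joint position in the root frame: R^T * (p - root).
    root_rot_inv = np.transpose(root_rot, (0, 2, 1))
    rel_pos = np.einsum('fij,fkj->fki', root_rot_inv, joint_pos - root_pos[:, None, :])

    # Finite-difference linear velocities in the root frame.
    rel_vel = np.gradient(rel_pos, dt, axis=0)
    return rel_pos, rel_vel
```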

  4. Is it advisable to use the NNE plugin with Learning Agents to save non-policy trained neural networks for use in engine and in game?

You can use NNE directly if you want to run models that Learning Agents doesn't currently support. You can create a subclass of either the Policy or the Interactor and add some NNE code. I have done this already to run an ONNX model that I trained outside of Learning Agents and imported using the NNE workflow. It works fine with Learning Agents - eventually I hope to provide some wrapper classes to make this a bit less work for users, but that won't come until UE 5.5 at the earliest.
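
If it helps, exporting a trained non-policy network from PyTorch for that NNE import workflow can be as simple as the sketch below; the model, shapes, and file name are placeholders.

```python
# Export a (placeholder) trained PyTorch network to ONNX so it can be imported
# into Unreal via the NNE workflow. Model, shapes, and names are assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 1))
model.eval()

dummy_input = torch.zeros(1, 128)
torch.onnx.export(
    model,
    dummy_input,
    "discriminator.onnx",
    input_names=["obs"],
    output_names=["score"],
    dynamic_axes={"obs": {0: "batch"}},
)
```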

  5. Would it be advisable to use the Python/PyTorch plugin as part of the training process for non-fully-connected networks such as CNNs or Transformers?

We have Attention support in the 5.4 version of Learning Agents, which isn't exactly a Transformer. For CNNs we have nothing yet. You can always use Learning Agents + Unreal to generate data files, train your model outside the editor (using Jupyter or vanilla Python), and then import the model and run it via ONNX (the downside is that you can't ship the game this way, since ONNX is only available at editor time). For shipping, you would have to convert the model to something else.
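
For completeness, here is a generic PyTorch sketch of attention pooling over a variable-size set of entity observations, which is roughly the kind of structure attention support is useful for. This is not the Learning Agents implementation; the names, shapes, and dimensions are assumptions for an outside-the-editor experiment.

```python
# Generic attention-pooling sketch in PyTorch (NOT the Learning Agents code).
# Pools a variable-size set of per-entity observations into a single vector.
import torch
import torch.nn as nn

class EntityAttentionEncoder(nn.Module):
    def __init__(self, entity_dim: int = 32, embed_dim: int = 64, heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(entity_dim, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, heads, batch_first=True)
        self.query = nn.Parameter(torch.zeros(1, 1, embed_dim))  # learned pooling query

    def forward(self, entities: torch.Tensor) -> torch.Tensor:
        # entities: (batch, num_entities, entity_dim) -> pooled: (batch, embed_dim)
        keys = self.embed(entities)
        query = self.query.expand(entities.shape[0], -1, -1)
        pooled, _ = self.attn(query, keys, keys)
        return pooled.squeeze(1)

# Example: a batch of 4 agents, each observing 8 nearby entities with 32 features.
encoder = EntityAttentionEncoder()
print(encoder(torch.randn(4, 8, 32)).shape)  # torch.Size([4, 64])
```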

Thanks for your questions. We intend to revisit the training API as part of 5.5, so your feedback is valuable!

Brendan


Oh wow, thank you so much, that helps a lot. I will hopefully get back to you soon after fiddling around with Learning Agents a bit more. So far it looks really powerful and seems well suited to both my first steps and my grand ambitions, however distant those are.

I've been considering how I can help support Learning Agents' development. At the moment, I think that if I learn the conventions of the PyTorch implementations, I could create some additional architectures, if that would be helpful.

I really appreciate the support, thank you again Brendan!

You’re welcome!

I would take a look at the 5.4 version on https://github.com/EpicGames/UnrealEngine/tree/5.4/Engine/Plugins/Experimental/LearningAgents since you're just getting started, as there are major breaking changes from 5.3 to 5.4.

Brendan

