Extending the Learning Agents plugin

Hi,
we’re currently looking into using LA to train FPS-style bots that use raycast information, e.g. a vision grid. Typically I would feed that into a 2D convolutional layer to extract features, but as far as I can see the LA plugin doesn’t let me do that (please advise if I missed something). Hence my question, which isn’t really limited to convolutional layers: which parts of the plugin do I need to touch to expose additional PyTorch functionality?
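
For context, here is roughly the kind of encoder I’d like to plug in. This is just a minimal PyTorch sketch; the grid size, channel count and layer sizes are placeholders for whatever the raycast grid would actually produce:

```python
# Minimal sketch of a 2D conv encoder over a raycast vision grid.
# Grid size and channel count are placeholders.
import torch
import torch.nn as nn

class VisionGridEncoder(nn.Module):
    def __init__(self, in_channels: int = 1, grid_size: int = 16, out_features: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, stride=2, padding=1),
            nn.ELU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ELU(),
            nn.Flatten(),
        )
        # Infer the flattened size once so the linear head matches the grid size.
        with torch.no_grad():
            flat_size = self.conv(torch.zeros(1, in_channels, grid_size, grid_size)).shape[1]
        self.head = nn.Linear(flat_size, out_features)

    def forward(self, grid: torch.Tensor) -> torch.Tensor:
        # grid: (batch, channels, height, width), e.g. raycast hit distances
        return self.head(self.conv(grid))
```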

  • nne_runtime_basic_cpu_pytorch.py seems to handle the conversion between what the engine sends/receives and the PyTorch API (presumably where a custom encoder would have to slot in; see the sketch after this list)
  • LearningObservation.cpp builds the data that is sent to the trainer process
  • LearningAgentsObservations.cpp exposes that to user code
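
To make the question a bit more concrete: on the Python side I would essentially want to route the grid part of the observation through an encoder like the one above and concatenate the result with the remaining flat observations before they reach the existing MLP. The snippet below is pure PyTorch and only meant to illustrate the shape of the change; it assumes the trainer currently consumes a single flat observation vector (which is how it looks to me from nne_runtime_basic_cpu_pytorch.py, but I may be reading that wrong), and the layout of the grid inside that vector is entirely made up:

```python
# Hypothetical wrapper, assuming a single flat observation vector: split off
# the grid portion, encode it with the conv module, then feed the concatenated
# features to an MLP sized for them.
import torch
import torch.nn as nn

class GridAwarePolicy(nn.Module):
    def __init__(self, policy_mlp: nn.Module, grid_encoder: nn.Module,
                 grid_shape=(1, 16, 16)):
        super().__init__()
        self.policy_mlp = policy_mlp      # MLP over the concatenated features
        self.grid_encoder = grid_encoder  # e.g. VisionGridEncoder from above
        self.grid_shape = grid_shape
        self.grid_len = int(torch.tensor(grid_shape).prod())

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Assumed layout: the first grid_len values are the flattened raycast
        # grid, the rest are the usual flat observations.
        grid = obs[:, :self.grid_len].reshape(-1, *self.grid_shape)
        rest = obs[:, self.grid_len:]
        features = torch.cat([self.grid_encoder(grid), rest], dim=-1)
        return self.policy_mlp(features)
```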

Did I miss anything?

(I can of course edit the plugin code, but it would be great if the plugin had some extension points, so that I could make my implementation available to people who don’t fancy recompiling the engine :wink:)
