Hi,
we’re currently looking into using LA to train FPS-style bots that use raycast information, e.g. a vision grid. Typically I would feed that into a 2D convolutional layer to extract features, but as far as I can see, the LA plugin doesn’t let me do that (please advise if I missed something). Hence my question, which isn’t really limited to convolutional layers: which parts do I need to touch to expose additional PyTorch functionality?
nne_runtime_basic_cpu_pytorch.py seems to handle the conversion between what the engine sends/receives and the PyTorch API
LearningObservation.cpp builds the data that is sent to the trainer process
LearningAgentsObservations.cpp exposes that to the user code
Did I miss anything?
(I can of course edit the plugin code, but it would be great if the plugin had some extension points, so I could make my implementation available to people who don’t fancy recompiling the engine.)
Sweet, thanks! Will check it out! I actually had it half-way implemented myself, but I can abandon that now.
A general question remains though. The LA plugin, and especially the BasicCPU runtime module, feel very closed off; everything is hidden away behind interfaces or private utility functions. I can swap out the model used in the Python trainer with my own script and produce an ONNX artefact for NNE, but that somewhat defeats the purpose of the LA plugin, which is to architect models from C++.
If I could wish for something, I’d like a “custom” layer where I can manually specify what’s serialized back and forth, which Python module to use for training, and an evaluation function for the runtime.
What would you like to do that you can’t? LA’s lower-level public API is pretty flexible, and you can mix in extra stuff fairly easily, e.g. calling GatherObservations etc. instead of RunTraining.
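For example, something roughly like this is possible (a sketch only; AMyTrainingManager and the step names other than GatherObservations/RunTraining are placeholders, not confirmed API):

```cpp
// Sketch of a manual training tick. GatherObservations and RunTraining are
// the entry points mentioned above; the remaining step names are assumptions
// about the trainer's public API and may differ between engine versions.
void AMyTrainingManager::Tick(const float DeltaTime)
{
    Super::Tick(DeltaTime);

    // Instead of the all-in-one Trainer->RunTraining(...):
    Interactor->GatherObservations();   // fill the observation buffers
    // ...inspect or inject custom data here before the trainer sees it...
    Trainer->GatherRewards();           // assumed name
    Trainer->GatherCompletions();       // assumed name
    Trainer->ProcessExperience();       // assumed name/signature
}
```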
@_YAF_Lightbringer
Something like that sounds good. We would need to work out the details but it might be feasible.
I just tried the Conv2d layer on ue-main (b1a5dcf49535bbdc388b3f1dd81bd72fe7c37d74), seems to work. Thanks.
There’s a compilation error though that needs to be fixed. In LearningTrainer.cpp:721 ff., the function UE::Learning::Trainer::IsObservationSchemaSubsetCompatible is missing a case for Conv2d. It’s simple enough to add (I can open a PR if that’s easier for you):
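Something along these lines (a sketch of the fix; the enum value and member names are guesses modeled on the neighboring cases in that switch):

```cpp
// Assumed enum value and member names, modeled on the existing cases in
// UE::Learning::Trainer::IsObservationSchemaSubsetCompatible.
case EObservationType::Conv2d:
{
    // Compatible only if both schemas describe the same grid shape.
    return ElementA.Conv2d.Height == ElementB.Conv2d.Height
        && ElementA.Conv2d.Width == ElementB.Conv2d.Width
        && ElementA.Conv2d.Channels == ElementB.Conv2d.Channels;
}
```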
There was also the oddity that my code didn’t run anymore until I set CustomTrainerModulePath in the trainer process settings to an absolute path, despite me not having moved any files around. But that may also be because I’m configuring everything via code and not assets, so I might have missed a new UPROPERTY initializer or something.
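In case anyone hits the same thing, this is roughly the workaround (CustomTrainerModulePath is the property named above; the settings struct name is an assumption, and FPaths is stock UE):

```cpp
// Sketch: when configuring the trainer from code, resolve the module path to
// an absolute one. FLearningAgentsTrainerProcessSettings is an assumed name.
FLearningAgentsTrainerProcessSettings ProcessSettings;
ProcessSettings.CustomTrainerModulePath = FPaths::ConvertRelativePathToFull(
    FPaths::Combine(FPaths::ProjectContentDir(), TEXT("Python")));  // hypothetical location
```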
Hey @Deathcalibur, I was wondering if you could give me a quick tip on how to get the 2D convolution observation working in 5.7: what do we input for the make observation? Totally understand if you’re too busy to answer right now, but I’m excited to try some new things.
For the make Conv2d, you pass in another observation which contains the data you want to convolve. For example, you can use MakeStaticArrayObservation() with a bunch of Raycasts inside. The shape is determined by the FConv2dObservationParams passed during the Specify.
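As a rough sketch of how that fits together (only MakeStaticArrayObservation and FConv2dObservationParams are names confirmed above; the Conv2d Specify/Make function names, signatures, and param fields follow the plugin’s usual Specify*/Make* pattern but are assumptions):

```cpp
// Sketch: a Conv2d observation over a flat raycast grid. Function names other
// than MakeStaticArrayObservation/FConv2dObservationParams are assumptions.

// During SpecifyAgentObservation: describe the grid shape once.
FConv2dObservationParams Conv2dParams;
Conv2dParams.Height = 8;   // assumed field names; rows of the vision grid
Conv2dParams.Width = 8;    // columns of the vision grid

const FLearningAgentsObservationSchemaElement RayElement =
    ULearningAgentsObservations::SpecifyFloatObservation(Schema);

const FLearningAgentsObservationSchemaElement GridElement =
    ULearningAgentsObservations::SpecifyStaticArrayObservation(
        Schema, RayElement, 8 * 8);

OutObservationSchemaElement =
    ULearningAgentsObservations::SpecifyConv2dObservation(   // assumed name
        Schema, GridElement, Conv2dParams);

// During GatherAgentObservation: fill the same structure every frame.
TArray<FLearningAgentsObservationObjectElement> Rays;
for (const float Distance : RaycastDistances)  // hypothetical ray data
{
    Rays.Add(ULearningAgentsObservations::MakeFloatObservation(Object, Distance));
}

OutObservationObjectElement =
    ULearningAgentsObservations::MakeConv2dObservation(      // assumed name
        Object, ULearningAgentsObservations::MakeStaticArrayObservation(Object, Rays));
```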