Hey everyone, I am trying to train an agent to play an endless runner game using the Learning Agents plugin. The setup is simple and follows the same principles as the Learning to Drive tutorial. However, Get Float Action in Set Actions returns predominantly positive values, so my agent steers into the right wall on almost every run, which I currently handle with a completion event. This behaviour seems strange to me: the agent never runs into the left wall, which should be equally likely at the start of a training episode if this were normal exploration.
Am I doing something wrong, or is this expected behaviour?
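For context, here is a rough C++ sketch of what my Set Actions step does (the actual logic lives in the Blueprint linked below). The interactor class, action object, and character class names are placeholders, and the exact Learning Agents signatures may differ from what's shown; this is only meant to illustrate how the float action is read and applied as lateral steering:

```cpp
// Illustrative only: UEndlessRunnerInteractor, LateralAction and ARunnerCharacter
// are placeholder names, and GetFloatAction/GetAgent mirror the Blueprint nodes
// rather than the plugin's exact C++ API.
void UEndlessRunnerInteractor::SetActions(const TArray<int32>& AgentIds)
{
    for (const int32 AgentId : AgentIds)
    {
        // Read the float action for this agent (the "Get Float Action" node).
        const float Steer = LateralAction->GetFloatAction(AgentId);

        // Log the raw value: an untrained policy should give values roughly
        // centred on zero, so a persistent positive bias is worth checking
        // against the action scale and observation normalisation.
        UE_LOG(LogTemp, Log, TEXT("Agent %d steer action: %f"), AgentId, Steer);

        if (ARunnerCharacter* Runner = Cast<ARunnerCharacter>(GetAgent(AgentId)))
        {
            // Clamp before applying so a biased action can't pin the agent to one wall.
            Runner->AddMovementInput(Runner->GetActorRightVector(),
                                     FMath::Clamp(Steer, -1.0f, 1.0f));
        }
    }
}
```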
Blueprint: Learning Interactor Endless Runner (blueprintUE paste, posted anonymously)