My learning agents won't stop jumping, how do I fix this?

What is the proper way to implement jumping with machine learning agents?

Every time I give my FPS learning agents the ability to jump, they just keep jumping non-stop. I'm sure I've implemented it incorrectly. I used a bool action that triggers a jump.
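In C++ terms, the perform-actions step boils down to something like this (my setup is actually in Blueprint; the names here are just placeholders):

```cpp
#include "GameFramework/Character.h"

// Rough equivalent of my perform-actions step for the jump action.
// bJumpAction is assumed to be the bool read from the agent's jump action,
// and Agent is the agent's Character pawn.
void ApplyJumpAction(ACharacter* Agent, bool bJumpAction)
{
    if (Agent && bJumpAction)
    {
        Agent->Jump(); // ACharacter::Jump() starts a jump whenever CanJump() allows it
    }
}
```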

I also added a null action that can block the jump action, in hopes that the agents would use it to stop jumping as much, but it didn't work.

This should work, I think. What do you have the action noise set to during inference? Perhaps it's too high.


Thank you for your response. I'm glad to hear that this method of jumping should work, so I can look elsewhere for the issue.

I haven't altered the default value for the action noise, and I can't seem to find where to set it (sorry, I've only been using Learning Agents for about a week now, so I'm still a noob). I found the "Initial Encoded Action Scale" setting but I don't think that's it. I tried calling the "Set Action Noise Scale" node that I found in online documentation, but it's not coming up (even without Context Sensitive checked), and Google is no help at all.

If it's not the action noise, then I'm wondering if maybe my fall-off penalty is making them prefer jumping non-stop just in case they're already headed for a pitfall, even if it only means living half a second longer before receiving the penalty and episode termination. If that's the case, I think I just need to rework my reward values and retrain after a mind-wipe. I also might add overhead obstacles that block agents who are jumping erroneously, because agents get rewards based on how quickly they reach the goal location.
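For context, my reward logic is roughly shaped like the sketch below (placeholder names and values, not my exact graph):

```cpp
// Rough shape of my per-step rewards (names and values are placeholders I'm
// still tuning, not my exact graph).
float ComputeReward(bool bFellOffLevel, bool bReachedGoal, float TimeToGoal, float MaxEpisodeTime)
{
    float Reward = 0.0f;

    if (bFellOffLevel)
    {
        Reward -= 1.0f; // fall-off penalty; the episode also terminates on this condition
    }

    if (bReachedGoal)
    {
        // faster arrival = bigger reward
        Reward += (MaxEpisodeTime - TimeToGoal) / MaxEpisodeTime;
    }

    return Reward;
}
```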

I'll try new reward scales and see if that's on the right track. Thanks again.

Update:

After trying new reward scales, adding overhead obstacles, getting rid of pitfalls entirely (as well as removing the fall-off penalty), and then training for a few hours, the agents are still jumping any chance they get. I also added a jump penalty (a negative reward whenever an agent's Z location is 10 or more above its original Z location), but no change so far, even after resetting all 4 networks and retraining.
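The jump penalty check itself is basically this (placeholder names; StartZ is cached when the episode resets):

```cpp
#include "GameFramework/Character.h"

// Negative reward whenever an agent is 10 or more units above the Z it started
// the episode at. StartZ is cached on episode reset; JumpPenalty is a small
// positive value I'm still tuning.
float ComputeJumpPenalty(const ACharacter* Agent, float StartZ, float JumpPenalty)
{
    const float CurrentZ = Agent->GetActorLocation().Z; // AActor::GetActorLocation()
    return (CurrentZ - StartZ >= 10.0f) ? -JumpPenalty : 0.0f;
}
```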

After some more looking, I still can't find how to adjust the action noise, unfortunately.

I’m gonna keep trying new fixes and post a new update later today.

When you run inference, you provide the action noise scale. Try setting this to 0 to start, and then perhaps increasing it. I'm not sure that this is the issue, but you should see some change, I would imagine.
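Conceptually, the noise scale controls how much random exploration gets added on top of the network's output, so at 0 you see the policy's deterministic behavior. Very roughly (this is not the plugin's actual sampling code, just the idea):

```cpp
// Illustrative only; this is not how Learning Agents actually samples actions.
// The point is that a large noise scale can flip an action the policy rarely
// wants (like jump) to true on a lot of frames, while a scale of 0 shows the
// policy's deterministic choice.
bool SampleBoolAction(float PolicyOutput, float ActionNoiseScale, float GaussianNoise)
{
    // GaussianNoise is assumed to be a sample from N(0, 1)
    const float Noisy = PolicyOutput + ActionNoiseScale * GaussianNoise;
    return Noisy > 0.0f;
}
```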


Oh, I see now. I’m gonna try that when I can get to my computer and I’ll get back to you. Thank you

For some reason, when I set "RunInference" to true by default, it crashes Unreal. I can't figure out what I screwed up here.


I'm using Unreal 5.6, and this is the current setup for my Learning Agents manager, environment, and interactor. I'm pretty sure I've screwed up a lot of stuff here, but I learn as I go, and when I don't have jumping, this setup produces some great results. With jumping, they just never stop trying to jump.

Learning Agents Manager setup:

Environment rewards:

Environment completions:

Environment reset:

Interactor specify observations:

Interactor gather observations (Goal location, direction, and velocity observations, followed by a bunch of ray observations):

Interactor specify actions:

Interactor perform actions:

New discovery: I just found out that, for some reason, when I swap the order of the jump action and the jump null action so that the null action comes before the jump action in the sequence, the agents stop jumping entirely. Jump action followed by jump null action = endless jumping. Jump null action followed by jump action = never jumping.


When the jump action comes before the jump null action in the sequence, the visual logger shows a mix of agents with true and false values for their jump bool action, even though every agent jumps the instant it touches the ground (well, almost instant; sometimes the agents stagger jumps by ~1/4 second, but even after training they never change).


The visual logger shows no sign of the jump action being called when the null action is put before the jump action in the sequence.

I know I've screwed something up, but I really love using the Learning Agents plugin and have tons of ideas for using it in my games, so I'll do whatever it takes to figure this out.

After tons of tinkering and rebuilding, I still can't figure out how to implement jumping with my agents properly. Is there some other action I should use instead of a bool action, maybe?

It's driving me nuts because I can get all sorts of other actions to work properly, but for some strange reason not jump. I thought this was going to be resolved with a quick Google search, but after hours and hours of experimenting and research I still feel just as clueless as I was days ago about what is causing this jump issue. I think I must be missing something obvious, but I don't know where to start. Plus, Run Inference is crashing my project every time I run it. Should I try using an older version of Unreal, like 5.5?

I'm not sure why you have both a start-jump and a stop-jump action. Perhaps try having just the jump action, which returns true = jump this frame, false = don't jump.

BTW, your location scale is only 100, which is about 1 meter. You probably want to change this to 10000, or however big your level is, to push it into the range of [-1, 1].
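Roughly speaking, the scale is what the raw location gets divided by before it goes into the network, so (sketch with placeholder names):

```cpp
#include "CoreMinimal.h"

// Illustrative only: the encoded location observation is roughly Location / Scale.
// With Scale = 100, a point 5000 units from the reference encodes as 50 (far
// outside [-1, 1]); with Scale = 10000 it encodes as 0.5.
FVector EncodeLocationObservation(const FVector& RelativeLocation, float LocationScale)
{
    return RelativeLocation / LocationScale;
}
```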

Hope this helps.


I added the null action to set the bool StopJumping in an effort to stop them from jumping as much, but I'll revert to having just the jump action and try that; maybe it'll work now that I've rebuilt most of the project.

I see; I was wondering if I was goofing up the location scale. Thank you for your help. I'll post an update later.

I tuned the rewards and location scale, so that's looking good ([-1, 1] range), but so far my agents are still jumping non-stop after taking out the null action and the StopJumping branch before the jump action. The null action for jumping wasn't there to begin with; I only put it in to try to give the agents another way to stop jumping, but it didn't do anything (unless I intentionally put it in the wrong order, with the jump null action before the jump action in the sequence, in which case the jump action gets blocked completely).

I've added a bunch of obstacles that block jumpers from progressing towards the goal location (they "clothesline" any agent who is in the air), but the agents just adjust so that they land right as they pass under the wall instead of taking a break from jumping and simply walking under it.

I reset all 4 networks again, so they haven't been training for very long, but it doesn't seem like it's going to work so far. I'll keep training them and see if they learn to stop jumping.

Thank you for helping me; I'm still learning, and I really appreciate your time and patience.

Sadly, the agents still won't stop jumping with the new adjustments, even after 30,000 iterations.

I think the jump bool action is being sampled so frequently that even if it only comes up true on 1/100 of frames, they're still going to jump as soon as they hit the ground (but I have no idea if that's what's actually happening; it's just my speculation).

I still wonder if there's a different way to add jumping to the learning agents' actions that I don't know of but that is obvious to everyone else. Is a bool action what you would use to make FPS learning agents jump, and if not, how would you prefer to implement jumping in an FPS learning agent?

Normally I have the opposite problem, where I can't get my guys to jump, lol. Usually it's such an uncommon action that it isn't getting used when it needs to be.

Have you 100% checked that it's not something in your game code? You probably have by now, but I'm at a loss as to what your issue is.


I replaced the jump bool action with a float action where values >= 2 trigger a jump and anything < 2 stops jumping. This works great! The agents are learning to use their rays to determine when to jump and when not to, and they're training shockingly fast. I'm gonna put pitfalls back in and see how they fare.
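The perform-actions logic for the new float action is basically this (placeholder names, same gameplay calls as before):

```cpp
#include "GameFramework/Character.h"

// Jump driven by a float action instead of a bool action.
// JumpValue is assumed to be the float read from the agent's jump action.
void ApplyJumpAction(ACharacter* Agent, float JumpValue)
{
    if (!Agent)
    {
        return;
    }

    if (JumpValue >= 2.0f)
    {
        Agent->Jump();        // ACharacter::Jump()
    }
    else
    {
        Agent->StopJumping(); // ACharacter::StopJumping()
    }
}
```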

Thank you so much for your help. I've learned a lot about this subject from your tutorials, and now even more from your one-on-one help, which is hugely appreciated. I love Learning Agents and I can't wait to use them in my games. Keep up the great work!