Course: Learning Agents (5.5)

Thanks, that works!

For anyone going through this thread: select a Socket Settings node, then in the Details pane on the right, under Default Value > Socket Settings, set Timeout to 30.0.

@Deathcalibur is there a way to access these objects' values at runtime?

What are you trying to accomplish?

I’ll probably be able to help you better if I’m able to understand what you are hoping to do.

I'm basically playing around to assess the capabilities of RL in Unreal, and the goal here was to make a live representation of the neural net learning. I was hoping to get the neuron layers from the settings object, and I was going to look into how to get the weight values next.

EDIT: I'd also be interested to know whether there is a way to package the Python dependencies so the learning process can actually run in a build. If that's not possible, some of the projects I have in mind can just run inside Unreal, or the Python dependencies could be installed on the target machine for the build to use, if that's an option.

EDIT 2: For context, I've been watching RL from afar for 10 years, and your Learning Agents article is what got me to start learning Unreal and RL. Thank you for making this possible.

Awesome. If you read this tutorial page, I think it will make more sense where things currently stand in Learning Agents and how you can run Python in a cooked build.

If you want to do training at game time, meaning as a game feature, it's not well supported at the moment. It should be possible to compile the optimizer and related pieces to run on NNE's IREE runtime, but that is well off the beaten path.

Let me know what you think after you read that.

Thanks!

Hi!

Driving headless with snapshots answers how to run the learning outside of UE and how to retrieve the weight values, thank you.

However, I did not find an alternative in the article to breaking out the variables of the Policy Settings structure. A quick search led me to think that the struct may be missing a Blueprint type declaration in the source code. Does that make sense, and would you know of another way to access these variables' values in Blueprint, per my initial post?

Yeah, I might be missing a tag on the struct that would let you break it apart. I can look into getting that added since it's a one-line change.
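For reference, the kind of one-line change I mean is just adding the Blueprint specifiers to the struct; a minimal sketch with illustrative names (not the actual Learning Agents source):

    // Adding BlueprintType to the USTRUCT macro (plus BlueprintReadWrite on the
    // members you need) is what enables "Break Struct" in Blueprint. The struct
    // and field below are placeholders, not the real Policy Settings struct.
    USTRUCT(BlueprintType)
    struct FMyPolicySettings
    {
        GENERATED_BODY()

        UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Settings")
        TArray<int32> HiddenLayerSizes;
    };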

In the meantime, I guess you could compile from source, or just copy the settings into variables, as impractical as that sounds.

Thanks!

Hello, I just recently downloaded Unreal Engine and am currently learning how to use it.
After completing all the steps of your guide, I ran into a problem with files from the torch Python package. I tried checking the dependencies of the DLL files and also deleted and recreated the project many times, but nothing got rid of this error. Do you know how to fix it?

Hmm, I don’t think I’ve ever seen this exact error, but I’ve seen similar. Can you try this and report back:

  1. Close any open Unreal Editor
  2. Delete the contents of {workspace}\Intermediate\PipInstall
  3. Open the editor again

This will force the Unreal Editor to automatically re-run the pip installer. There's a small chance something failed to download the last time it installed the Python dependencies. I saw a similar issue once, and this clean-and-restart fixed it.

Let me know if you get unblocked or not! Either way is good info for me.

Good luck, and thanks for posting!

Yes, thank you. That error is gone, but I still have some issues with this project. If I have any questions, I will write here.

Now I have the same error again, but deleting and recreating the PipInstall files doesn't work at all this time. What causes these errors in the first place? Even when I send this project to someone else, they get the same error.

Second question: is there a way to get the current number of training iterations?
I want to update the training environment for lap driving once the agents are trained enough, tripping a lap bool in the manager that changes the reset location, a trigger box in the environment, and the reward structure.
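For context, the rough workaround I'm considering is to count my own training steps in the manager as a proxy (a sketch with my own names, nothing from the Learning Agents API):

    // Sketch: bump a counter wherever the manager runs its training step, and
    // flip the lap phase once it passes a threshold. CompletedSteps,
    // LapPhaseThreshold, and bLapPhaseEnabled are my own manager members.
    void AMyTrainingManager::OnTrainingStepRun()
    {
        ++CompletedSteps;
        if (!bLapPhaseEnabled && CompletedSteps >= LapPhaseThreshold)
        {
            bLapPhaseEnabled = true; // swap reset location, trigger box, and rewards
        }
    }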

I'm attempting to run the training with saved snapshots, but the headless training won't load the snapshots properly. It can run with fresh data, and I can run from snapshots in the editor, but even when hardcoding the snapshot paths it won't load them properly when running headless.

I have log lines reporting the fastest lap times; those report 30.46 s in the editor, but I never even get a lap reported in headless.
[image]
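For what it's worth, one thing I'm checking is whether the hardcoded path is even visible to the headless process (a rough sketch; the path and names are placeholders):

    // Sketch: verify the snapshot file exists from the headless process's point
    // of view, to rule out working-directory / relative-path issues.
    const FString SnapshotPath = TEXT("C:/Snapshots/PolicySnapshot.bin"); // placeholder path
    if (!FPaths::FileExists(SnapshotPath))
    {
        UE_LOG(LogTemp, Error, TEXT("Snapshot not found at %s"), *SnapshotPath);
    }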

I ran into an issue with the BP_SportsCarImitationTrainingManager tutorial using UE 5.5. Here is how I got past it in case it helps others.

The Make Imitation Trainer node now has a Communicator pin that is not shown in the tutorial. To create the Communicator node:

  1. Insert a sequence to create a Communicator variable by copying the nodes from BP_SportsCarManager
  2. Change the Trainer File Name from train_ppo to train_behavior_cloning

Hope this helps

That's really annoying. To be honest, I'm not entirely sure what causes this problem, other than that I know it has something to do with the pip installer. Another person at Epic is the expert on the pip installer, and I believe they are already working on updating this entire experience to make it better, so hopefully this issue will be resolved in time. I hope it's something you can work around for the time being; I'm not sure when, or whether, the fix will land in the near future.

@NEmeryPW Let me know if you have any errors or more information. I'm guessing the paths are still an issue. This is not an ideal fix, but you could load the snapshots in the editor and save them into the uassets, then cook and continue training from the uassets. Again, not ideal, but something you could try to keep moving forward. BTW, I noticed you don't have an encoder data asset bound in your image; is that intentional?

@SimonSmoke2Much Thanks, I should get that updated!

Hello Mr. Brendan, I ran into a problem and finally solved it. Here is the issue, in case it is helpful.
In "1. Learning to Drive (5.5)" -> "Implementing the Manager" -> "Begin Play", I copied the Blueprint into my local engine and did almost everything, but there was still a problem: in the "Then 1" branch, the engine said it "cannot find track spline" in BP_SportsCarManager, even though I had specified BP_SportsCarTrackSpline_C in Actor Class. In the end, I removed those nodes and added them again by hand (not copy-paste), and it worked.
I am a rookie with UE5 and my English is poor, but I hope you get the idea. Anyway, thanks for this tutorial!
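In case it is useful to anyone else, this is roughly the equivalent lookup with an explicit check, so a missing spline shows up clearly (all names are illustrative; the tutorial does this in Blueprint):

    #include "Kismet/GameplayStatics.h"

    // TrackSplineClass would be a TSubclassOf<AActor> property pointed at
    // BP_SportsCarTrackSpline in the editor; the function name is a placeholder.
    AActor* FindTrackSpline(UWorld* World, TSubclassOf<AActor> TrackSplineClass)
    {
        AActor* TrackSpline = UGameplayStatics::GetActorOfClass(World, TrackSplineClass);
        if (!IsValid(TrackSpline))
        {
            UE_LOG(LogTemp, Error, TEXT("Cannot find track spline actor in the level"));
        }
        return TrackSpline;
    }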

Hello everybody. When doing the imitation training part of the tutorial, I'm getting these warnings just a few seconds after hitting Play:

LogScript: Warning: Attempted to access SportsCar_PawnML_C_UAID_50EBF64B624EEB4002_1665053517 via property ControllerActor, but SportsCar_PawnML_C_UAID_50EBF64B624EEB4002_1665053517 is not valid (pending kill or garbage)
    BP_SportsCarAIController_C /Game/VehicleTemplate/Maps/UEDPIE_0_VehicleExampleMap.VehicleExampleMap:PersistentLevel.BP_SportsCarRecordingManager_C_UAID_50EBF64B624ECF4402_2009780741.LearningAgentsManager.Controller_3
    Function /Game/LearningAgents/BP_SportsCarAIController.BP_SportsCarAIController_C:EvaluateAgentController:0059
PIE: Error: Blueprint Runtime Error: "Attempted to access SportsCar_PawnML_C_UAID_50EBF64B624EEB4002_1665053517 via property ControllerActor, but SportsCar_PawnML_C_UAID_50EBF64B624EEB4002_1665053517 is not valid (pending kill or garbage)". Node:  Return Node Graph:  EvaluateAgentController Function:  Evaluate Agent Controller Blueprint:  BP_SportsCarAIController

The weird thing is that it runs fine and then all of a sudden this happens. Any thoughts?
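The only workaround I can think of is checking validity before using the actor, along these lines (a C++ sketch of what the Blueprint function would do; the class name is a placeholder):

    // Sketch: skip the evaluation when the tracked actor is pending kill instead
    // of dereferencing it. ControllerActor mirrors the property named in the
    // warning above; AMySportsCarAIController is a placeholder class name.
    void AMySportsCarAIController::EvaluateAgentController()
    {
        if (!IsValid(ControllerActor))
        {
            return; // agent pawn is pending kill or garbage; skip this evaluation
        }
        // ...normal evaluation logic using ControllerActor...
    }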

I'm getting an error showing that the Spawn Actor is not found, and also that the torch.nn module is not found.

Yeah, I'm getting the same warning about the Spawn Actor.

This was caused by my pawn settings; I had activated collisions, so yeah… But now that I mention it, what would be a good way to watch for collisions and prevent them?
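One idea I had is to listen for the pawn's hit events and flag them so the manager can penalize or reset the agent (a C++ sketch with placeholder names; the Blueprint equivalent would bind the same Event Hit):

    // Sketch: bind OnActorHit in BeginPlay and record collisions. AMyLearningPawn,
    // HandleHit, and bHasCollided are placeholders; HandleHit must be declared as
    // a UFUNCTION() in the header for AddDynamic to work.
    void AMyLearningPawn::BeginPlay()
    {
        Super::BeginPlay();
        OnActorHit.AddDynamic(this, &AMyLearningPawn::HandleHit);
    }

    void AMyLearningPawn::HandleHit(AActor* SelfActor, AActor* OtherActor,
                                    FVector NormalImpulse, const FHitResult& Hit)
    {
        bHasCollided = true; // read by the manager as a penalty or reset condition
    }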