Cartwheel (Generalized biped walking control) test with source code


I found the open-source Cartwheel project a while ago and tried integrating it into Unreal. Cartwheel is used to control articulated biped characters. It supports the following:

  • Inverted pendulum foot placement
  • Uses the Jacobian transpose to compute gravity-compensation torques and to apply a virtual force that helps regulate velocity
  • Reacts to external forces, e.g. pushing / hitting the character. It also adjusts to the character’s weight distribution, e.g. a backpack, or carrying a large gun in one arm (leaning)
  • Can create different walk styles which can be applied to different character types, e.g. a drunk walk applied to a giant or a dwarf
  • Can set target direction, movement speed and step height. Reduces leg overlap when turning
  • The demos show tasks like reaching, pulling and pushing a crate, lifting and moving a crate, navigation over and under obstacles, stairs and crowd simulations.
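For anyone curious, the inverted-pendulum foot placement in the first bullet boils down to a very small rule. A minimal 1-D sketch (the function name and constants are mine for illustration, not Cartwheel's):

```python
import math

def ipm_foot_offset(com_velocity, com_height, gravity=9.81):
    """Linear inverted pendulum rule of thumb: how far in front of the
    centre of mass to place the swing foot so the step absorbs the
    current forward velocity."""
    return com_velocity * math.sqrt(com_height / gravity)

# Walking forward at 1 m/s with the COM ~1 m above the ground
# suggests stepping roughly 0.32 m ahead of the COM.
offset = ipm_foot_offset(1.0, 1.0)
```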

The videos from the site explain it better.

The base code has been slightly modified to compile in Unreal (C classes => C++ classes, warnings ignored, etc.). It would probably be better to just use a library. Some of the Python scripts have also been ported. It’s all been hacked together because I just wanted to test it. I probably won’t continue working on it due to performance issues.

The main downside I found with Cartwheel is that the simulation requires a very small time step to be stable (2000 updates per second). After around three characters the frame rate dropped below 30 fps. Another problem is that a lot of the samples in the videos aren’t included in the Python scripts. The ODE and Unreal coordinate systems also don’t match.

Code setup
Download the original Cartwheel source from here (required to convert styles).
Create a project called “Cart” so you don’t need to change all the includes.
Add the source code.
Open Cart.Build.cs and update the include paths.
Add a SNMApp actor to the scene. In the Cart group make sure both checkboxes are ticked, that “Simulation Seconds Per Second” = 1 and “Desired Simulation Timestep” = 0.0005.

Basic physics test
Add a plane. Then add a CWRigidBody component. In the Cart settings group: tick “Enable rigid body”, tick “Locked”, Mass = 1, Friction Coeff = 2.5, Restitution Coeff = 0.35. Then add a UCWPlaneCDP component as a child of the rigid body. In the Cart settings: origin (0,0,0), normal (0,1,0) [NOTE: the y axis is up].
Add a cube somewhere above the ground (on the y axis). Add a CWRigidBody component (same settings as the plane). Add a CWBoxCDP component to the rigid body: top pos (-50,-50,-50), base pos (50,50,50).
Run the simulation and make sure the box hits the ground. From the left view the box should fall to the left and the ground plane will be vertical.
It just uses debug lines so it’s easier to see in wireframe mode.

Convert walk styles
If you just want to try the converted types included you can skip this step (Just add BipV3_EditableWalking from the Content folder inside the attached zip).
Walk styles can be created in the original Cartwheel app. Existing styles can be found in “cartwheel-3d\Python\Data\Characters”.
Create two data assets. One CharacterControllerDataAsset called “BipV3_EditableWalking” and one CharacterControllerDAConverter called “Convert_BipV3_EditableWalking”.
In the “Convert_BipV3_EditableWalking” data asset assign the output to “BipV3_EditableWalking”.
In the original Cartwheel source code find “cartwheel-3d\Python\Data\Characters\BipV3\Controllers\”. Open a style file, trim out the start and end, and paste it into the converter, e.g. it should start with "name = " and end with “]”, not “)”. Or just use the attached file (BipV3_EditableWalkingExample.txt).
In the scene add a CharacterControllerDAConvActor and tick “Convert data” and set converter to “Convert_BipV3_EditableWalking”.
Run / stop the game, then open “BipV3_EditableWalking”. Add any character to the name field and save (it doesn’t seem to auto-save on close if nothing has changed). Untick “Convert data” in the CharacterControllerDAConvActor, or just delete it.

Adding the character
Make sure the BipV3 data asset is included from the download.
Add an empty actor. Add an InstantChar component. In the Cart section set “character description” to “BipV3”, set “character controller data asset” to “BipV3_EditableWalking”, tick “Enable Character”, untick “Log controller” and tick “Override behaviour setting” if you want to control the character.
In the “Cart Behaviour” section update the settings to change the speed, direction, etc. Note: behaviour headings are in degrees. This will only work if “Override behaviour setting” is ticked. Example settings for the character: “Behaviour Speed”: 1, “Behaviour Step Time”: 0.6, “Behaviour Coronal Step Width”: 0.1, “Behaviour Heading”: 20.

Try adding a box in front of the character or dropping an object on them. The character should be able to handle collisions. If they do fall over they currently keep trying to walk (they should be disabled instead).
Some classes aren’t included from the original project, including: ControllerPerturbator, HumanoidIKCharacter, RLState, UserInteractionPolicy. They might be worth checking.

This is very exciting. We also tried something like this a while ago, but purely physics based, without ‘helper’ forces to keep the biped upright.
In the method above there are such ‘helper’ forces to keep the biped upright, I think.
We constructed an active ragdoll, i.e. only rigid bodies and joints. The joints have motors to move the rigid bodies.
Then every single movement is caused by those joint motors. No helper forces or anything else. A UE4 walking robot, so to say. Terribly difficult to get right.

Here is what we did with biped walking.

This here seems purely physics based, i.e. no helper forces or animation blending, only rigid bodies and joints. We always found this quite impressive.

Wow! A 0.005 fixed timestep! Is that due to PhysX or Cartwheel? Or possibly that the ODE/PhysX coordinate systems don’t match? That’s pretty interesting…

@BlueBudgie: I don’t think cartwheel necessarily uses helper forces… though it might pull some of the simulation into ODE, which is not ideal. Have you guys thought about releasing the project or the source code so we can see how you did it? It’s very impressive!

Hmm sad to hear that, looking very cool.

Wow, impressive, love the outtakes. ^^

Or maybe this. You see in many situations that it is not a true simulation. Like whenever the biped lifts a leg, it does not tilt one bit to the side (3:15min), while it also does not do any re-balance motion to actually stay on one leg (like shifting center of mass on top of the standing leg). In the other video with the creatures you see how the bipeds shift weight during each step to actually not tilt to the side when they lift legs. Also the upper body and arms react to help shifting the weight. Anyway, just nitpicking a bit because that bit of shifting weight kept us quite busy with our biped.

I tried my hand at genetically evolving muscle-controlled “animals”, but I could not find a good way, in Unreal specifically, to do real simulations of the “animal” without rendering it. Can this be simulated in memory?

You would do the ‘animal’ as an active ragdoll with rigid bodies and PhysX motors, then develop the algorithms to control the PhysX motors in a way that achieves walking (no helper forces or other shortcuts, just the joint motors). So it would be simulated in the UE4 physics thread.

I do not want to hijack the thread for self-advertising, but this is exactly what we did with the Bot in our game, and it is what gives it the real-life remote-controlled toy behavior. It is kind of a four-legged PhysX ‘animal’, or a ‘PhysX joint motors robot’. It can walk a bit, but it was hell to get there, and the walking is only useful if the terrain is not too bumpy. You would not believe how difficult true physics-based walking is, even on four legs. Hence the rocket thrusters and the other PhysX toys :slight_smile:

Anyway, hope the OP keeps going with this. I would not give up on it too early because of some performance issues.

I thought it didn’t use any helper forces, but the balancing in Part 2 at 3:15min is very suspicious. In this video I tried increasing the time the leg stays in the air. I think that standard walking doesn’t use helper forces, but I’m not sure about the task demos (missing source code).

I really liked the demos, I’m actually trying to do something very similar. It would be cool if Boston Dynamics made video games; fighting something like BigDog and Atlas with physics reactions would be awesome. I’m probably going to try machine learning next; darkZ has some cool publications on implementing physics-based controllers for bipeds.

Yeah, the 0.0005 (extra zero) timestep is a bit much. Cartwheel uses ODE as its physics engine, so it doesn’t require PhysX. In ODE y is up and in Unreal z is up; the mismatch has no effect on performance, it just makes the ground plane vertical instead of horizontal. I’m guessing it needs this timestep to handle foot impacts with the ground (bouncing, sliding).

I’ve tried running Simbody for muscle simulation in UE4. It was really easy to set up but it has a weird collision system. I found the PhysX experimental articulation system works really well for matching physics animations, but it’s a bit of a black box. The following examples don’t have any controllers, they just play back animations.

Is fixed timestep actually required or can it also do semi-fixed or similar to cope with frame rate variations?

The breakdance video makes it look very suspicious :slight_smile: This is something that is so difficult to achieve with a 100% physics simulation; if this was a real simulation the creator would have won the tech Oscar by now :slight_smile:
It would be so demoralizing if this was a true simulation… we do 100% simulation with no helper forces or shortcuts for all our work, and look at how our biped ‘walks’ :frowning: (from 4:40min)

Is it possible to make a simple movement pattern in Cartwheel where the biped just lifts one leg and keeps standing like that, to see the balancing/simulation? Or where it walks down some stairs, to see how it falls?
Also, somehow it looks like the inertia, at least of the upper body and arms, is very small. Or maybe it is just the visualization.

No need for Boston Dynamics to make a physics robot fighting game, we will get there.

I think the fixed timestep is required. Although the characters can still walk with higher values, they don’t respond well to collisions (they fall over). I forgot to mention that the physics time step isn’t related to Unreal’s frame rate. In Unreal, Update(deltaTime) just loops through the physics update, e.g. Unreal calls Update(deltaTime = 0.02), which calls odePhysics.Update(deltaTime = 0.0005) 40 times. It could probably get better results using something like Bullet’s Featherstone implementation.
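The substepping loop described above can be sketched like this (`step_fn` stands in for the ODE update; the names are illustrative, not from the actual port):

```python
class FixedStepper:
    """Run a fixed-step physics update inside a variable-rate game tick.
    An accumulator carries any leftover fraction of a step to the next
    frame, so the physics always advances in exact multiples of dt."""
    def __init__(self, step_fn, dt=0.0005):
        self.step_fn = step_fn      # e.g. odePhysics.Update
        self.dt = dt                # Cartwheel's required fixed step
        self.accumulator = 0.0

    def tick(self, frame_dt):
        self.accumulator += frame_dt
        steps = 0
        while self.accumulator >= self.dt:
            self.step_fn(self.dt)
            self.accumulator -= self.dt
            steps += 1
        return steps

# A 0.02 s frame at dt = 0.0005 runs ~40 physics substeps.
stepper = FixedStepper(step_fn=lambda dt: None)
substeps = stepper.tick(0.02)
```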

It seems to use the Jacobian transpose a lot. I guess the small time steps help to reduce errors, because the Jacobian transpose math is derived in continuous time. I can’t remember the technical name for it, but it’s where the real world would be a smooth graph while the game physics produces a stepped graph, e.g. for velocity / acceleration.
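The smooth-vs-stepped effect described above is easy to demonstrate. A quick sketch comparing explicit Euler at the two timesteps from this thread against the exact solution of a stiff spring:

```python
import math

def euler_spring(dt, t_end=1.0, k=100.0):
    """Explicit Euler on x'' = -k*x with x(0)=1, v(0)=0.
    The exact solution is cos(sqrt(k)*t); Euler drifts in phase and
    gains energy, and the error shrinks as dt shrinks."""
    x, v, t = 1.0, 0.0, 0.0
    while t < t_end - 1e-12:
        a = -k * x          # acceleration from the old position
        x += v * dt         # advance position with the old velocity
        v += a * dt
        t += dt
    return x

exact = math.cos(math.sqrt(100.0))            # exact x(1)
err_coarse = abs(euler_spring(0.02) - exact)  # large: visibly gains energy
err_fine = abs(euler_spring(0.0005) - exact)  # small: tracks the curve
```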

There is a BipV3BalanceController class but it’s not set up (hard to balance on one foot). It also needs extra code for stairs. It looks like it uses the Jacobian transpose to create virtual forces. So as long as a foot is on the ground you can ask it for the joint torques required to maintain balance and keep the upper body upright. There aren’t any joint limits, so when the character hits the ground it still tries to keep the body upright. To test this I changed the gravity axis, and without its feet on the ground it couldn’t keep the upper body upright.

It’s probably just using linear momentum for balance. I thought this angular momentum balance example was amazing (the sliding-on-ice part at 2:40 is great). It might be possible to extend Cartwheel to do something similar.

I tried making a biped using standard PhysX joints before (not the same as my previous articulation videos). It looks like you made way more progress than me. I couldn’t even get it past three steps before it would fall over. I was trying a Simbicon controller but never managed to get it working.

I found these sites interesting.

What’s interesting is that they’re using a ridiculous simulation timestep as well…

I guess it was purely for ground-foot collision stability?

Those are some incredible results though…

Wow, didn’t spot the 10khz step size. After seeing this I think I might try another approach. It looks like the 3D One-Leg Hopper (1983) by Marc Raibert was the starting point for Boston Dynamics. You can find a lot of his original papers on the internet. The One-Leg Hoppers algorithm is reasonably simple and doesn’t require much processing. I might try to simulate the hopper and work up to a two or four legged machine.

I don’t mind this thread getting hijacked. I’d really like to find out about similar projects like BlueBudgies or links to related papers.

The Macchietto approach from the video looks interesting. Momentum control is something we also thought about, but then we stopped the project. The idea was to measure the velocities of the limbs and use that as input to derive/support the balance reactions.

Can we clarify some terms: the Jacobian is used to numerically solve higher-order inverse kinematics bone chains (> 2 bones), right?
For our biped we somehow did not see the need to go higher than 2-bone IK. This can be solved analytically with some cosine relations, very quickly. And only the arms and legs use IK. The other joints like the ankles, hips or head are single joints decoupled from the rest.
What we simply do is monitor the center of weight (COW) of the ragdoll biped and then program all the joints and the 2-bone IKs to make sure the COW is pushed towards the top of the legs, or wherever we want it. Like when the biped is pushed forward, the arms go back and the ankle joints push back and so on. We basically just looked at what humans do with their limbs when they get off balance. We had the kids doing gymnastics, so to say, in front of us. Then we just programmed the joint reactions in.

Additionally, we can move the COW artificially to induce certain limb movements: for leaning to the left side we can push the COW to the right side, and the biped thinks it is off balance to the right and starts putting weight on the left side. We can also vary the joint motor strengths to have the biped act very stiff, or like a drunken man constantly losing balance in random directions (we have that mode :).
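The COW-monitoring scheme described above can be boiled down to a tiny 1-D sketch (the function names and the gain are made up for illustration, not from their game):

```python
def center_of_mass(bodies):
    """bodies: list of (mass, x) pairs; returns the weighted COM (1-D)."""
    total_mass = sum(m for m, _ in bodies)
    return sum(m * x for m, x in bodies) / total_mass

def ankle_correction(bodies, support_x, gain=50.0):
    """Proportional 'push the COM back over the support' response:
    the further the COM drifts from the support point, the harder
    the ankle (or any joint) pushes it back."""
    return -gain * (center_of_mass(bodies) - support_x)

# Biped leaning forward: COM ahead of the foot, so the correction is
# negative (push back). Artificially offsetting support_x induces a
# deliberate lean, as described in the post.
torque = ankle_correction([(10.0, 0.1), (5.0, 0.3)], support_x=0.0)
```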

Like from 3:35min or earlier where we drop the box on the biped. The biped dropping on a ball is actually the showcase piece for true simulations for us.
There you see the arms and so on moving in the opposite off-balance direction. Pretty much what people do when off-balance.

(here goes a bit of self advertising: R.C. Bot Inc. is also all done with true simulation with all its replay value due to its unpredictable outcome:) )

btw, here is the thesis that belongs to the video from the 2nd post

Cool, thanks for the link. I’ve only seen the short paper before, it’s great to be able to view the full thesis.

I don’t really understand Jacobians so the following might be completely wrong. You probably already know most of this, but I’ll include it in case anyone else is interested. Jacobians are used to define the dynamic relationship between two different representations of a system. I think the most popular use of Jacobians is for IK, e.g. relating arm joint rotations to the end-effector position.

Studywolf’s site has some good tutorials. It starts with a two-link arm and demonstrates the Jacobian between the end point’s linear velocity and the joints’ angular velocities. It then shows the relationship between end point force and joint torques. In part 3 it shows how to include mass, inertia and gravity, so the full dynamics of a robot arm are tau = M(q)*q'' + C(q, q')*q' + g(q), which tends to show up in a lot of papers.

He then creates a PD controller in joint space. The end result is a PD controller that cancels out the effects of inertia and gravity. For example, if you applied torque to the upper arm it would start accelerating, but with zero input torque applied to the lower arm its local rotation wouldn’t change. This also means that the PD controllers don’t need high gains.
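The gravity-cancellation part can be written out for a planar two-link arm in a few lines. This is a generic textbook sketch (the link masses/lengths are arbitrary), not the Studywolf or Cartwheel code:

```python
import math

def gravity_compensation(q1, q2, m1=1.0, m2=1.0, l1=0.5, l2=0.5, g=9.81):
    """Joint torques that exactly cancel gravity for a planar 2-link arm
    with point masses at the link ends. Angles are measured from the +x
    axis (q2 relative to link 1). Derived from the potential energy
    U = m1*g*y1 + m2*g*y2 via tau_i = dU/dq_i."""
    tau1 = ((m1 + m2) * g * l1 * math.cos(q1)
            + m2 * g * l2 * math.cos(q1 + q2))   # shoulder
    tau2 = m2 * g * l2 * math.cos(q1 + q2)       # elbow
    return tau1, tau2

# Hanging straight down (q1 = -pi/2): gravity exerts no moment, so the
# compensation torques are ~zero. Held out horizontally they are largest.
```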

The previous example had a fixed base and the end effector moved, e.g. an arm / hand. Another use is to apply virtual forces. The thesis linked above has a good example on page 87.

So if you’re standing on one leg and someone pushes on, say, the shoulder, you can find what torques to apply to the joints to counter that force. Or if you want to start moving forward (start walking / falling) you could apply a virtual force at the pelvis. I think you might also need to pass in the ground reaction forces, and it doesn’t work very well if the foot slips. From the existing projects it looks like the required simulation time step makes it impractical for real-time games.

The Studywolf site is a really nice read! We actually never really looked too deeply into any numerical solutions but instead limited ourselves to two-bone IKs, which are coupled. They can be solved in one shot with 3 cosine & sine relations analytically; then you feed the result into the joint motors as targets and strengths, and let PhysX do the rest.

If you look at NaturalMotion’s implementation, which is also used in GTA, then I wonder how they got it efficient.
I would love to see the 3D One-Leg Hopper realized in UE4! How does it work? Is it like one stretched leg (2-bone IK) and then a balance mass on the little platform that is constantly adjusting?
It would make a fun little ‘boxing’ game having two of them in the arena.
Thing is though with games like this, there is no blood, no heads flying off and such. There will be no dreamy eyes looking at a blood stained chain saw.
But actually, with the one-leg hopper, you could make a platform jumper as well. That would be quite fun. You could mount a chain saw on it.

For quadruped robots, here are a few ‘walking’ and physics-nonsense outtakes from our game. It is 100% simulated, with no helper forces and so on. (We integrated many 100% physics-based helper systems for gameplay fun… stabilized gyro flying, etc.)
Walking is only used for fine adjustment of the robot’s position. It is just too difficult to get it ready as a major gameplay element for all-terrain use and with good performance.
The main physics-based means of transportation are rocket thrusters, the carrier, harpoons, the catapult and balloons.
We did it in a way that you can start the walking cycles from any position: low, high, on three legs, etc.
Generally the 100% simulation approach allows to basically do whatever you can think of in real life with such a robot.
You can control and move around all limbs on their own if you want.
The first one is from a level, the second one from the training place that we set up for players to practice controls.

I kind of want to make an attack on titan game after seeing the boosters and grapple hooks in your game. I should be getting a Steam Vive next month. It would be kind of cool to aim the two grapple hooks with the controllers.

I haven’t looked at the 3D One-Leg Hopper paper for a while. It doesn’t have any balancing mass on top. It was really just 2 motors to control the direction of the leg, and the leg was like a pogo stick. It might have also had a weak piston to input energy into the jumps. As a 2D example it would just be a long lightweight pogo-stick leg (you can simulate a spring that doesn’t lose energy) and a single motor to control the leg angle. The upper body / base has a much higher mass, so when it’s in the air you can change the leg angle without affecting the base rotation. It starts in the air and is dropped on the ground. You find out how much time the hopping motion spends on the ground (hits the ground, compresses, springs up, leaves the ground) and in the air. All movement is controlled by changing the leg angle while it’s in the air. This means you only need a weak motor that’s strong enough to move the lightweight leg. Whenever the leg is in contact with the ground the leg motor is basically turned off; the heavy upper body has a high inertia, so it should remain upright as it tips over and springs back into the air.

It just sets the leg angle when it’s in the air and jumps around using the inverted pendulum model. For example, you could drop it down and tell it to jump 1m forward / right. It would angle the leg so it contacts the ground a bit behind the COG. Then the spring would compress as it falls forward, and it would jump forward as the spring expands. It travels forward and angles the leg in front of the COG. There’s a certain angle that will cause the hopper to jump straight up on landing. Any more than this will make it jump back, and any less will make it move forward. So you can control the velocity and foot placement. If you stick two hoppers together you can create a running robot.

The paper has a bunch of formulas. It might be better to use some kind of machine learning to avoid physics engine errors, and maybe include base angle correction.
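Raibert's foot-placement rule from those papers is surprisingly compact. A 1-D sketch (the gain value is illustrative, not from the paper):

```python
def raibert_foot_target(v, v_desired, stance_time, k_v=0.05):
    """Forward foot offset (from the hip/COG) to aim for at touchdown.
    The first term is the 'neutral point' that yields a purely vertical
    hop; the feedback term shifts the foot to accelerate or brake
    toward the desired velocity."""
    neutral = v * stance_time / 2.0
    return neutral + k_v * (v - v_desired)

# At the desired speed the foot lands at the neutral point; moving too
# fast places the foot further forward, which brakes the hopper.
assert raibert_foot_target(1.5, 1.0, 0.2) > raibert_foot_target(1.0, 1.0, 0.2)
```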

The one leg hopper is definitely interesting.
I tried yesterday with a counterbalance mass on top that is shifted in the opposite direction to the tilt of the hopper. No hopping yet, just the narrow capsule connected to the disk by a fixed constraint, and then the counterbalance mass connected to the disk by a constraint. This constraint then uses a linear motor to shift the counterbalance mass, and the input for the shift is some center-of-weight stuff.
It works and balances on the tip of the narrow capsule, but after a while a spiraling motion kicks in until it goes down. Quite solid balancing for a quick try, though. In the screenshot the hopper tilts to the right and the little balance mass shifts to the left.


It’s great to see this experimentation with physics-based characters in UE4!
I’m one of the coauthors on two of the papers mentioned above, “Generalized Biped Walking Control” (GBWC) and “Flexible Muscle-based Locomotion for Bipedal Creatures”.

It looks like you’ve done a great job of getting the GBWC control running in UE4. The need for the small timestep comes from the high gains used by one of the control components, the underlying proportional-derivative (PD) controllers. This can be addressed by using the ideas in the paper “Stable Proportional Derivative Controllers” (Stable-PD), which should allow the use of a 5ms time step instead of a 0.5ms time step. For the control, it does not matter whether you use PhysX, Open Dynamics Engine (ODE), Bullet, or any other dynamics engine; however, the Cartwheel implementation is currently written to use ODE. Implementing Stable-PD will require access to the internals of the simulator. We have implemented Stable-PD with ODE at a 5ms time step in one of our latest papers (Control Graphs, see below). The GBWC control does not use any helper forces (unless there is a bug, which is always a possibility, although we always do our best to ensure that this is not the case, most directly by checking that the character still falls if perturbed sufficiently).
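For anyone wanting to experiment, the Stable-PD idea is visible even on a single 1-DOF joint. A minimal sketch of the implicit form (a real implementation needs the simulator's full mass matrix, which is why access to the internals is required):

```python
def stable_pd_step(q, qd, q_target, kp, kd, inertia, dt):
    """One implicit (Stable-PD) step: the torque is evaluated at the
    predicted next state, tau = -kp*(q + dt*qd - q_target) - kd*qd_next
    with qd_next = qd + dt*qdd. Solving inertia*qdd = tau for qdd
    gives the closed form below."""
    qdd = (-kp * (q + dt * qd - q_target) - kd * qd) / (inertia + kd * dt)
    qd += qdd * dt
    q += qd * dt
    return q, qd

def explicit_pd_step(q, qd, q_target, kp, kd, inertia, dt):
    """Ordinary PD for comparison; it blows up at high gains and large dt."""
    qdd = (-kp * (q - q_target) - kd * qd) / inertia
    qd += qdd * dt
    q += qd * dt
    return q, qd

# At kp=2e5, kd=2e3, dt=0.005 the ordinary PD diverges within a few
# steps, while the Stable-PD form converges smoothly to the target.
```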

The idea of Jacobian Transposes is really pretty simple. Imagine a jointed statue that always maintains its pose, i.e., does not move, even when external forces are applied. Imagine that the “base” link of your choice, e.g., perhaps the pelvis, is held fixed, and now an external force is applied on one of the hands. Given that applied external force, one can compute the torque that is applied to each of the joints that lie on the path in between the base link and the point of application of the force. The Jacobian Transpose does exactly this; tau = J^T * F. Now this can also be used in the opposite direction, i.e., if you want a hand (or a foot) to apply some external force F on the environment, you can therefore use J^T to compute the required joint torques needed to achieve this. The advantage of using Jacobian Transposes is that it lets you design control components, such as forces, that work in the more abstract (and convenient) Cartesian space of the end-effectors, rather than needing to worry about what happens at individual joints.
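The tau = J^T * F description above, written out for a planar two-link chain (a generic sketch, not Cartwheel's implementation; the link lengths are arbitrary):

```python
import math

def jacobian_transpose_torques(q1, q2, fx, fy, l1=0.5, l2=0.5):
    """Joint torques needed for the end effector of a planar 2-link
    chain to apply the force (fx, fy) on the environment: tau = J^T * F,
    where J = d(end-effector x, y) / d(q1, q2)."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    # End effector: x = l1*c1 + l2*c12, y = l1*s1 + l2*s12
    j11, j12 = -l1 * s1 - l2 * s12, -l2 * s12   # dx/dq1, dx/dq2
    j21, j22 = l1 * c1 + l2 * c12, l2 * c12     # dy/dq1, dy/dq2
    return j11 * fx + j21 * fy, j12 * fx + j22 * fy

# With the arm stretched along +x and pressing straight down, each joint
# resists a moment proportional to its distance from the hand; pushing
# along the arm's own axis needs no torque at all.
```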

Lastly, a bit of shameless promotion of recent work done by some of the great students and postdocs in my research group at UBC, in case some folks on this thread might also find that to be of interest… see the three 2016 papers listed at:

Hope that these comments are useful!


Not sure how this would ever go for a competitive game - too much inconsistency.
For VR, the performance hit (currently) may just be too much.
Interesting though.

Would this style of learned locomotion be the end of animators or just a helping hand?