Mobility MoCap Pack 1 - 136 Mobility Motion Capture Animations

Thanks! Yeah, I think I'm getting your points. The finer details/decisions of animating the root come down to how any one animation is intended to fit or interact with other animations, driven by real-time player input and/or design. That's why having a specific controller/design in mind can ultimately dictate those decisions, and including a controller with the pack would validate and implement those decisions. And be a big value-added feature, of course.

An included controller for Ninja would be a ways off, so it seems in lieu of that we just create root motion that follows the fundamentals and "likely" basic usage. There is no telling how all users will end up implementing them, so delivering a solid baseline version for now seems the way to go. As we add more animations to fill in gaps and add functionality, along with a potential controller, adjustments could be made so all play nicely together. I'd still like to put the first version out to a couple of beta users to get some initial feedback.

You mentioned root motion is not always an ideal design; we've had a couple of experienced developers/programmers using the packs tell us they prefer not to use root motion and never do.

And yeah, we always include the source FBXs and a basic edit rig for MotionBuilder. The packs were originally targeted at developers and CG houses that couldn't afford a custom studio shoot and just needed solid "raw" animations to work with at a bargain and edit or re-purpose themselves. Making them totally plug and play is an ongoing process. Glad to see guys like you taking advantage and getting your fingernails dirty. :slight_smile:

I just wanted to chime in and say that Motus have been awesome to work with - so supportive and open to suggestions for improvements.

I'm looking forward to continuing to work with them and to releasing a controller in the not-too-distant future.

Thanks for the superb zombie sale! :smiley: Heading to try them now.

For some reason, I have to import every single animation after adding the content to the project. Meaning the FBX import dialog opens up etc. I guess this is because they’re not targeting a (the UE4) skeleton. Is there a way around this?

EDIT: Hang on, after cancelling the import, all animations seem to be present. Still a little puzzling…

That is an odd one, not sure. :confused:

Actually, I have a theory. When we create the packs, the "Source" folder is typically added after all UE4 content is locked down, tested, and finalized. "Source" contains the original FBX files, editing templates, motion list, etc. When opening the pack project for a final check after adding "Source", I have noticed it will occasionally think the "new" FBX files it finds need to be imported and initiate the dialog, which I cancel, and it never tries to import them again on reopening the project.

This is the first I’ve heard of a user seeing this, but that seems like what’s going on.

Oh, OK, no problem. At first I thought I had to import the 166 animations one by one… :stuck_out_tongue: Great animations, btw!

Zombie anims are working very nicely so far. Maybe the only thing missing is a fall-down anim to enter crawling. But looking awesome. :cool:

Nice! Good suggestion for the fall to crawling, it goes on the list for the next shoot.

I really like the animations from the Mobility pack and am now starting to use them in my game. Being a programmer rather than an artist, animation stuff isn't really my home turf, so I have a few questions about the "philosophy" behind the set of animations. Probably basic questions, but please bear with me…

  1. There are animations for walking, jogging and running in various directions (plus the corresponding start/stop animations), so I would usually go ahead and make a 2D blendspace for them. Now, there are also some additional takes like "run, turn 90 degrees, run" that somehow would have to be properly transitioned from the blendspace (how? Should I first accelerate/slow down the character to the appropriate speed, and then apply the turn animation?). Is this the way the pack is meant to be used?

Alternatively, I could also leave out the blendspaces and just use the animations, giving a more discrete behavior (which would be OK for my game). The mentioned transitions would be trivial; however, there are no transitions between the walk-jog-run anims (just with stand_relaxed), so changing speed would be very visible.

Final possibility would be to use Animation Montages.

So which method should I aim for?

  2. There are separate animations for stopping with left or right foot up. The question is: How do I know which one is up? Should I use animation notifies for that?

  3. The Start animations don't seem to blend perfectly to the walk etc. animations. E.g. if I first play "stand_relaxed_to_walk_f" and then "walk_f", the character is leaning slightly forward at the end of the start animation and then suddenly straightening up after the transition. Can this be mitigated by some settings?

Sorry again for those basic questions :rolleyes: And thanks again for a great animation pack!

  1. Root motion takes to data-driven design like a duck to water: the data you need for non-linear motion is built directly into the animation, so the speed of a turn, or of movement in general, is controlled by the speed of the mesh, and the direction is controlled by the direction the root is facing. These two properties make root motion just another data set that can be used in a 2D blend space as a two-dimensional array. In code-speak, the "data" can be structured and addressed just like any other array. Logically, animation is nothing more than the translation of data over time and space, and with both variables built into the same data set there is no need to control velocity with a separate speed variable within the blend space.

A typical configuration would be an 8-way movement blend space that corresponds directly to typical WASD keyboard input, where each axis of movement is 1 or -1 with zero being idle, and the blend space's X/Y range runs from -1 to 1. It would work just as well with a joystick. That's the long way around to saying a blend space can be used as storage with addresses, and by input alone you blend out of one address and into another.
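In rough C++ terms, the idea looks something like the sketch below. This is not code from the pack; the class and property names are made up, and you could do the same thing in the EventGraph of a Blueprint anim instance. The two floats just get plugged into the blend space's X/Y pins in the AnimGraph.

```cpp
// Minimal sketch (hypothetical names): turn pawn input into the -1..1 X/Y
// values a WASD-matched 2D blend space expects.
#include "Animation/AnimInstance.h"
#include "GameFramework/Pawn.h"
#include "LocomotionAnimInstance.generated.h"

UCLASS()
class ULocomotionAnimInstance : public UAnimInstance
{
	GENERATED_BODY()

public:
	// Plugged into the blend space's X/Y pins in the AnimGraph.
	UPROPERTY(BlueprintReadOnly, Category = "Locomotion")
	float MoveRight = 0.f;    // A/D or stick X, -1..1

	UPROPERTY(BlueprintReadOnly, Category = "Locomotion")
	float MoveForward = 0.f;  // W/S or stick Y, -1..1

	virtual void NativeUpdateAnimation(float DeltaSeconds) override
	{
		Super::NativeUpdateAnimation(DeltaSeconds);

		if (const APawn* Pawn = TryGetPawnOwner())
		{
			// Raw movement input from the last frame, converted to the pawn's
			// local space so it lines up with the blend space axes.
			const FVector WorldInput = Pawn->GetLastMovementInputVector();
			const FVector LocalInput = Pawn->GetActorRotation().UnrotateVector(WorldInput);

			MoveForward = FMath::Clamp(LocalInput.X, -1.f, 1.f);
			MoveRight   = FMath::Clamp(LocalInput.Y, -1.f, 1.f);
		}
	}
};
```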

Once you have the matched blend space set up, things get a bit more in-depth when you want to implement the animation "data" within an AnimGraph or EventGraph, as it now becomes more of a math problem based on the kind of result you are looking for. This is why I'd say the money maker, as far as next-gen goes, is not in the animations themselves but in smaller packages that include matched 2D blend spaces.

In any case, once you have a matched blend space you can control all aspects of the "base" by modifying its behavior from input in the EventGraph, including speed and direction changes applied to a set of matched blend spaces, and not have to drive yourself crazy using unique sets.

  2. A notify is usually the best workaround, but once again, putting everything into a matched blend space makes it a math problem to solve.
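As a rough illustration of the notify route (not code from the pack; the class name and the flag on the anim instance are made up), a notify placed on each foot-plant frame could just record which foot is down:

```cpp
// Hypothetical notify: placed on the foot-plant frames of the cycles, it
// records which foot is planted so the stop logic can pick the matching
// left/right-foot-up stop animation.
#include "Animation/AnimNotifies/AnimNotify.h"
#include "Components/SkeletalMeshComponent.h"
#include "FootPlantNotify.generated.h"

UCLASS()
class UFootPlantNotify : public UAnimNotify
{
	GENERATED_BODY()

public:
	// Set per notify in the animation editor: true for the left foot.
	UPROPERTY(EditAnywhere, Category = "Foot")
	bool bLeftFoot = false;

	virtual void Notify(USkeletalMeshComponent* MeshComp, UAnimSequenceBase* Animation) override
	{
		// ULocomotionAnimInstance / bLeftFootDown are placeholders for whatever
		// anim instance class and flag your project actually uses.
		if (ULocomotionAnimInstance* AnimInst = Cast<ULocomotionAnimInstance>(MeshComp->GetAnimInstance()))
		{
			AnimInst->bLeftFootDown = bLeftFoot;
		}
	}
};
```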

  3. Typical problem; for a linear transition it's about easing in and easing out. Using a state machine, this would require an exit rule along the lines of "on change, exit at 95% of the current state." Using a root motion blend space, you only need to change the blend weight. Once again, though, you could put all of your animations into a single blend space and just let the blend space do all of the blending for you.
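Just to put the "change the blend weight" idea into rough code (the names and interp speed are made up, and the equivalent in Blueprint is a single interp node): ease a float toward its target each frame and feed it to a Blend node, or a blend space axis, in the AnimGraph.

```cpp
// Ease a blend weight toward 0 or 1 instead of relying on a state machine
// exit rule. The names and the interp speed are illustrative only.
#include "Math/UnrealMathUtility.h"

float EaseBlendWeight(float CurrentAlpha, bool bWantsTargetPose, float DeltaSeconds)
{
	const float Target = bWantsTargetPose ? 1.f : 0.f;
	// FInterpTo gives the ease-in/ease-out feel; 5.0 is an arbitrary blend speed.
	return FMath::FInterpTo(CurrentAlpha, Target, DeltaSeconds, 5.f);
}
```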

Still confusing, I bet, as the overall design and addition of root motion is rather new next-gen tech. It's very robust, since it is not bound by the rules of a movement component, but it does lack a sound knowledge base, and Epic's own documentation is rather weak and confusing (sorry to say). On the other hand, "conversations" of this nature are important, as there is not yet a "standard" for how animation as a data set needs to be configured, which more than anything causes the confusion of yet again having to relearn how someone did something you should already have learned. :wink:

So, philosophy-wise, root motion is just another data set, and by using blend spaces and blending animations by layering, you can build up a much more complex movement system out of logic blocks instead of trying to shoehorn a state change into a state machine.

I did a primer for our group as they were having trouble understanding the design.

P.S.

As you can tell, I'm a huge fanboy of root motion, as "finally" my run animations no longer skate around like Tonya Harding (aka badly :D)

Tried all !@#% day today to get the animations working (first time doing UE4 anims, so bear with me)… and I couldn't get anything usable, sadly. The retarget completely failed on the Epic skeletons (Oculus char from the Showdown demo), some humanoids from another Epic pack (can't think of the name, been working on this game for 2 days straight now), and even the default Epic character… it all goes nowhere :\

Sadly, there are no step-by-step tutorials I've found to go from having a basic skeleton rig + mesh to getting it animated + controllable. And all I'm trying to do is get idle, walk, and turn… maybe a sit if I can figure out how to actually animate characters inside of UE4 (I'll probably just do a fade in/out to a seated position).

And I haven’t even gotten to doing the complex animations for the npc’s yet :frowning:

@FrankieV: Wow, thanks for the detailed explanation. Maybe my question wasn’t so basic after all… I don’t really want to derail this thread with general animation topics, but on the other hand this might be of interest for anyone using this animation pack, so:

  1. Great video, but due to the resolution I couldn’t really see what kind of nodes you’re using in the anim graph. If this isn’t too confidential, could you maybe post a screenshot of your setup?

  2. As I understand it, you’re using the 2D blend space as an easy way to store and index the required animations. So you leave out its ability to blend between parameter values (like have the player move at half speed between walking and running) and just allow the speeds for which you have animations?

  3. How do you apply state transitions (like start/stop walking)? Do you use state machines for that, or also set some variables in the event graph and “manually” blend between animations?

I’ll let Frankie_V and others continue the deeper programming discussion :), but I can add a comment on the stand to walk/jog/run transition animations.

Originally, in these transition animations he took two or three steps past the first step into the cycle, so anyone had the full motion to use or alter as they see fit. UE4 has simple tools to truncate frames from the beginning or end of an animation, so the end user could trim where they want or use the whole thing. After getting feedback and requests we trimmed them all to the first right-foot-up step, matching the phase of all the loops, so they would be more plug-n-play so to speak.

There are minor differences in the exact posture, arm/leg position, etc. in that slice of time at the end of the transition compared to the posture of the edited perfect walk cycle. With additional animation work the poses could be matched exactly (like all the standing and crouching stationary animations have been), but even if the pose matches perfectly there is still typically a noticeable "pop" between the two, as the velocity (speed and direction) of the hips and arms/legs is still slightly different from the cycle. More animation work can fix this of course, by manually editing curves or non-linear editing to blend into the cycle, blah blah blah

Bottom line, all that additional animation work can be time-consuming and ultimately moot, as blending can take care of these minor differences in real time, easing in and out of the transitions as Frankie_V put it. A lot of the time the engine will do a better-looking job in context, on the fly, than a hand-animated fix.

Similar thing with the cycle-to-stop motions. The full-length original animations are included, but also left and right foot up truncated versions. Player input could tell him to stop at any point in the cycle, so the engine can pick the left or right foot up animation depending on which is a closer match at that moment and blend from there to smooth the transition. Run-to-stops are a definite compromise, as in real life (and the full animations) a person doesn't stop on a dime; it takes several steps and leaning back to slow down without falling over. Depending on how responsive gameplay is designed to be, you could leave in the long, realistic "whoooaaa" travel to a stop, or cut it short with more blending so he stops more abruptly.
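For what it's worth, picking the closer match could be as simple as checking which foot is planted at the moment the stop is requested (e.g. via the notify approach mentioned above). A rough C++ sketch, with every name below being a placeholder rather than anything shipped with the pack:

```cpp
// Rough sketch: play the stop animation that matches whichever foot is up,
// and let the montage blend-in smooth the transition out of the cycle.
#include "GameFramework/Character.h"
#include "Animation/AnimInstance.h"
#include "Animation/AnimMontage.h"

void PlayMatchingStop(ACharacter* Character,
                      bool bLeftFootDown,          // tracked by a foot-plant notify
                      UAnimMontage* StopLeftFootUp,
                      UAnimMontage* StopRightFootUp)
{
	if (!Character)
	{
		return;
	}

	if (UAnimInstance* AnimInst = Character->GetMesh()->GetAnimInstance())
	{
		// Left foot planted means the right foot is up, so play the
		// right-foot-up stop (and vice versa).
		UAnimMontage* StopMontage = bLeftFootDown ? StopRightFootUp : StopLeftFootUp;
		if (StopMontage)
		{
			AnimInst->Montage_Play(StopMontage);
		}
	}
}
```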

Long answer to a short question, but I thought I'd address that. A long time ago we were getting a hard time on "another engine's" store, being told we needed to stop wasting time and their money pose-matching or looping anything, since the engine could blend "everything". We didn't think that was the best idea. :wink:

Well, the use of root motion in a video game is rather new, as introduced in Unreal 4, so there really are no "basic" questions that are part of an established knowledge base yet, compared to in-place animation bound to a movement component (aka a bounding box), which has been around forever and would be easy to explain with a simple tutorial. I guess you could call RM a concept and not a process, in that the animation is another data set and not just a bucket of bits and bytes moving around.

  1. Best I can do at the moment, but when I have the time I do plan on doing a "concept" BP using fair-share animations; really, though, this is a case where Epic needs to supply yet another sample. :wink:

  2. Well, as authored, the speed of the player model is controlled by the translation of the root from A to B, so if a run-forward cycle starts at 0 0 0 and ends at 0 0 50, then the distance traveled over that number of frames is the speed variable. This is where matched sets of animations come in, as the sets and their velocities per direction should be balanced and look natural, e.g. the run backwards in relation to the speed of the run forwards. In other words, the overall speed is not as important as matching how the transitions look moving into and out of the current state.

To correct for "required" game speeds, once you build and place a blend space in the AnimGraph, you can control the "matched" set of animations by exposing the scale (play rate) pin on the node and adjusting the entire BS with a fixed value or a variable.

So yes, you could edit the animation data to a fit-and-finish state :smiley:, or adjust the entire blend space to fine-tune for the desired speed, or even modify speeds at runtime. The best part is that no matter what you change the speed scale to, the feet will always stay on the ground, no matter the speed change.
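To make the arithmetic concrete (illustrative numbers and names only, nothing from the pack): the authored speed falls out of the root translation over the cycle, and the play-rate scale is just the ratio between the speed you want and the speed that was captured.

```cpp
// Sketch of the speed-scaling math for a root-motion blend space.
#include "Math/UnrealMathUtility.h"

float ComputeBlendSpacePlayRate(float AuthoredRootDistance, // units the root moves over one cycle
                                float CycleDuration,        // cycle length in seconds
                                float DesiredSpeed)         // gameplay speed in units/sec
{
	const float AuthoredSpeed = AuthoredRootDistance / FMath::Max(CycleDuration, KINDA_SMALL_NUMBER);
	// Feed the result into the blend space node's scale/play-rate pin: 1.0 plays
	// at the authored speed, 2.0 twice as fast, and the feet stay planted because
	// the root motion scales with the playback.
	return DesiredSpeed / FMath::Max(AuthoredSpeed, KINDA_SMALL_NUMBER);
}

// Example: a run cycle that covers 500 units in 1.0 s is authored at 500 u/s;
// to hit a gameplay speed of 600 u/s the play rate would be 600 / 500 = 1.2.
```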

  3. This is where the idea of concepts comes in, as RM is not limited by how much you can stuff into a single state machine but by how the current state can be changed, without arguments, from the EventGraph.

To give it a name, you could build an 8-way movement blend space that accounts for every direction change input from a joystick or the keyboard and drive all of your blend spaces through a Blend by Int (rough sketch a few paragraphs down). The blend space I would make would have idle at 0 and interpolate the direction changes in the EventGraph. The flow in any direction would be idle > start > walk > run > stop.

Since there really is no established best practice, I would suggest that if one wanted to use root motion for all animations, avoid the use of state machines; even using a simple Blend by Int, you can change a state without arguments, with the evaluation and modification already done in the EventGraph.

So, in the case of starts and stops, they could be just another 8-way set as part of yet another blend space that you switch to by setting the Blend by Int, and the animation migration would simply jump from one to the other as an absolute, not by an argument in the state machine that says "I'm sorry, I can't do that right now."
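Roughly, the EventGraph side of that Blend by Int could boil down to something like the following C++ sketch; the index mapping, thresholds, and names are mine, and the equivalent Blueprint is just a couple of nodes feeding a Blend Poses by Int:

```cpp
// Evaluate input and speed each frame and produce the integer index that a
// "Blend Poses by Int" node in the AnimGraph switches on.
// 0 = idle, 1 = start set, 2 = 8-way walk/jog/run blend space, 3 = stop set.
#include "Math/Vector.h"

int32 ComputeLocomotionIndex(const FVector& LocalInput, float GroundSpeed)
{
	const bool bHasInput = !LocalInput.IsNearlyZero();
	const bool bIsMoving = GroundSpeed > 10.f; // small dead zone, units/sec

	if (bHasInput && !bIsMoving)  return 1; // starting from idle
	if (bHasInput && bIsMoving)   return 2; // looping movement blend space
	if (!bHasInput && bIsMoving)  return 3; // stopping
	return 0;                               // idle
}
```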

Sorry for being wordy, but when it comes to the use of root motion, it being just another data set, there is no right or wrong way of doing things; it just works as an absolute, as in 1 + 1 = 2, compared to what is available in-place as yet another art asset hot-glued to a movement component.

Thanks guys, lots of useful information. Of course I still have questions :slight_smile: but I think I’ll first have to dig deeper myself and get more familiar with various animation tools that UE4 provides.