Anim Graph - Getting messy. How do I fix it?

Hi All

I am only 1/4 to 1/3 of the way through it and it is already getting messy. How do I make this clean?

I would recommend using Blend Spaces to merge some animations. For example, one usually uses a Locomotion Blend Space for idle, walk, run, strafe-left, strafe-right, and back. Then there is only one node where you plug in your forward and side speed, and it gives you the blended animation. I guess it also works with root motion if you are using it (I haven’t tried that part). You can have several Blend Spaces and transition between them (one for crouching, one for the normal state, one for ironsights, and so on). I would also recommend composing the jump animation from three parts: start, loopable in-air, and end; this is only relevant if your jumps can have variable durations. But if you have many states, the state graph will still get messy.
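As a minimal C++ sketch (the class name and the Speed/Direction variables are made-up names for illustration, not from the original post), a custom AnimInstance could drive the Blend Space inputs like this; you would expose the two variables in the AnimBlueprint and wire them into the Blend Space node’s input pins:

```cpp
// LocomotionAnimInstance.h - hypothetical AnimInstance feeding a locomotion Blend Space.
#pragma once

#include "CoreMinimal.h"
#include "Animation/AnimInstance.h"
#include "GameFramework/Pawn.h"
#include "LocomotionAnimInstance.generated.h"

UCLASS()
class ULocomotionAnimInstance : public UAnimInstance
{
	GENERATED_BODY()

public:
	// Forward speed of the owning pawn; plugged into the Blend Space's speed axis.
	UPROPERTY(BlueprintReadOnly, Category = "Locomotion")
	float Speed = 0.f;

	// Signed angle (-180..180) between velocity and facing; plugged into the direction axis.
	UPROPERTY(BlueprintReadOnly, Category = "Locomotion")
	float Direction = 0.f;

	virtual void NativeUpdateAnimation(float DeltaSeconds) override
	{
		Super::NativeUpdateAnimation(DeltaSeconds);

		if (const APawn* Pawn = TryGetPawnOwner())
		{
			const FVector Velocity = Pawn->GetVelocity();
			Speed = Velocity.Size2D();
			Direction = CalculateDirection(Velocity, Pawn->GetActorRotation());
		}
	}
};
```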

Another organizational tool is animation montages. For example, we use an animation montage for every attack/ability. You can then use it to override the default motion completely, or just parts of it. A typical setup is to have montages that only change the upper body of the character and montages that change the motion completely. With the multi-montage system, you can also have more than one montage playing at the same time. For example, auto swings/punches could animate only the upper body, still allowing you to move around freely (the legs would still be animated correctly), while abilities that require the whole character to move could override the entire animation.
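As a rough sketch of that split (class, slot, and montage names are assumptions for illustration), the character just plays a montage whose slot the AnimGraph layers onto the upper body, or onto the whole body for full-body abilities:

```cpp
// MontageCharacter.h - hypothetical character playing upper-body and full-body montages.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Character.h"
#include "Animation/AnimMontage.h"
#include "MontageCharacter.generated.h"

UCLASS()
class AMontageCharacter : public ACharacter
{
	GENERATED_BODY()

public:
	// Assigned in the editor. This montage uses a slot (e.g. "UpperBody") that the
	// AnimGraph blends in per bone from the spine up, so the legs keep playing locomotion.
	UPROPERTY(EditDefaultsOnly, Category = "Animation")
	UAnimMontage* UpperBodyAttackMontage = nullptr;

	// This montage uses a full-body slot placed after the locomotion pose,
	// overriding the whole animation while it plays.
	UPROPERTY(EditDefaultsOnly, Category = "Animation")
	UAnimMontage* FullBodyAbilityMontage = nullptr;

	void PerformAutoAttack()
	{
		if (UpperBodyAttackMontage)
		{
			// ACharacter::PlayAnimMontage forwards to the AnimInstance's Montage_Play.
			PlayAnimMontage(UpperBodyAttackMontage);
		}
	}

	void PerformAbility()
	{
		if (FullBodyAbilityMontage)
		{
			PlayAnimMontage(FullBodyAbilityMontage);
		}
	}
};
```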

The third possible organizational trick is to run some animations additively on top of your animation graph. For example, if the jog and ironsight jog are almost the same, with the ironsight version only moving the hands a bit more, you could composite that part with a weight in the AnimGraph. As a bonus, you can use the weight to smoothly transition into ironsight mode.
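Extending the hypothetical AnimInstance sketch above, that weight could simply be a float you interpolate each update and wire into the Apply Additive node’s alpha pin (the variable names here are again just illustrative):

```cpp
// Additions to the hypothetical ULocomotionAnimInstance above.

// Set from gameplay code when the player aims down sights.
UPROPERTY(BlueprintReadWrite, Category = "Aiming")
bool bWantsIronsights = false;

// Wired into the Apply Additive node's alpha pin in the AnimGraph.
UPROPERTY(BlueprintReadOnly, Category = "Aiming")
float IronsightWeight = 0.f;

// ...inside NativeUpdateAnimation, after the locomotion update:
// Smoothly blend the additive ironsight pose in and out.
const float Target = bWantsIronsights ? 1.f : 0.f;
IronsightWeight = FMath::FInterpTo(IronsightWeight, Target, DeltaSeconds, 10.f);
```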

I hope this helps!

Ultimately you want to have a basic State Machine for locomotion like the one set up in Shooter Game, which I recommend as a reference for how to set up your AnimBlueprint (although it is a C++ project, which makes it harder to see how the variables are being driven):

Then in the AnimGraph you use Blend Poses by Bool nodes to blend between the upper body animations for whatever weapon the character is wielding (for instance) and the lower body that is transitioning between the various states of the State Machine:

You can also have several State Machines, such as one for locomotion and another for different upper body animations, and reference them in the AnimGraph pictured above.

Addendum: this saves you from all the extra states for each combination such as run + ironsights and walk + ironsights, by just layering the “ironsights” upper body animations over the lower body animations from the State Machine.
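One way to drive that selection, sketched under the same assumptions as the AnimInstance above (the character class and the IsAiming accessor are hypothetical), is a bool read from the owning character each update and fed into the Blend Poses by Bool node for the upper body:

```cpp
// Sketch: driving the upper-body blend from the owning character.
// "bIsAiming" and "AShooterStyleCharacter" are made-up names; in the AnimGraph the
// bool selects the ironsights upper-body pose, layered over the lower body that
// keeps coming from the locomotion State Machine.

UPROPERTY(BlueprintReadOnly, Category = "UpperBody")
bool bIsAiming = false;

// ...inside NativeUpdateAnimation:
if (const AShooterStyleCharacter* OwningCharacter = Cast<AShooterStyleCharacter>(TryGetPawnOwner()))
{
	bIsAiming = OwningCharacter->IsAiming(); // hypothetical accessor on the character
}
```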

How does one do that?

While we’re kinda on the topic, I noticed every example I see goes from jump_end back to idle. I have messed with this, and it just does not produce a clean animation if you are running when you hit jump… when you land, the feet settle into idle and you ‘slide’ across the ground before your character starts running again. I was curious if you could add a transition from jump_end to running and just do something like this for it:

Of course I tried it and do not see a difference… one transition from jump_end to idle and one from jump_end to run… it seems that if you are running it should work, unless the jump for some reason clears the isrunning variable.

This is achieved at a somewhat higher level, so I’ve requested some input from one of our developers. However, due to their busy schedule, I cannot guarantee a response. Perhaps another user has experience with this workflow?

In the Idle state you would transition to an “Idle to Run” Blend Space, so that if the character is in motion, he will transition to a run animation instead of sliding across the ground in the idle animation. This setup is shown here:

https://www.youtube.com/watch?v=7b9WM8TVdpA&list=PLZlv_N0_O1ga0IoRrpI4xkX4qmCrhGu56&index=7

Adding to the conversation: movement systems really have not changed all that much in design over the years. They have always depended more or less on some form of state machine, which tends to grow to the point where it collapses in on itself, making it difficult to debug or to allow for any form of runtime error correction. If you see a “glitch” in the animation flow, it’s almost a given that the error is nested inside some state whose exit condition can never be satisfied.

So personally, as part of the design, I’m not a big fan of state machines. They demand more and more conditions to get out of one state and into another, and the rules about when, within a state, the animation is allowed to leave create a level of player-movement latency that compounds as each state adds its own restrictions on when it can exit.

So that’s the state :wink: of what I would call old-school tech that generally gets defaulted to because it’s familiar and, good or bad, it just works.

With Unreal 4, though, as an engine in a constant state of improvement, the question in my mind is which feature additions would be considered next-gen as opposed to features added just to make things work. The addition of blend spaces, per-bone blending, and additive as well as absolute animation makes it possible to construct your AnimGraph so that it’s 100% data driven.

100% data driven =

Highly reactive control input, as the action does not have to wait for a condition to be fulfilled.
Can dynamically expand to meet need as well as impulse.
Much easier to implement error correction, as the flow cannot get locked into a state machine with no way out.
Can easily be adapted to any form of input control, including VR.
Easier to “read” the AnimGraph if you have to pass the work on to someone else.

The list can go on and on, but in my opinion, if you’re thinking next-gen tech, there is no need for a state machine; with the above-mentioned features in UE4, best practice would be to avoid using them whenever possible, as the usual result is what you described: something messy that needs to be rebuilt with each inspired addition.

To give you some “ideas” as to where to start, I have done a few purely design-theory videos, more or less to prove to myself that the concepts do work. They’re not so much an effort to tell you how to do it as to make it a bit clearer how things should flow from input to output.

And this is what really got me thinking:

Overall, think blend spaces, as they blend. The only thing that would make them better than a state machine is if a feature were added that would lerp across them, like an aim offset, without having to build a black box. :wink: