Unreal Engine Livestream - Character Animation in UE4 - Jan 25 - Live from Epic HQ

Hey all,

Here is a follow-up to the questions that were asked that I didn’t get to on the stream. Hopefully these will answer your questions; if not, we’ll do a more advanced version of this stream in the future:


WB: Hopefully answering all three questions with this response. The Blend Depth setting on the Layered Blend Per Bone node determines how far down the chain of bones you want to start distributing the blend from.

For example, if we use the following Blend Depth on our Layered Blend Per Bone node:

LayeredBlend01.png


It will take the 4th bone in the chain and distribute the blend among the bones back up the chain.


The blend is strongest at the Neck, weaker moving down through Spine 2 and Spine 1, and weakest when reaching the Spine bone. That is why there is less twist in the animation with a higher Blend Depth setting (and more twist with a lower setting, since 100% of the blend happens at the Bone Name we specify).

https://www.youtube.com/watch?v=ap7r0PTcHXY
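The ramp described above can be sketched in plain C++. This is only an illustration of the idea, not Epic's actual implementation; the linear falloff and the function name are assumptions:

```cpp
#include <algorithm>
#include <vector>

// Illustrative sketch only (not the engine's exact falloff math): a Blend
// Depth of N ramps the layered blend weight up along a bone chain, so the
// Bone Name specified gets the smallest share and the Nth bone blends fully.
std::vector<float> BlendWeightsAlongChain(int NumBonesInChain, int BlendDepth)
{
    std::vector<float> Weights(NumBonesInChain);
    for (int i = 0; i < NumBonesInChain; ++i)
    {
        // Linear ramp: full weight from bone index (BlendDepth - 1) onward.
        Weights[i] = std::min(1.0f, static_cast<float>(i + 1) / BlendDepth);
    }
    return Weights;
}

// Chain Spine -> Spine1 -> Spine2 -> Neck with Blend Depth 4:
// BlendWeightsAlongChain(4, 4) returns {0.25, 0.5, 0.75, 1.0}
// With Blend Depth 1, 100% of the blend happens at the Bone Name:
// BlendWeightsAlongChain(4, 1) returns {1.0, 1.0, 1.0, 1.0}
```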

While you can’t do a 50/50 split at the Hips in this case, since that is the root bone, there is a way to negate the effects of the Layered Blend Per Bone node: enter a negative Blend Depth and specify which Bone or chain of Bones to ignore.

For example, if we want the hips (pelvis) to follow the attack, we can add both legs to the Branch Filters and set their Blend Depth values to -1. This negates the blend on the chain of bones starting from the Bone Name specified.

In this example, we add the Bones for the legs and set the Blend Depth to -1 so that the hips follow the attack better. You will have to experiment with different settings to achieve the effect you want, but it’s already an improvement over the previous setup.

https://www.youtube.com/watch?v=CGPez5NlT5I
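Here is a minimal, self-contained C++ sketch of the Branch Filter idea. The toy skeleton, bone names, and filter logic are simplified assumptions, not the engine's code; the point is that a negative Blend Depth zeroes the layered blend for the named bone and everything beneath it:

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical stand-in for a Branch Filter entry on the node.
struct BranchFilter { std::string BoneName; int BlendDepth; };

// Toy skeleton: child -> parent.
std::map<std::string, std::string> Parents = {
    {"thigh_l", "pelvis"}, {"calf_l", "thigh_l"},
    {"thigh_r", "pelvis"}, {"calf_r", "thigh_r"},
    {"spine_01", "pelvis"},
};

// True if Bone is Ancestor or sits somewhere below it in the hierarchy.
bool IsInChainOf(const std::string& Bone, const std::string& Ancestor)
{
    std::string Cur = Bone;
    while (true)
    {
        if (Cur == Ancestor) return true;
        auto It = Parents.find(Cur);
        if (It == Parents.end()) return false;
        Cur = It->second;
    }
}

// A filter with a negative Blend Depth negates the layered blend for the
// named bone's chain, so those bones keep the base-layer pose.
float LayerWeight(const std::string& Bone, float BaseWeight,
                  const std::vector<BranchFilter>& Filters)
{
    for (const BranchFilter& F : Filters)
        if (F.BlendDepth < 0 && IsInChainOf(Bone, F.BoneName))
            return 0.0f;
    return BaseWeight;
}
```

With filters `{"thigh_l", -1}` and `{"thigh_r", -1}`, the leg bones get zero layer weight and follow the base locomotion, while the spine still receives the attack layer.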

WB: The performance impact of cached poses is negligible compared to additional State Machines. Having to evaluate multiple State Machines is more expensive than taking a reference pose from the evaluation that already occurred on a State Machine.
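The cost difference can be pictured with a toy sketch (hypothetical types, assuming evaluation is the expensive step): pulling a cached pose twice pays for one evaluation, while evaluating a second source pays the full cost again.

```cpp
// Toy sketch with hypothetical types, not the engine API. EvalCount stands
// in for the expensive work a real State Machine evaluation would do.
struct Pose { float Value; };

struct StateMachineSource
{
    int EvalCount = 0;
    Pose Evaluate() { ++EvalCount; return Pose{1.0f}; } // expensive per call
};

struct CachedPose
{
    StateMachineSource* Source;
    bool bCached = false;
    Pose Cache{0.0f};
    Pose Get() // evaluates the source at most once, then reuses the result
    {
        if (!bCached) { Cache = Source->Evaluate(); bCached = true; }
        return Cache;
    }
};
```

Two consumers calling `Get()` on the same `CachedPose` trigger one evaluation; two independent sources evaluated separately cost two.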

WB: The pro is better visualization: you can see the flow of each State Machine more clearly. More State Machines, however, mean more depth and more evaluation happening.

WB: Montages cannot be placed in AnimGraphs. I probably could have had the Character Blueprint call an event inside the Event Graph of the Anim Blueprint to play the montage instead. You could, of course, skip using montages altogether and use the State Machine to handle combat animations; it’s a matter of preference. In the stream, I discussed why it may be advantageous to use Montages for combat and let the State Machine handle systemic animation logic.

WB: You could layer that on top of an existing animation. You may also want to look at using Pose Assets to handle facial animation.

WB: Say they all affect multi-directional navigation; for example, a broken leg makes the character limp, or a hurt arm makes the character hold it differently. I’d probably use blend spaces for the navigation and layer the conditional animations on top of the upper body or lower body. You could also set up blend spaces for each of your conditions and selectively choose one of them based on those conditions. Maybe we’ll cover this in a future stream as well; I’ll need to dig into it more.
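One way to picture the "blend space per condition" suggestion is this small sketch. All names, the 1D layout, and the selection logic are hypothetical; real UE4 Blend Spaces do much more:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Illustrative only: a 1D blend space as samples along a Speed axis.
struct Sample { float Speed; std::string Anim; };

struct BlendSpace1D
{
    std::vector<Sample> Samples; // sorted by Speed

    // Finds the two neighboring samples for Speed and the alpha between them.
    void Evaluate(float Speed, std::string& A, std::string& B, float& Alpha) const
    {
        for (std::size_t i = 0; i + 1 < Samples.size(); ++i)
        {
            if (Speed <= Samples[i + 1].Speed)
            {
                A = Samples[i].Anim;
                B = Samples[i + 1].Anim;
                Alpha = (Speed - Samples[i].Speed) /
                        (Samples[i + 1].Speed - Samples[i].Speed);
                return;
            }
        }
        A = B = Samples.back().Anim;
        Alpha = 0.0f;
    }
};

enum class Condition { Healthy, LimpingLeg };

// Pick a whole blend space per condition, then sample it by speed as usual.
const BlendSpace1D& SelectBlendSpace(Condition C, const BlendSpace1D& Normal,
                                     const BlendSpace1D& Limp)
{
    return (C == Condition::LimpingLeg) ? Limp : Normal;
}
```

Sampling the healthy space `{0: Idle, 300: Walk, 600: Run}` at speed 150 blends Idle and Walk at alpha 0.5; when the condition flips, you sample the limp space with the same speed input.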

WB: I’d say the latter; you could use a Run on Server event to call a Multicast event that plays the Anim Montage.

WB: Would you happen to have screens/video of the issue, or a test case I could follow to reproduce it on my end to see what the problem is?

WB: Similar to the previous question, would you be able to provide some test assets for us so that we may diagnose the issue on our end?

WB: We can look into this. If not as a video tutorial series, perhaps we can cover it in a future stream. I’d like to get Laurent to do one so I can learn from him as well! :)

WB: I personally do not have a best practice for this just yet; however, I’m hoping that in future releases, when we adopt some of the techniques being used in Paragon, we’ll have better features to handle this.

Take a look at the video I talked about from Laurent here, starting at the 12:00 mark, where he talks about Movement Prediction, Distance Curves (reverse Root Motion), and Speed Warping. I don’t have a timeline for when these features will be available, however.