This FAQ includes content originally written for UDN.
Where are the retargeting settings to use humanoid?
- Humanoid retargeting was replaced by the IK Retargeter system in UE 5.0. You can read about it here.
How do we set up ragdoll limits?
- Ragdoll limits are set up via the physics asset. Documentation can be found here: Physics Asset Properties Reference.
How do we control mass and force end poses?
- Mass is also set the same way as ragdoll limits, via the physics asset. As far as end poses are concerned, this is done by driving the physics blend weight back toward the animation: Physics-Based Animation.
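As a minimal sketch of the blend-weight idea above (not engine code; the function name and the linear ramp are illustrative assumptions), the physics blend weight is driven from 1.0 (full ragdoll) back to 0.0 (full animation) over a recovery window:

```python
# Illustrative sketch, not UE API: ramp the physics blend weight back to the
# animation so the ragdoll settles into its end pose.

def physics_blend_weight(time_since_recovery_start: float,
                         recovery_duration: float) -> float:
    """Return the physics blend weight: 1.0 = full ragdoll, 0.0 = full animation."""
    if recovery_duration <= 0.0:
        return 0.0  # no recovery window: snap straight back to animation
    t = time_since_recovery_start / recovery_duration
    return max(0.0, 1.0 - t)  # linear ramp, clamped at full animation
```

In the engine this weight would feed the physics blend on the skeletal mesh each tick; any easing curve could replace the linear ramp.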
Is it possible to run any animation functionality after the physics update?
- This isn’t currently supported, although it is functionality that we’d like to add in the future. If you are looking to do something like applying an IK fix-up after physics, one option is to extract the target transforms directly from the event graph, but this will result in a one-frame delay.
In order to fully support this functionality, a post physics anim blueprint, similar to the existing post process anim blueprint but run after the physics step, would need to be added.
What are the best practices for handling banking and turn-in-place?
- There are many approaches, but one option is to have the capsule rotate with the camera and counter-rotate the mesh to keep the feet planted. For turn-in-place, trigger the transition once the offset between the mesh and the capsule exceeds a certain threshold.
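The counter-rotation above can be sketched as follows (purely illustrative, not engine code; the function name and the 90-degree threshold are assumptions):

```python
# Illustrative sketch: the capsule follows the camera, the mesh counter-rotates
# so the feet stay planted, and a turn-in-place fires past a yaw threshold.

TURN_THRESHOLD_DEG = 90.0  # assumed threshold for triggering the turn

def update_turn_in_place(mesh_yaw_offset: float, capsule_yaw_delta: float):
    """Accumulate the mesh counter-rotation; return (new_offset, should_turn)."""
    # Counter-rotate the mesh by however much the capsule rotated this frame.
    mesh_yaw_offset -= capsule_yaw_delta
    # Once the mesh lags too far behind the capsule, trigger the transition.
    should_turn = abs(mesh_yaw_offset) >= TURN_THRESHOLD_DEG
    return mesh_yaw_offset, should_turn
```

When the turn animation plays, it would rotate the mesh back toward the capsule and the offset would be wound back down to zero.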
What is the best way to handle transitions for complex movements, via conduits?
- Nesting functionality within hierarchical state machines tends to be the best solution. We rarely use conduits. State Machines allow for multiple transitions within a single frame, the default maximum being 3, so you don’t always need to set up direct transitions between states. You could pass through an idle State to go to the new State, for example.
How do we blend from one animation state to another without setting a specific transition in the state graph? Is it possible to create a transition “on the fly” and then take that one?
- No. But as mentioned above, you can flow through multiple transitions in a single update, which means you don’t have to create a transition between every pair of states. For example, idle can flow through jump, fall, and land in a single frame, so you don’t need idle-to-fall or idle-to-land transitions; idle-to-jump is enough.
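The multi-transition behavior above can be sketched as a tiny state machine that follows up to three satisfied transitions per update (illustrative only; the function and data-structure names are made up, and 3 mirrors the default maximum mentioned above):

```python
# Illustrative sketch: follow chained transitions within one update, so idle
# can pass through jump and fall without direct idle-to-fall edges.

MAX_TRANSITIONS_PER_UPDATE = 3  # the default maximum noted above

def advance(state, transitions, conditions):
    """transitions: {state: [(next_state, condition_name), ...]}
    conditions: set of condition names currently true."""
    for _ in range(MAX_TRANSITIONS_PER_UPDATE):
        for next_state, cond in transitions.get(state, []):
            if cond in conditions:
                state = next_state  # take the transition and keep flowing
                break
        else:
            break  # no satisfied transition; settle in this state
    return state
```

With transitions idle→jump→fall→land, a single `advance("idle", ...)` call can land three states away when all conditions hold.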
What is shared between entities with common skeletons? Are animations shared?
- If you have one AnimBP with all animations within it, they will all be loaded together. This is one benefit of Animation Blueprint Linking: animations within a linked graph are only loaded when that linked instance is loaded.
Control Rigs can also share skeletons. Using a Setup Event in the Control Rig, you can snap the controls to the skeleton joint locations in the case where the skeletons are compatible but have different proportions.
Should we create a Master Skeleton that contains all the bones ever required?
- You should create a Base Skeleton that has all the base hierarchy you will need and that you will never change. This base skeleton can share animations and still use different AnimBPs for each character. The base skeleton will become your master skeleton in Unreal as you import new Characters with additional bones. These additional bones for specific characters then get added and your base skeleton will grow. As long as the main body has the same hierarchy, the “leaf” joints will be added to the shared master skeleton in Unreal. You don’t have to create the master skeleton in Maya and have all your characters keep all the extra joints.
On import of a new character, you can select the existing main skeleton as the source, and new “leaf” joints will be added to the hierarchy. However, if you try to add a joint inside the existing hierarchy, such as adding an extra joint to the spine or neck, it will break compatibility and no animations will be shared. In that case, a new base skeleton will be created.
If we use a Master Skeleton will unused bones be evaluated at runtime or culled completely?
- Extra bones are not evaluated at runtime for characters that do not use them. There is no performance hit for including them in the master skeleton.
What does it mean that the Anim Blueprint is “templated”?
- A template AnimBP would be an agnostic, trimmed-down version of a full hero AnimBP. This means it’s a neutral graph with a reduced set of nodes, playing a default set of animations on a mannequin character. When starting a new hero, we would retarget that template to the new hero mesh, which gives us a fully featured, working character. Animators can then start replacing the default animations with custom ones, and custom abilities can be implemented, with locomotion already solved for.
How would we implement IK foot planting/wheel planting for multi wheeled vehicles/treads?
- Vehicles will typically do individual line checks for the wheels. Treads likely do the same. For legs, we would use IK bones parented to the root bone to describe the movement of the feet in ‘floor space’ (root bone representing the floor’s position and normal). The sample Mannequin skeleton shows off this approach.
We would project the feet onto that imaginary plane, and then do line traces from there against the real geometry. The difference between the trace hit location and the imaginary plane is an offset that we can apply to the feet, so their animations stay relative to the floor.
You can use prediction bones to make this less reactive and better anticipate rough terrain. That’s what we would use for human characters. Given that vehicles and treads are more mechanical, you could opt for a more procedural solution for them.
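The fix-up described above reduces to a simple difference (illustrative sketch only; `trace_ground` stands in for a real line trace, and the vertical-only offset is a simplifying assumption of the floor-space idea):

```python
# Illustrative sketch: offset = trace hit on real geometry minus the imaginary
# floor plane under the root bone, applied to the animated foot.

def foot_ik_offset(foot_pos, floor_height, trace_ground):
    """Return the vertical offset to apply so the foot sits on real geometry.

    foot_pos: (x, y, z) of the animated foot in world space.
    floor_height: z of the imaginary plane under the root bone.
    trace_ground: callable (x, y) -> hit z on the real geometry (stand-in
                  for an engine line trace).
    """
    x, y, z = foot_pos
    hit_z = trace_ground(x, y)      # line trace against the actual ground
    return hit_z - floor_height     # difference between hit and plane
```

Adding this offset to each foot keeps the authored animation relative to the floor while conforming to uneven terrain.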
What is your best practice for organization of Anim Blueprints with a large number of characters with potentially very different skeletons?
- This depends on which character setup you go with. One recommendation would be to keep the AnimBP in the same folder with the skeletal mesh it corresponds to. Further recommendations on project structure and naming conventions can be found here.
What would be the recommended method for organizing Animation Blueprints for large graphs? Would Linked Anim Instances be the best way?
- Linked Anim BPs are useful if you need to statically or dynamically link graphs. Montages are useful if you want to play short clips of animations, such as abilities or attacks.
In your experience what do studios’ pipelines look like for getting animation assets into Unreal Engine?
- There are many different options:
- Python Scripts
- JSON files containing asset or layout data, which are then executed in UE to import that data
- Shotgun ToolKit
- Datasmith + Visual Dataprep
- Other proprietary solutions
How are multi-part meshes and animation syncing supported?
- Multi-part meshes are supported via the Master Pose Component, or by using Copy Pose From Mesh to copy animation pose data to any child skeletal mesh component. When doing this, it’s important to ensure that the parent is ticked before the child to avoid using the previous frame’s transforms. See the documentation for more detail.
What’s the workflow to add attribute curves on the animation clips?
- Export them from the DCC within the animation clip.
Are there any changes planned for how root motion is handled?
- We don’t currently have any plans to change how root motion is supported.
What optimization options are there in the animation system?
- There are many; some of the larger features include Update Rate Optimization (URO), the Animation Fast Path, multi-threaded update and evaluation, animation compression, and Skeletal Mesh LODs.
If the animation update is threaded, how is that threading handled?
- Animation is threaded, except in the case of root-motion-driven character movement (as this updates and evaluates animation ‘in-line’ inside character movement’s tick). The system uses the task graph to run graph update, graph evaluation, and a few other tasks on worker threads. Basically, if work can be sandboxed enough (i.e. not general Blueprint execution), it is run on a worker thread.
In what order do the evaluations occur from tick to animation update to animation applying?
- At a high level (GT = game thread, WT = worker thread):
- GT - BlueprintUpdateAnimation (event graph) & NativeUpdateAnimation
- GT/WT - Graph update
- WT - Graph evaluate
- WT - Interpolation/curve update
- GT - Notify dispatch
- GT/WT - Physics update
- GT - Buffer flip
What is the data flow in the animation system? Where and when do we have access to what?
- The SkeletalMeshComponent always holds the most recent pose. So until an animation update occurs, that is the last frame’s pose; afterwards, it is the current frame’s pose. An animation update is split into two parts:
- Update the graph hierarchy (tick graph from root to leaves to set weights and advance time).
- Evaluate (from leaves back to root). Here the pose flows through each of the nodes to produce the final output.
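The two-phase pass above can be sketched with a toy node tree (illustrative only; `Node`, `update`, and `evaluate` are hypothetical stand-ins for anim graph internals):

```python
# Illustrative sketch: update walks root-to-leaf (weights, time advance),
# evaluate walks leaf-to-root (the pose flows up to the output).

class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def update(node, order):
    order.append(node.name)          # root to leaves: set weights, advance time
    for child in node.children:
        update(child, order)

def evaluate(node, order):
    for child in node.children:      # leaves back to root: pose flows upward
        evaluate(child, order)
    order.append(node.name)          # this node produces its pose last
```

Running both passes on an output→blend→(idle, run) tree shows the mirrored traversal orders.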
What data within the level can you safely look at or modify in your own AnimGraphNodes?
- You can perform queries on the scene, but actor/component data should not be accessed from Anim Nodes. Anim Nodes should only operate on data in the FAnimInstanceProxy.
We’re using the Update Rate Optimization (URO) to reduce the tick frequency of our characters but that doesn’t seem to make a noticeable difference to game thread performance. Where in a Razor CPU capture should we be expecting to see savings?
- URO mainly reduces update and evaluation work, which happens on worker threads, not the game thread. RefreshBoneTransforms should take slightly less time when URO causes a skip, as we won’t set up the worker thread tasks.
Is there a way to put some blueprint script to run (on the client) after a Montage finishes?
- Yes, you can bind to a delegate to execute some logic after a Montage has executed. You can do this in the montage itself, or manually if you’d like. You can take a look at the PlayMontage blueprint node as an example of the delegates you can bind to. See UPlayMontageCallbackProxy for more.
What is the cost of using Morph Targets? Are there best practices on how much and when or where to use them?
- We don’t use them for performance reasons. Morph targets are heavy on the GPU and the performance cost scales with the number of blend shapes that need to be evaluated. Morph targets depend on the vertex indices being the same between blend shapes to properly interpolate, so mesh edits can be painful. In general, we recommend using Animation Pose Assets instead of Morph Targets.
How do you query animations that are not currently playing?
- There are exposed APIs that allow this. If you have a reference to an AnimSequence asset, you can access it and start looking at the data.