Blending animations for a clickable cockpit

Hi

I am working on a clickable cockpit simulation. The aircraft is loaded into my actor as a skeletal mesh imported from a glTF file.
I have had some success using Layered Blend per Bone, but it is quickly proving rather impractical for a complex aircraft with intricate landing gear and numerous cockpit controls and indications.

What would be the quickest and cleanest way to combine all my animation sequences, which are scrubbed by variables, into the final pose?

Do you mean you have a 3D model animating around the cockpit (like an arm and a hand) moving towards all the buttons and gear?

The cleanest way is custom animations for all buttons. This might be too slow to be practical (move from the default position, move to the button, move back to default, move to the next button). When flying fast aircraft you are all over the place with both hands at once just to land instead of crashing. I’m a sim player myself.

The flexible, fast, but less accurate way? Do IK (inverse kinematics) for the entire setup; those are procedural animations.

Basically you create a queue of pressed buttons and let IK figure out the positions to go to one by one. On arrival, you’d play an animation on the hand only, like hitting a switch.

The problem with IK is that you are also dealing with a ton of collisions in a cockpit that IK would have to move around instead of through. Like it would look bad if you go through all the effort to have a realistically moving arm and that arm is moving through the flight stick to flip a switch in front of it.

You can set up IK to avoid positions and solve a way around them, I suppose, but that’s rather complex. You could build a 3D grid beforehand of cubes that are “air” and cubes that are “collision”.

To figure out what’s air and what is collision you can do a flood fill, which works in both 2D and 3D. Storing that otherwise extremely complex collision check of the cockpit as a few cubes makes for quick IK calculations.
Basically, if you path through a few cubes of “air”, you can then smooth out a curve from A to B and feed that to IK over time.
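A minimal sketch of that grid idea in plain C++ (no engine types; the grid resolution, the 0/1/2 cell codes, and the assumption that collision cells were pre-marked by overlap tests are all just for illustration):

```cpp
#include <array>
#include <queue>
#include <vector>

constexpr int N = 32;            // grid resolution per axis (assumed)
using Grid = std::vector<char>;  // 0 = unknown, 1 = air, 2 = collision (pre-marked by overlap tests)

inline int Index(int x, int y, int z) { return (z * N + y) * N + x; }

// Flood fill outward from a seed cell known to be empty air (e.g. near the
// pilot's head). Every reachable "unknown" cell becomes air; anything the fill
// never reaches is treated as collision for the later IK path queries.
void FloodFillAir(Grid& grid, int sx, int sy, int sz)
{
    std::queue<std::array<int, 3>> open;
    open.push({sx, sy, sz});
    while (!open.empty())
    {
        const auto [x, y, z] = open.front();
        open.pop();
        if (x < 0 || y < 0 || z < 0 || x >= N || y >= N || z >= N)
            continue;                    // outside the grid
        char& cell = grid[Index(x, y, z)];
        if (cell != 0)
            continue;                    // already visited, or solid cockpit geometry
        cell = 1;                        // mark as air
        open.push({x + 1, y, z}); open.push({x - 1, y, z});
        open.push({x, y + 1, z}); open.push({x, y - 1, z});
        open.push({x, y, z + 1}); open.push({x, y, z - 1});
    }
}
```

Pathing from hand to button through “air” cells and then smoothing that into a curve, as described above, would only need this coarse grid instead of the full cockpit collision.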

For the IK solution, I’d know how to do that for the hand itself and let the arm (elbow) IK to the hand on its own, but a direct solution for the entire arm doesn’t come to mind. That’s part of the IK complexity.

Most likely, especially because it’s a sim, you don’t want to use a skeletal mesh of the person in the cockpit for gameplay.

There’s also the option to just render the hand and have the arm invisible.

No animated arm or anything like that. The cockpit is meant to be empty of the pilot body.

The issue is specifically with the interactive parts of the aircraft. For example, if I use Layered Blend per Bone, it doesn’t allow the gear lever to move through its full range of motion when there are other animation Sequence Evaluators connected.

Ah, right! Can you post the animation blend setup of the animation blueprint in a screenshot, and maybe a video of the result? The Windows Snipping Tool or GifCam can do that.

Best to look at one lever at a time. If I understand correctly, your blend is causing a result other than the 1:1 animation you intend on parts of the cockpit.

Since it’s all clickable, what about the levers? In a flight sim a lot should be draggable to exact positions, not matching a pre-made animation (unless you somehow snap to a position in time on that animation).

It is meant to snap to that animation on a click.

It’s not just the levers that I’m concerned about, it’s the whole aircraft’s animations. In this setup, I have the left main gear and the gear handle animation Sequence Evaluators blending.

Attempting to directly control the node transforms in code would be too much effort, which is why we usually do pre-made animations in the Max file; if we were to drag, we would use the mouse drag output to scrub the animation by variable instead of letting UE5 play the animation over time.

Sadly I don’t know about animation Sequence Evaluator nodes, but I can take a look. My animation blueprint knowledge comes mainly from studying the free ALS plugin in BP and C++. I’d have to dive into that node. Are those nodes supposed to connect how separate parts of the airplane animate in relation to each other? Something tells me that a blend node would be wrong for that purpose, but I don’t know enough about this one :slight_smile: .

I’d probably store a lever “position” (or time in the animation between min and max) as a float value, and read that float value on a part of the plane to interpolate an animation state for that specific part over the duration of its own animation sequence. Basically comparing one curve to another, or one animation to another. (Yes, it’s code-based.)
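Something like this minimal sketch is what I mean, assuming the lever position is normalized to 0..1; the struct and names are made up for illustration, not engine API:

```cpp
#include <algorithm>

// One lever stores a normalized position; each animated part maps that alpha
// onto the length of its own animation sequence to get an evaluation time.
struct ScrubbedPart
{
    float SequenceLengthSeconds = 1.0f;  // length of this part's sequence

    // 0 = lever at its minimum stop, 1 = lever at its maximum stop.
    float ExplicitTime(float LeverAlpha) const
    {
        return std::clamp(LeverAlpha, 0.0f, 1.0f) * SequenceLengthSeconds;
    }
};
```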

I’d blend if I want to get an averaged result of things instead of a pre-defined animation over time.

Next I wonder how much you could avoid code-driven animation. In a true simulation you also have to deal with external forces. On the airplane (code, animation) level you could deal with the gear freezing, or gear misbehaving (even separating) at high speeds, for example. But when you get to external forces you can’t (or shouldn’t) store all of those in the animation system of the airplane itself. They’d be data stored on external classes (weather, atmosphere, temperature) or even local (airplane damage). Probably… it’s more code-driven than you expect.

Some time ago I worked on a Pawn Movement Component implemented as 3 parts, a reimagination of the character movement component. The 3 are: “pilot”, “movement behavior”, and “Unreal behavior” (setting position, depenetration, sliding, etc.). Basically the pilot is aware of the plane’s movement behavior and sends a request to get towards an intended result. External forces would act on the “movement behavior”, because the component is set up to listen to its own and external forces (observer pattern, working with delegates). The animation system would pull data from this system to generate its animations. In a non-sim (just any arcade game) you can get away with pre-made animations, but the more you move towards sim, the more it becomes procedural. You’d still use raw data like curves or floats (like setting up data tables with plane movement behaviors defined over X number of properties (floats, curves, etc.)), but use animation sequences much less.
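A rough sketch of that split in plain C++, with std::function callbacks standing in for the delegates; all names here are made up for illustration, it’s not the actual component:

```cpp
#include <functional>
#include <vector>

struct Vec3 { float X = 0, Y = 0, Z = 0; };

class PlaneMovement
{
public:
    // External systems (weather, damage, ...) register themselves as force
    // providers instead of writing into the animation system directly.
    void AddForceProvider(std::function<Vec3(float /*DeltaTime*/)> Provider)
    {
        ForceProviders.push_back(std::move(Provider));
    }

    // The "pilot" layer sends an intent; the "movement behavior" layer resolves
    // it together with the external forces; an engine-facing layer would then
    // apply the result (position, depenetration, sliding).
    Vec3 Tick(float DeltaTime, const Vec3& PilotIntent)
    {
        Vec3 Total = PilotIntent;
        for (auto& Provider : ForceProviders)
        {
            const Vec3 Force = Provider(DeltaTime);
            Total.X += Force.X; Total.Y += Force.Y; Total.Z += Force.Z;
        }
        // The animation system would pull from data like this rather than own it.
        return Total;
    }

private:
    std::vector<std::function<Vec3(float)>> ForceProviders;
};
```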

In some (very much) simpler cases you’d be better off playing and reversing an animation montage on request (the click). I’ve always wanted to avoid animation montages and integrate all logic within the same state tree of the animation blueprint, but that led to (engine) bugs.

Edit: the ALS plugin does use these nodes, so it might be a valuable reference to check whether you’ve done the implementation right and whether they do what you intend:

Link Github

Link FAB

I built my own branch of it, ported from 4.27 (my C++ version), so if they changed anything, I still have the stuff I posted in the screenshot.

The Sequence Evaluator nodes essentially allow me to control the position of the animation scrub with a variable. If I pass keyframe / framerate as a variable to the Explicit Time input, I can move the part as I wish. E.g. let’s say I have an airspeed indicator that indicates 0 to 800 kts, animated in the Max file so that the needle points to each 100 kt increment every 10 frames. That way I can take my indicated airspeed variable, divide it by 10 to get the keyframe position, and pass it through the keyframe/framerate calculation to get the explicit time, which sets the needle to the desired indication.
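In code, that mapping is roughly the following; the 30 fps sequence frame rate is just an assumption on my end, only the 10 frames per 100 kts comes from the animation itself:

```cpp
// Maps indicated airspeed to the explicit time fed to the Sequence Evaluator.
float AirspeedToExplicitTime(float IndicatedAirspeedKts,
                             float FramesPerHundredKts = 10.0f,
                             float SequenceFrameRate = 30.0f)
{
    // 0..800 kts maps to keyframe 0..80 when each 100 kts spans 10 frames.
    const float Keyframe = (IndicatedAirspeedKts / 100.0f) * FramesPerHundredKts;
    // keyframe / framerate is the explicit time that positions the needle.
    return Keyframe / SequenceFrameRate;
}
```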

That’s why pre-made animations in the Max file are preferable to direct control (although I could add extra code-driven layers on top of that): the non-linearity of some instruments can easily be calibrated by eye, without constant trial-and-error adjustments in code. They also allow us to group multiple animated parts together in one single sequence, especially if there is hierarchy linking going on.

So to answer the first question, an Animation Sequence asset defines how a group of parts are animated, and the Sequence Evaluator node allows me to set the keyframe slider for that animation at any point I wish by variable instead of letting the engine play it by time.

Blending does seem to be wrong for what I want to do. It’s closer to Additive animation because I want to combine the total result of multiple independent animation sequences on the same Skeletal mesh that makes up my aircraft model, not interpolate between one and another.

Unfortunately, trying to use Apply Additive nodes results in the model flying apart into pieces.

This is what I thought. Does that ALS animation blueprint give insights?

Nothing so far. It still looks like just playing animations, whereas my desired setup is one where the animation sequence is paused but constantly scrubbed by a variable to move the keyframe slider to any position.

I suppose blending / blend spaces etc. are just not for moving parts individually like that (they create an average). Odd that I’ve never run into this case :slight_smile: I doubt I’ll be of help.

Blend Spaces in Unreal Engine | Unreal Engine 5.5 Documentation | Epic Developer Community

You’d think each part gets its own local blend space between state A and state B, taking X as the interpolation alpha, and that this alpha could be communicated between parts to drive their own local blend spaces properly. I just haven’t worked with animations enough to come up with a screenshot of that in the animation blueprint.

This is starting to sound like I will have to write my own plugin that pulls individual bone transforms from animation sequences and applies them as I wish.

:face_savoring_food: Hope someone else takes a look at this post and can tell us what we’re missing.

It seems they made this impossible. Everywhere I look, the very idea of non-time-controlled poses for animation sequences seems to be deliberately closed off.