Is there a way to ride/layer (and simultaneously control) additional skeletal animations and/or morph targets in the Aim Offset interface (the one with the Record button)?
and
Is there a way to get synced audio to play when recording an Aim Offset (perhaps even adding successive tracks the way I do in 3ds Max, as explained below)?
I have been a character animator for 30+ years, and two tricks I know to massively increase ‘believability’ (as opposed to ‘realism’, as defined by Ed Hooks, author of ‘Acting for Animators’) are:
to have the eyes ‘lead’ the head, arriving at the point of interest early, and
to have the character blink when they turn their head (beyond a certain obvious minimal speed), opening the eyes (naturally…) just before arriving at the point of interest…
I am able to do this very effectively using 3ds Max and its ability to ‘wire parameters’, along with the oft-overlooked (or outright forgotten) but very powerful multi-hardware (MIDI, mouse, Wacom pen, joystick… 4D mouse)
‘Motion Capture’ utility…
Which puts both the HUMAN and the ARTIST back in the director’s chair; no rotoscoping, no video capture (or, God forbid, an AI ‘emotion algorithm’…)
(And there is a timed, synced soundtrack playing every time you push Record, which helps enormously with building up elaborate, multi-step, timed acting performances in what is usually dead-looking NPC dialog!)
I even wrote a step-by-step, illustrated book/tutorial about exactly how to accomplish this, called:
‘Performance Capture: Artist-Driven Motion Capture in 3ds Max’
(I’d be happy to share a copy… especially for some help.) The input capture tool can be used to animate ANYTHING (even a light’s brightness), and with multiple parameter tracks… endless power…
I sure would like to emulate that realtime recording system, with granular control over the eyes (and blink morph targets (or animation clips), and squints and smiles and so on). Doing all that in Unreal would save me having to import everything from 3ds Max.
If you want to see the result, look at pretty much any of the animated characters on my YouTube channel; I recently did animated MetaHuman tests with this technique.
I think you might be looking for an inverse kinematics solution, applied after the base animations, to control eye rotation. This is similar to positioning feet on changing landscapes or moving hands toward a wall surface.
Just change the order of operations in the Animation Blueprint.
You can have things happen before or after an aim offset by simply injecting them before or after the Aim Offset node…
You can add curve parameters to any morph target in order to override the original value.
You can even pull the original value and add to it with some work.
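For instance, here is a minimal C++ sketch of the “pull the original value and add to it” idea. It assumes an ACharacter-derived class; the curve name BlinkOverride and the morph target EyeBlink are hypothetical placeholders for whatever your asset actually uses, and depending on your setup an animation-driven curve may already be writing to the same morph:

```cpp
#include "GameFramework/Character.h"
#include "Components/SkeletalMeshComponent.h"
#include "Animation/AnimInstance.h"

// Sketch only: AMyCharacter is a hypothetical ACharacter subclass.
void AMyCharacter::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    USkeletalMeshComponent* MeshComp = GetMesh();
    UAnimInstance* Anim = MeshComp ? MeshComp->GetAnimInstance() : nullptr;
    if (!Anim)
    {
        return;
    }

    // Read a float curve authored on the playing animation ("BlinkOverride").
    const float CurveValue = Anim->GetCurveValue(TEXT("BlinkOverride"));

    // Pull the current morph weight and layer the curve on top, clamped to [0,1].
    const float Original = MeshComp->GetMorphTarget(TEXT("EyeBlink"));
    MeshComp->SetMorphTarget(TEXT("EyeBlink"),
                             FMath::Clamp(Original + CurveValue, 0.f, 1.f));
}
```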
In theory, because the end result is packed back into the same morph target at runtime, you should be able to re-record the performance to get a clean animation with the overridden values.
PS: I really don’t see what you mean by recording an aim offset; it’s usually just a set of 8+ extreme poses from which the final pose is calculated based on an axis value.
Below the Aim Offset utility, there is a red-dot Record button. When I click Record, I can make the character ‘act’, looking this way and that (i.e., watching a car pass, or an explosion in the sky: he looks up!). Then I click Stop, and it is an animation file! So, if I devise a way to have the eyes lead the head, that can be a great ‘looking track’ (I will actually be illustrating this this week on my channel: youtube.com/NextWorldVR), where I go to the opposite extreme (as discussed) and animate the head AROUND the eyes, which remain centered (which makes for an excellent head-bobbing motion while conversing, talking, acting, etc.). I guess if I can get that part, I can maybe combine the two as further Aim Offsets, or? I am just getting into this part of Unreal Engine (having written a book about this exact technique, multi-track motion capture and ‘anti-eyes’, for 3ds Max!)
(If you want a copy, I give them away for free, fully illustrated and step-by-step, with project files!)
An animation track is data.
Personally I would never resort to using the engine to record or clean up data.
I know some people do (I don’t see why or how, since it’s not clean by any means, not even on quick stuff).
That said, you would be better off using the Take Recorder to animate the movement procedurally.
Meaning.
You add a “look at” node which drives the direction, and you make that direction rotate your eye bones first, with limits, then apply the head aim offset a frame or two later (or use game time).
Since you script the timing of the function, say with a Timeline node, you’d always get very similar lead results, or exactly the same lead results (that may depend on FPS while recording).
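To make that concrete, here is a hedged sketch of the eyes-first, head-later timing inside a UAnimInstance subclass. Everything named here (UMyAnimInstance, LookTarget, EyeYaw/EyePitch, HeadYaw/HeadPitch) is a hypothetical member of that class, not engine API; in the AnimGraph you would feed the eye angles into Transform (Modify) Bone nodes and the head angles into the aim offset. It uses FInterpTo for the lag instead of a Timeline node:

```cpp
#include "Animation/AnimInstance.h"
#include "Kismet/KismetMathLibrary.h"

// Sketch only: UMyAnimInstance is a hypothetical UAnimInstance subclass with
// float members EyeYaw, EyePitch, HeadYaw, HeadPitch and an FVector LookTarget
// (a world-space point of interest set by gameplay code).
void UMyAnimInstance::NativeUpdateAnimation(float DeltaSeconds)
{
    Super::NativeUpdateAnimation(DeltaSeconds);

    const APawn* Pawn = TryGetPawnOwner();
    if (!Pawn)
    {
        return;
    }

    // Direction to the target, expressed relative to the actor's facing.
    const FRotator LookAt = UKismetMathLibrary::FindLookAtRotation(
        Pawn->GetActorLocation(), LookTarget);
    const FRotator Local = (LookAt - Pawn->GetActorRotation()).GetNormalized();

    // Eyes: clamped but unsmoothed, so they arrive at the target first.
    EyeYaw   = FMath::Clamp(Local.Yaw,   -40.f, 40.f);
    EyePitch = FMath::Clamp(Local.Pitch, -30.f, 30.f);

    // Head: interpolated toward the same angles, so it lags behind the eyes.
    HeadYaw   = FMath::FInterpTo(HeadYaw,   Local.Yaw,   DeltaSeconds, 6.f);
    HeadPitch = FMath::FInterpTo(HeadPitch, Local.Pitch, DeltaSeconds, 6.f);
}
```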
Once you have the take, you would export the animation out of the engine and clean it up as you normally would.
At the very least you’d eliminate some frames to smooth out the movement where needed. If needed.
And yeah, it’s better to use bones to animate the eyes than to use the blendshape.
But you can also do the math manually to convert the look direction into X and Y values to pass into the blendshape.
It’s a bit more involved. The positive is that the blendshape conversion leads you to clamp at 1 and 0, which automatically stops the eye from over-rotating…
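As a rough sketch of that manual conversion (the rotation limits and the left/right/up/down split are assumptions, modeled on common EyeLook-style morph pairs):

```cpp
#include "CoreMinimal.h"

// Map a local-space look rotation to [0,1] weights for four hypothetical
// eye morphs (look left/right/up/down). MaxYaw/MaxPitch are the assumed
// rotation limits of the eye, in degrees.
static void LookRotationToMorphWeights(
    const FRotator& LocalLook, float MaxYaw, float MaxPitch,
    float& OutLeft, float& OutRight, float& OutUp, float& OutDown)
{
    // Normalize yaw to [-1,1]; the clamp is what stops over-rotation for free.
    const float Yaw = FMath::Clamp(LocalLook.Yaw / MaxYaw, -1.f, 1.f);
    OutRight = FMath::Max(Yaw, 0.f);
    OutLeft  = FMath::Max(-Yaw, 0.f);

    // Same for pitch, split into up/down weights.
    const float Pitch = FMath::Clamp(LocalLook.Pitch / MaxPitch, -1.f, 1.f);
    OutUp   = FMath::Max(Pitch, 0.f);
    OutDown = FMath::Max(-Pitch, 0.f);
}
```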