Useful methods for handling character eye movement

I’ve been working a while on a project to synthesize realistic character eye movement. Still very much a work-in-progress, but I figure I’ll share what I’ve found useful so far. Please jump in with any thoughts or contributions - I’d love this to be a community thing, since it’s something any of us doing characters in VR scenes are going to need.

The first thing we need to do is figure out where the player camera is. We do this by taking the player pawn’s camera location and then applying an offset for the HMD’s positional tracking. For this, I’ve created a blueprint function library and added a pure function named GetPlayerCameraLocation, since I’ll be calling this from several other blueprints.

Here’s the signature:

And its implementation:
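The implementation itself is blueprint, but the transform it performs can be sketched in plain C++ (the struct and function names here are illustrative, not engine API). The HMD’s tracked position is a pawn-local offset, so we rotate it by the pawn’s yaw and add the pawn’s world location. A real implementation would apply the full pawn rotation; yaw alone is enough to show the idea:

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-in for the engine's vector type (illustrative only).
struct Vec3 { double x, y, z; };

// World-space camera location: the HMD's positional-tracking offset is
// pawn-local, so rotate it by the pawn's yaw and add the pawn's location.
Vec3 GetPlayerCameraLocation(const Vec3& pawnLocation,
                             double pawnYawDegrees,
                             const Vec3& hmdLocalOffset) {
    const double yaw = pawnYawDegrees * std::acos(-1.0) / 180.0;
    const double c = std::cos(yaw);
    const double s = std::sin(yaw);
    return { pawnLocation.x + c * hmdLocalOffset.x - s * hmdLocalOffset.y,
             pawnLocation.y + s * hmdLocalOffset.x + c * hmdLocalOffset.y,
             pawnLocation.z + hmdLocalOffset.z };
}
```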

One improvement I’d like to make to this method is to read the IPD and return, in addition to the camera location, the offset locations of each eye. As it stands, the method returns a location between the viewer’s eyes, and when you look closely at a character in the scene who’s looking at this point, it can be apparent that he or she is looking between your eyes rather than into one or the other, as people actually do.
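That improvement would look something like this in plain C++, assuming we can read the IPD and the camera’s right vector (all names here are illustrative): offset the mid-point location by half the IPD along the camera’s right vector.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };
struct EyeLocations { Vec3 left, right; };

// Offset the between-the-eyes camera location by half the IPD along the
// camera's right vector (assumed to be a unit vector) to approximate the
// position of each eye.
EyeLocations GetEyeLocations(const Vec3& camera,
                             const Vec3& cameraRight,
                             double ipd) {
    const double h = ipd * 0.5;
    return { { camera.x - cameraRight.x * h,
               camera.y - cameraRight.y * h,
               camera.z - cameraRight.z * h },
             { camera.x + cameraRight.x * h,
               camera.y + cameraRight.y * h,
               camera.z + cameraRight.z * h } };
}
```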

To test that this is working for our character, we can do a very simple implementation in our Anim Blueprint. She’ll look like a robot doing it, and spin her head entirely around if you walk behind her, but it’s a good first test to make sure the camera location is being found correctly.

  • Create 3 new **LookAt** nodes in your anim blueprint (Skeletal Controls -> LookAt).
  • For the first of these, select your skeleton’s **Head** bone as the Bone to Modify. For the other two, select the skeleton’s left eye and right eye.
  • Leave **LookAtBone** unset.
  • Expose the **Look at Location** pin and the **Alpha** pin.
  • The **Look at Axis** will depend on the way your particular skeleton is constructed. My model looks along the positive Z axis.


  • We can then place an instance of the **GetPlayerCameraLocation** function we previously defined, give it the owner pawn as a world context object, and feed its return value into the three respective nodes, setting their alphas to 1.0 for now while we’re testing the values.
  • (Note that in a production environment, you won’t want to feed the result of a pure method like this into multiple nodes, since it will be re-called for each node, wastefully generating identical results. Instead, set a member variable in your event graph and reuse it where you need it. For our initial testing, though, we want to keep things simple.)
  • (You could also get away with using just a single LookAt node for the head for this test, since the eyes aren’t going to be doing anything different yet. Later on, they will.)

Just wire them in sequence, ideally from a simple pose animation so you can evaluate your head movement on its own.
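For intuition, the rotation a LookAt control solves for is just the yaw and pitch from the bone to the target. A plain C++ sketch, assuming X-forward, Z-up coordinates (your skeleton’s axes may differ, as noted above; names are illustrative):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };
struct Aim { double yawDeg, pitchDeg; };

// Yaw and pitch needed for a bone at `bone` to aim at `target`,
// assuming X-forward, Z-up coordinates.
Aim ComputeLookAt(const Vec3& bone, const Vec3& target) {
    const double rad2deg = 180.0 / std::acos(-1.0);
    const double dx = target.x - bone.x;
    const double dy = target.y - bone.y;
    const double dz = target.z - bone.z;
    const double flatDist = std::sqrt(dx * dx + dy * dy);
    return { std::atan2(dy, dx) * rad2deg,
             std::atan2(dz, flatDist) * rad2deg };
}
```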

The next thing we’ll want to do once we’ve verified that our character is looking in the right direction is to apply an Alpha value to the LookAt nodes so she doesn’t try to look in unnatural directions.

For this, I’ve created a fairly basic library method called GetLookAtAlpha.

It’s a pure method, and accepts as its arguments the Anim Blueprint’s Pawn Owner, the location we want the character to look at, a “Threshold Angle,” in degrees, and a “Max Angle” in degrees. It returns a float clamped between 0.0 and 1.0.

The method returns an alpha value of 1.0 when the player camera is within ThresholdAngle degrees of the character’s forward vector, and returns 0.0 if the camera is beyond MaxAngle degrees from forward. Between ThresholdAngle and MaxAngle, it simply interpolates between 1.0 and 0.0 as the player moves further from front.
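The falloff itself is simple. Here’s a plain C++ sketch of the interpolation just described, assuming the angle between the character’s forward vector and the direction to the camera has already been computed (names illustrative):

```cpp
#include <cassert>

// Alpha for a LookAt node: full strength inside the threshold angle,
// zero beyond the max angle, linear falloff in between.
double GetLookAtAlpha(double angleFromForwardDeg,
                      double thresholdAngleDeg,
                      double maxAngleDeg) {
    if (angleFromForwardDeg <= thresholdAngleDeg) return 1.0;
    if (angleFromForwardDeg >= maxAngleDeg) return 0.0;
    return 1.0 - (angleFromForwardDeg - thresholdAngleDeg)
               / (maxAngleDeg - thresholdAngleDeg);
}
```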

There’s definitely a lot more room for finesse here, but as a basic solution, it produces pretty good starting results. You can add a bit of detail to your character early on by using higher threshold values for the eye lookat alphas than you do for the head. This will cause her to continue tracking the player with her eyes further than she will with her head, creating a nicely lifelike effect at the alpha boundaries.



We can test this as well by wiring it directly into our Anim Blueprint, and again, once we’ve verified that this is working, we’ll want to optimize by getting these pure methods out of the Anim Blueprint and into the Event Graph’s Update Animation, where we can cache their values for reuse.

For a basic test, though, this works, and we can test the effects of using different threshold values for the eyes and the head:

Once we move these methods over to the Event Graph, our Anim Graph will look something like this:

For this next bit, let’s get our character blinking at a lifelike cadence. This assumes that your character mesh has a morph target available to it which closes its eyes.

We’re going to create three methods in our Animation Blueprint to support this:

  • Start Blink
  • End Blink
  • Should Blink

And we need one member variable:

  • LastBlinkTime (float)

**StartBlink** is a really simple affair. It sets the EyesClosed morph target to 1.0, sets a timer to call EndBlink after the blink duration has expired, and sets the LastBlinkTime member variable to the current game time.
Blinks happen fast enough that I haven’t found a need to interpolate the blink morph target values. Simply flipping them from 0 to 1 and back again seems to work pretty well.



I store my morph target names in member variables:
My mesh’s Eyes Closed morph target is named ‘head_CTRLEyesClosed’

In “Realistic Avatar Eye and Head Animation Using a Neurobiological Model of Visual Attention” (University of Southern California), L. Itti, N. Dhavale and F. Pighin suggest a blink duration of 150 milliseconds (Section 4.4), so I’m passing in a float value of 0.15 for the Blink Duration.

I then begin the blink cycle by calling StartBlink after a short delay from the event graph’s Initialize Animation event.

**EndBlink** is even simpler: it just sets the morph target value back to 0.0.
It wouldn’t be hard to interpolate the morph values in these two methods, but so far it hasn’t seemed necessary.

Now that our StartBlink has been called once explicitly, and our LastBlinkTime has been set, we can implement a **ShouldBlink** method to check on each update whether it’s time to blink again.

ShouldBlink is defined as a pure method and returns a boolean which we use to trigger the blink.

Its implementation uses a simplification of the heuristic defined in Itti, Dhavale and Pighin’s paper (Section 4.4):
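I won’t reproduce the paper’s exact formulation here, but one plausible simplification looks like this in plain C++ (the four-second interval is my assumption for illustration, not a figure from the paper; names are illustrative):

```cpp
#include <cassert>

// True when enough game time has elapsed since the last blink.
// A fancier version would randomize the interval around a mean so the
// cadence doesn't look mechanical.
bool ShouldBlink(double currentGameTime,
                 double lastBlinkTime,
                 double blinkIntervalSeconds = 4.0) {
    return (currentGameTime - lastBlinkTime) >= blinkIntervalSeconds;
}
```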


There’s plenty of room to add more to this. The current heuristic, crude by the authors’ own admission, doesn’t take into account normal vs. attentive states or recent eye movement actions, but it’s a good foundation, and the encapsulated ShouldBlink method gives us an easy place to add that additional detail when we want it.

Thanks for sharing that, it looks to be quite useful!

Yeah, awesome share.

For some reason it’s really hard to find again via googling, but it’s a real find.

Looking forward to implementing.

@KevODoom I’m trying to get this working but have a few problems. First, there’s a crash when doing it via a function library, so instead I copied it into the animation blueprint itself. It’s doing something, but no matter what I try I can’t get the angles correct.

I got it really close using this way instead, but again the angles are slightly off, which I think is caused by the world location being the character location instead of the bone itself. I’m not sure what to do at this point. I’ve tried every combination of XYZ and tweaks I can think of.

What is the blueprint node that has two vector inputs, one float output, and a little gray square in the middle with no title?

Oh wow - this is a blast from the past revisiting this post :slight_smile:
The node you’re looking at is a vector dot product node. It returns the cosine of the angle between two unit vectors, so you’ll want to normalize your inputs first.
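Here’s a quick plain-C++ sketch of recovering the angle from the dot product (names illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

double Dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Angle between two vectors in degrees. The dot product equals the
// cosine of the angle only for unit vectors, so divide by the lengths.
double AngleBetweenDeg(const Vec3& a, const Vec3& b) {
    const double cosTheta =
        Dot(a, b) / (std::sqrt(Dot(a, a)) * std::sqrt(Dot(b, b)));
    // Clamp to guard against rounding pushing us outside acos's domain.
    const double clamped = std::max(-1.0, std::min(1.0, cosTheta));
    return std::acos(clamped) * 180.0 / std::acos(-1.0);
}
```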

Now you’re making me want to update this project :slight_smile:

Please do!