Eye Rigging/Animation

Hi,
I would like to ask what a good way to rig and animate eyes for game characters is. I can’t seem to find any up-to-date tutorials explaining how eyes in the latest games work. I can see some mobile examples where eyes are drawn as a texture, or basic setups for games that don’t have full facial expressions. I would like to make a character with an in-engine eye rig, including pupil dilation. I understand that this is a broad subject, so I will ask a few sub-questions:

  1. Should eyes be a separate mesh? Should that mesh be linked within the 3D application or via the socket function in UE4?
  2. How do people animate the pupil? Is it a texture animation, or do vertices move to stretch the pupil?
  3. How does blinking work? Should the eyelids have their own bones? Last time I animated birds in UE4 there were no separate vertex-based animations; can I make a few morph targets now (completely open/normal/halfway/closed, plus others for facial expressions like suspicious/angry/happy, etc.)?

I am including my simple bear character to give you an idea of what I am working with. If you spot mistakes that would make facial animation difficult, please let me know. Thank you for your information and help.

Regarding mobile games, I saw that most of the time eyes are textures, so the blink is an animation sequence (eye open, eye mid closed, eye closed, eye mid open, eye open again) rather than being controlled by joints or a morph target.

  1. Doesn’t really matter, because at the end of the day it should be skinned to the skeletal hierarchy in order to rotate and move with the head.
  2. Texture animation will do the trick and it’s quicker to do… you can animate the pupil with morphs if you want, but because it’s rather small I suggest using textures.
  3. Tech is always evolving and the specifications for mobile games right now are different, so I will say that you can choose between using joints for eye movement or morph targets (more expensive than joint-driven animation). I use morph targets everywhere (especially for facial animation) and they work pretty well and performance is always good, so it’s really up to you which way you want to animate facial expressions/blinks and so on.
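To illustrate the texture-sequence blink described above, here is a minimal sketch: given the elapsed time into a blink, it picks which frame of the sequence to show. The frame names and timing are made up for illustration.

```python
# Sketch of a texture-sequence blink: map elapsed time to a blink frame.
# Frame names and the 0.25 s blink duration are hypothetical values.

BLINK_FRAMES = ["open", "mid_closed", "closed", "mid_open", "open"]
BLINK_DURATION = 0.25  # seconds for a full blink

def blink_frame(t: float) -> str:
    """Return the blink frame to display at time t (seconds) into the blink."""
    if t < 0 or t >= BLINK_DURATION:
        return "open"  # outside the blink, the eye stays open
    step = BLINK_DURATION / len(BLINK_FRAMES)
    return BLINK_FRAMES[int(t / step)]

print(blink_frame(0.0))   # → "open"
print(blink_frame(0.12))  # → "closed" (middle of the blink)
print(blink_frame(0.3))   # → "open" (blink finished)
```

In an engine this lookup would drive a flipbook or a texture swap in the eye material rather than returning a string.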

Thank you.

  1. I guess I am going the texture-based animation route for the pupil. Follow-up question: how should I prepare the UVs for the character model then? Should I separate the eyes from the rest of the character for texturing purposes? I assume it would be inefficient to have only a pupil animation on the 2k or 4k character UV. Perhaps it is more efficient to have a single morph target for pupil dilation rather than playing around with animated textures. I can easily loop the edge where the pupil starts. I might be overthinking this, but it would be nice to get it right the first time.

  2. I believe morph targets are for me then: both faster and easier. I can just force vertices into position in ZBrush rather than playing around with skinning eyelids.

  1. You can keep the eyes with the body and lay out the UVs normally, but you should prepare (in Photoshop or whatever) a mask for the eye area, so that in the material editor you can assign the eye blink texture using that mask.
    Hint: For both the pupil dilation and the eye blink I would use a Lerp node, where the A input is the eye open and B is the eye closed. By using a scalar parameter you can then control the blink by exposing the scalar parameter value in Persona or in a Blueprint.
    Here is a sample of what I’m talking about

  2. Yep, 99% of the facial rigs I do are based on morph targets so go for it :slight_smile:
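The Lerp-node hint above boils down to a simple linear interpolation between the two textures, driven by the exposed scalar parameter. A sketch of the math (the grayscale sample values are hypothetical):

```python
def lerp(a: float, b: float, alpha: float) -> float:
    """Linear interpolation, the same blend UE4's Lerp material node performs."""
    return a * (1.0 - alpha) + b * alpha

# Hypothetical grayscale values of one texel in the eye-open vs eye-closed textures
open_value, closed_value = 1.0, 0.1

# alpha is the scalar parameter exposed to Persona or a Blueprint:
print(lerp(open_value, closed_value, 0.0))  # → 1.0 (fully open, pure A input)
print(lerp(open_value, closed_value, 1.0))  # → 0.1 (fully closed, pure B input)
print(lerp(open_value, closed_value, 0.5))  # halfway blend of the two textures
```

Animating the scalar parameter from 0 to 1 and back over a fraction of a second gives the blink; a second parameter driving a pupil mask the same way handles dilation.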

I generally do morphs for pupil dilation. The whole thing can even be scripted and made procedurally, though it doesn’t take long to make such morphs by hand. Then you can either animate them along with your facial animations or have them connected to a brightness sensor behind the eyelid (not sure if that works, just an idea).

For the eyelids I’d generally use joints. Remember that morphs do a linear blend between target poses, so you might get weird intersections along the way. Joints can just be rotated, and you get a spherical motion for free. Yes, you have to deal with skinning… but you had to do that at some point anyway, and you will have to do it again in the future, so it’s better to get used to it than to try to avoid it. Although, depending on budget and the required character and rig complexity, it might actually be easiest to just do full-face morphs for the basic expressions and call it a day; I don’t know much about your project, so it’s up to you to decide.
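The intersection problem mentioned above is easy to show numerically: a morph blends a vertex along a straight line, so an eyelid vertex cuts through the eyeball halfway through the blink, while a joint rotation keeps it on the sphere. A toy 2D example (coordinates are made up; the eyeball is a unit circle):

```python
import math

# An eyelid vertex on a unit-circle "eyeball", blinking through 90 degrees.
open_pos = (0.0, 1.0)    # eyelid fully open (top of the eye)
closed_pos = (1.0, 0.0)  # eyelid fully closed (front of the eye)

def morph_blend(a, b, w):
    """Morph target: straight linear blend between the two poses."""
    return tuple(a[i] * (1 - w) + b[i] * w for i in range(2))

def joint_rotate(angle):
    """Joint: rotate the open position around the eye centre."""
    return (math.sin(angle), math.cos(angle))

# Halfway through the blink:
mx, my = morph_blend(open_pos, closed_pos, 0.5)
jx, jy = joint_rotate(math.pi / 4)

print(math.hypot(mx, my))  # ~0.707: the morphed vertex has sunk inside the eyeball
print(math.hypot(jx, jy))  # 1.0: the rotated vertex stays on the sphere surface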

I have made a quick test to see if I can get away with a single morph map for blinking animations: https://youtu.be/35iJr3bLt2g (I must have smoothed one model, so there is a little vibration across the entire model).

I was thinking of making something like a progressive morph in 3ds Max (it uses in-between morph targets to create a better, more fluid animation, as opposed to a linear blend). I don’t even know if UE4 supports that; last time I tried (4.6), I couldn’t set it up the way it worked in 3ds Max.
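For reference, the core of an in-between/progressive morph setup is just a weighting scheme: a single driver value cross-fades between successive targets instead of blending base-to-final in one straight line. A minimal sketch (the target positions along the driver are assumptions, not anything 3ds Max or UE4 prescribes):

```python
# In-between morphs: one driver value fades between several intermediate
# targets, so the motion follows the chain of poses instead of one straight
# blend. Driver positions of the targets below are hypothetical.

TARGETS = [0.0, 0.5, 1.0]  # driver positions of the base / halfway / full targets

def inbetween_weights(driver: float) -> list:
    """Weight of each target so adjacent targets cross-fade linearly."""
    weights = [0.0] * len(TARGETS)
    for i in range(len(TARGETS) - 1):
        lo, hi = TARGETS[i], TARGETS[i + 1]
        if lo <= driver <= hi:
            t = (driver - lo) / (hi - lo)
            weights[i], weights[i + 1] = 1 - t, t
            break
    return weights

print(inbetween_weights(0.25))  # → [0.5, 0.5, 0.0]: between base and halfway target
print(inbetween_weights(0.75))  # → [0.0, 0.5, 0.5]: between halfway and full target
```

Even if the engine only exposes plain morph target weights, driving several targets with a curve like this reproduces the progressive behaviour.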

But in the end, I will probably end up with a single morph map (for each eye) for blinking. Let me know if I could significantly improve the quality of that animation using different methods. I will try setting up the pupil animation with that Lerp node/curve; if that fails, I’ll fall back to a morph target for the pupil.

I have another question about the lower jaw. Obviously I need it for a variety of animations: eating, yawning, speaking, roaring, etc. Would one bone pivoting at the right position be enough for the jaw animations? Should I do a morph target or targets for the jaw instead?

A morph for each blink will be enough, especially since you’re not going fully realistic, so what is shown in the video is good enough.

Regarding the jaw, I usually create one joint, quickly do the skinning and then extract the full open/jaw left/jaw right morphs from it. But for everything else (lip sync, lip movement, phonemes and so on) you need a proper joint setup or morph targets… with one single joint that only opens/closes the mouth you can’t do everything :wink:
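The "extract a morph from the joint" step above is just baking the skinned pose back into per-vertex deltas from the bind pose. A toy sketch with made-up 2D vertex data:

```python
# Sketch of extracting a morph from a joint pose: rotate the skinned jaw
# joint, record the deformed vertex positions, and store per-vertex deltas
# from the bind pose. The vertex coordinates below are made up.

bind_pose = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]    # rest positions along the jaw
jaw_open  = [(0.0, 0.0), (1.0, -0.3), (2.0, -0.8)]  # skinned result, jaw rotated down

def extract_morph(bind, posed):
    """Per-vertex deltas that reproduce the posed shape as a morph target."""
    return [(p[0] - b[0], p[1] - b[1]) for b, p in zip(bind, posed)]

def apply_morph(bind, deltas, weight):
    """Standard morph application: bind + weight * delta."""
    return [(b[0] + weight * d[0], b[1] + weight * d[1])
            for b, d in zip(bind, deltas)]

deltas = extract_morph(bind_pose, jaw_open)
print(apply_morph(bind_pose, deltas, 1.0))  # reproduces the jaw-open pose exactly
print(apply_morph(bind_pose, deltas, 0.5))  # jaw half open
```

In practice the DCC does this for you (e.g. duplicating the posed mesh and loading it as a morph target), but the underlying data is exactly these deltas.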

Could you tell me how to set up the jaw in that case? I have looked online and most mouth rigs seem to be one joint for open/close and around 6-8 control points for the lips/tension.

I guess I need to do something similar in that case: one joint for opening/closing the jaw and 6-8 points for raising and stretching the lips. Is this kind of mouth rig efficient for games?

I am attaching a sketch of how I imagine those 6 control points would be set up, with some red weights. 09016eda7448dfbe886f4bda28106bf63b6f2e26.jpeg

Yep, one joint for the jaw and additional controls for the lips… based on your picture I would also add two in the center (upper and lower lip), and don’t forget you need to set up the lip roll.

From what I’ve seen, nowadays most facial rigs are joint-based with additional correctives on top, just to enhance/adjust some poses, but it also depends on what your character needs to do.
Games like The Last of Us bet 100% on the emotional response from character expressions, so I strongly suggest checking whether a fully featured rig on the bear dude is going to be used quite a bit or just for a couple of cutscenes.

I definitely need this character to be capable of blinking and making some expressions: angry/sad/bored/happy, etc. I don’t have the power, skills or time to get quality similar to Kung Fu Panda, but if I land between Mario and Ratchet and Clank that would be good enough. It should be a playable character, and the platform is PC.

There are some lip loops between the lips and the teeth; I will add another if the current ones aren’t enough. With MODO it is not a problem to adjust and modify topology with the UVs/rigging in place.

If it’s a one-off character I would rig it up as a single mesh. As a separate mesh you could collect ready-to-use eyes that you can snap to a socket or bone.

My preference is to do all animations within the same workflow channel, so bones/joints.

In my experience, I have found it easier in the long run to stay within the same workflow path rather than mixing different approaches, as it usually comes down to how you plan to author your animations, not what would be the best choice by default.

Since I use MotionBuilder, I do everything using bones and facial clusters, which for me makes it easier to drive the necessary shaping without adding shapes as targets; instead they are applied additively, using blend-per-bone, so blinks can be done as a clip or as a procedural event.

as well there is this.

On my project I decided to use bones weighted only to the eyes to control them. I wasn’t sure if it was a good idea at first, but so far so good; it’s actually working really well. In the end I think it’s up to you to create your own workflow because, as far as I know, there is no single way people do it. Good luck :wink: