Facial tracking

Hello,

I’m a newbie and I want to figure out how I can implement facial tracking with Unreal Engine.
Are there any plugins for UE like faceshift or faceplus?
I have my kinect (first version for xbox 360) and I want to track the face in real-time and control a 3d model of a head in UE. Is it possible?
I don’t want to record facial animation and then apply it to a 3d model. I want to apply tracking data from my kinect to a 3d model in real-time.

Help me please.

https://unreal.facefx.com/

This is not facial tracking though, it’s a facial animation plug-in.

We’re at least 2 years away from that being a reality.
Realtime facial tracking currently lacks proper cameras (no 60 fps cameras to be seen around, so goodbye proper lipsync, unless you use a GoPro with a proper USB/HDMI converter) and the technology, even if Faceware is about to release their tech for everyone to use.

Simply put, you might want to record an animation and then import it into UE4, because direct input (without any proper tweaking/smoothing and so on) will look like ****.

Also, facial movements are way too subtle and a 2D image won’t capture all the expressions… when 4D scans can be processed in realtime, then we can talk about something incredible :wink:

Thank you for the link. But the demo of this plugin on youtube is quite useless… Did anyone understand what was going on in that video?

It is very strange that today there is only one NOT FREE BETA plugin for UE4.
There are at least 3 plugins for Unity3d, and they are all free and much more advanced.

If you want to achieve good quality, use depth cameras.
I said that I don’t want to record animation. I just want to transmit tracking data from my kinect to UE and apply it to a 3d model in real-time. Direct input looks very nice in the case of the Unity3d plugins. I recommend you try the Microsoft Kinect SDK.

It seems that there is no reason to move from Unity to UE4 just to get better graphics.

I understood what you wrote, and I have had experience with both Faceshift and other facial tracking software…
Currently, if you’re familiar with C++, I guess that since Faceshift is capable of streaming data via a network you could stream that data to UE4 and connect the various blend shapes to a rig inside UE4, but so far no one has done it.
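To make the idea concrete, here is a minimal sketch (in C#, to match the other code in this thread) of the receiving end of such a stream. The packet layout (a flat list of floats) and the port are made up for illustration; the real Faceshift network protocol is a binary stream documented in its SDK, so treat this only as the general shape of “receive coefficients over the network, then drive a rig with them”.


using System;
using System.Net;
using System.Net.Sockets;
using UnityEngine;

// Hypothetical receiver: listens on a UDP port and reads a flat array of
// float blend-shape coefficients. This is NOT the real Faceshift protocol;
// it only illustrates the receive-and-apply idea.
public class BlendShapeStreamReceiver : MonoBehaviour
{
    public int Port = 33433;       // assumed port, configure to match the sender
    public float[] Coefficients;   // latest received blend-shape weights

    private UdpClient _client;

    void Start()
    {
        _client = new UdpClient(Port);
    }

    void Update()
    {
        // Drain every packet that arrived since the last frame (non-blocking,
        // because Receive is only called when data is available).
        while (_client.Available > 0)
        {
            IPEndPoint sender = new IPEndPoint(IPAddress.Any, 0);
            byte[] packet = _client.Receive(ref sender);

            int count = packet.Length / sizeof(float);
            if (Coefficients == null || Coefficients.Length != count)
                Coefficients = new float[count];

            for (int i = 0; i < count; i++)
                Coefficients[i] = BitConverter.ToSingle(packet, i * sizeof(float));
        }
    }

    void OnDestroy()
    {
        if (_client != null)
            _client.Close();
    }
}

Another component would then read Coefficients every frame and write the values onto the rig’s blend shapes.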

Well, you asked for facial tracking; if you need to move from Unity to UE4 just for that, I suggest giving it a couple of months for the plugins to become available and checking the results yourself.

Thank you for your reply. I hope that some plugins for UE4 are coming soon, but no one knows how long it might take. I’m a newbie in programming and I have no idea how to stream data from faceshift to UE4 and connect the various blend shapes :D
Sorry for the OT, but which plugins have you used?

Faceshift for Maya, Faceware ( I currently own a license ) and 2 others that are currently under NDA.
Well, if you need facial animation in UE4 let me know, I’m quite familiar with the pipeline :wink:

Wow, I didn’t know about FaceWare. It looks great on youtube :eek: Much more accurate than FaceShift with kinect! Thank you for the information.
Currently I’m trying to calibrate the default model in FaceShift, but the live animation is still very noisy :( I don’t know what the heck is going on there. Every filter I try to apply is useless. It’s very strange that there is no smoothing in the postprocessing. It seems that I bought my kinect in vain!
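For what it’s worth, the kind of post-filter I mean is nothing fancy; a sketch of a simple exponential moving average over the incoming coefficients (assuming you can intercept the per-frame values before they reach the model) would look like this:


// Sketch of a simple post-filter: an exponential moving average applied to
// each coefficient before it reaches the rig. Alpha near 0 is very smooth
// but laggy; near 1 it is responsive but noisy. Assumes you can intercept
// the per-frame coefficient array coming out of the tracker.
public class CoefficientSmoother
{
    public float Alpha = 0.3f;   // blend factor per update, tune by eye

    private float[] _state;

    public float[] Smooth(float[] raw)
    {
        if (_state == null)
            _state = (float[])raw.Clone();

        for (int i = 0; i < raw.Length; i++)
            _state[i] = Alpha * raw[i] + (1f - Alpha) * _state[i];

        return _state;
    }
}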

I have also tested an unsupported C# code example that only works with kinect and Unity. https://www.assetstore.unity3d.com/en/#!/content/4692
The animation is very smooth and it works fine. But there is no description of how to replace the default model with another one. No comments in the code. No support. #$!% :mad:
Maybe it will interest you. I have the source files.

I would gladly make live animation in UE4 with my kinect and faceshift and apply some smoothing to it, but I think it is a big undertaking.

I guess that, using code or Playmaker, you can get the values of the blend shapes of the default model and apply those to your character… it would be a kind of 1:1 link between them, and I guess that is not that difficult to do.
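For example, a minimal sketch of that 1:1 link in Unity could look like the following, assuming both heads expose blend shapes with matching indices (which won’t always be the case; if the tracked head is driven by bones or poses instead, you would copy those values rather than blend-shape weights):


using UnityEngine;

// Rough sketch of the 1:1 link: every frame, copy the blend-shape weights of
// the tracked default head onto your own character's head. "Source" and
// "Target" are placeholder names for whichever SkinnedMeshRenderers hold the
// two faces; matching blend-shape indices are assumed.
public class BlendShapeMirror : MonoBehaviour
{
    public SkinnedMeshRenderer Source;   // the plugin's default head
    public SkinnedMeshRenderer Target;   // your custom character's head

    void LateUpdate()
    {
        int count = Mathf.Min(Source.sharedMesh.blendShapeCount,
                              Target.sharedMesh.blendShapeCount);
        for (int i = 0; i < count; i++)
        {
            Target.SetBlendShapeWeight(i, Source.GetBlendShapeWeight(i));
        }
    }
}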

I think the kinect will soon be abandoned by most developers, since facial tracking algorithms are now widely used and the technique is more or less being shared, so it’ll be a couple of months (or probably a year) before some good plugins are released… but the plugin is the last thing I’m worried about. The main concern is the hardware itself, which is going to be bloody expensive… and for realtime facial tracking? Currently it’s useless. ARK Survival did a kind of lipsync system based on the audio, if I remember correctly, but overall, unless you’re creating a specific game/application focused just on the face, realtime facial tracking is useless… cool, but completely useless and not as precise/smooth as you want.
Again, 4D is the way to go in the future, but you’ll need a monster of a machine and the tech is not yet optimized to run in realtime (far from it, actually).
All the other plugins I saw around are quite average and not really useful.

On the one hand you say that the kinect will soon be abandoned by most developers; on the other hand you say that the main concern is the hardware itself, which is going to be bloody expensive.
Everyone wants to buy a Ferrari for $300 and share it with friends.
Why should the kinect be abandoned if its price is acceptable and it gives much more precise data, like a depth map, than a usual webcam? The only exception is the frame rate, where a GoPro or other high-fps cameras win, and even those still need extra work.

However, the specs are very impressive and the frame rate is quite suitable for such a specific task.

Kinect xbox 360
Field of View: 57.5˚ horizontal by 43.5˚ vertical
Resolvable Depth: 0.8 m -> 4.0 m
Colour Stream: 640 x 480 x 24 bpp 4:3 RGB @ 30 fps, or 640 x 480 x 16 bpp 4:3 YUV @ 15 fps
Depth Stream: 320 x 240 16 bpp, 13-bit depth
Infrared Stream: No IR stream
Registration: Color <-> depth
Audio Capture: 4-mic array returning 48 kHz audio
Data Path: USB 2.0
Latency: ~90 ms with processing

Kinect xbox one
Field of View: 70˚ horizontal by 60˚ vertical
Resolvable Depth: 0.8 m -> 4.0 m
Colour Stream: 1920 x 1080 x 16 bpp 16:9 YUY2 @ 30 fps
Depth Stream: 512 x 424 x 16 bpp, 13-bit depth
Infrared Stream: 512 x 424, 11-bit dynamic range
Registration: Colour <-> depth and active IR
Audio Capture: 4-mic array returning 48 kHz audio
Data Path: USB 3.0
Latency: ~60 ms with processing !!!

Now I’m trying to understand how to get data from the kinect SDK and apply it to a 3d model using the existing code examples, but it is quite difficult, and I think Playmaker would not be very helpful in this case.

I was talking about the 4D hardware, not the kinect.
Anyway, Intel has released a new depth camera, which you can find here.

I mentioned Playmaker because you said that you would want to use the plugin you linked but you couldn’t replace the default character. If you can “link” them using Playmaker or good old coding, you are able to transfer the movement 1:1 from one actor (the default character from the plugin) to the other one (your custom character), and you can have realtime facial animation inside Unity. It’s a trick, not a solution; if you want a custom solution you have to dig into the SDK.

PS: both Kinect and Kinect2 run at 30 fps, which is terrible for realtime facial animation. A good solution is a camera which can output 60-120 fps, because it captures all the small movements of the face…
Yes, the kinect has depth maps, but it’s still at 30 fps, and that’s the main problem.

Ok, the Intel RealSense looks like a good option for facial animation. Maybe I’ll buy it when it goes on sale. I do not need extremely precise facial animation at 120 fps; the kinect is fine for me.

In the case of the unsupported source files that I found, the default .fbx model contains 8 pre-rigged facial expressions at once.

http://s3.postimg.org/kq9p10eyb/default_model_2.jpg

It is not a big deal to make whatever new 3d model I want and put several expressions into one .fbx file.
The main problem for me is to add new facial expressions in the code and to control other things like head orientation, eye tracking and so on.

AnimUnits.cs


using System;
using UnityEngine;



public struct AnimationUnits
{
    public const int MaxNbAnimUnits = 6;

    public Vector3 Au012;
    public Vector3 Au345;

    public AnimationUnits(float au0, float au1, float au2, float au3, float au4, float au5)
    {
        Au012 = new Vector3(au0, au1, au2);
        Au345 = new Vector3(au3, au4, au5);
    }

    public float LipRaiser
    {
        get { return Au012[0]; }
        set { Au012[0] = value; }
    }

    public float JawLowerer
    {
        get { return Au012[1]; }
        set { Au012[1] = value; }
    }

    public float LipStretcher
    {
        get { return Au012[2]; }
        set { Au012[2] = value; }
    }

    public float BrowLowerer
    {
        get { return Au345[0]; }
        set { Au345[0] = value; }
    }

    public float LipCornerDepressor
    {
        get { return Au345[1]; }
        set { Au345[1] = value; }
    }

    public float OuterBrowRaiser
    {
        get { return Au345[2]; }
        set { Au345[2] = value; }
    }

    public float this[int i]
    {
        get
        {
            if (i < 0 || i >= MaxNbAnimUnits)
                throw new ArgumentOutOfRangeException("There are only " + MaxNbAnimUnits + " animation units but you requested the nb: " + i);
            if (i < 3)
            {
                return Au012[i];
            }
            return Au345[i - 3];
        }

        set
        {
            if (i < 0 || i >= MaxNbAnimUnits)
                throw new ArgumentOutOfRangeException("There are only " + MaxNbAnimUnits + " animation units but you requested the nb: " + i);
            if (i < 3)
            {
                Au012[i] = value;
                return;
            }
            Au345[i - 3] = value;
        }
    }

    public static AnimationUnits operator +(AnimationUnits first, AnimationUnits second)
    {
        var animUnits = new AnimationUnits();
        animUnits.Au012 = first.Au012 + second.Au012;
        animUnits.Au345 = first.Au345 + second.Au345;
        return animUnits;
    }
}

FaceTrackingExample.cs


using UnityEngine;


public class FaceTrackingExample : MonoBehaviour
{
    public float FaceTrackingTimeout = 0.1f;
    public float TimeToReturnToDefault = 0.5f;
    public float SmoothTime = 0.1f;
    public KinectBinder Kinect;
    public Transform Model;
    public PoseAnimator ModelAnimator;

    public GameObject Flames;

    private Vector3 _position;
    private Vector3 _rotation;
    private Vector3 _smoothedRotation;
    private Vector3 _currentPosVelocity;
    private Vector3 _currentRotVelocity;

    private AnimationUnits _animUnits, _targetAnimUnits;
    private AnimationUnits _currentAuVelocity;

    private bool _isInitialized;
    private bool _hasNewData;
    private float _waitTimer;
    private float _timeOfLastFrame;
    private float _gravityStartTime;
    private AnimationUnits _startGravityAu;
    private Vector3 _startGravityPos;
    private Vector3 _startGravityRot;

    private AnimationUnits _accAu;
    private float[] _morphCoefs = new float[7];

    private Vector3 _userInitialPosition;

    public enum TrackingMode
    {
        UserFace,
        Gravity,
        ComputerControlled,
    }

    public TrackingMode CurrentMode { get; set; }


    void Start ()
    {
        _position = _userInitialPosition;
        _rotation = Model.transform.rotation.eulerAngles;
        Kinect.FaceTrackingDataReceived += ProcessFaceTrackingData;
    }
    

    void ProcessFaceTrackingData(float au0, float au1, float au2, float au3, float au4, float au5, float posX, float posY, float posZ, float rotX, float rotY, float rotZ)
    {
        _hasNewData = true;
        var animUnits = new AnimationUnits(au0, au1, au2, au3, au4, au5);
        _position = new Vector3(posX, posY, posZ);
        _rotation = new Vector3(rotX, rotY, rotZ);
        
        // We amplify the position to exaggerate the head movements.
        _position *= 10;
        SetCurrentAUs(animUnits);
    }


    private void SetCurrentAUs(AnimationUnits animUnits)
    {
        const float weight = 0.8f;
        for (int i = 0; i < 6; i++)
        {
            _accAu[i] = animUnits[i] * weight + _accAu[i] * (1 - weight);
        }

        animUnits = _accAu;

        _targetAnimUnits.LipRaiser = MapLipRaiserValue(animUnits.LipRaiser);
        _targetAnimUnits.JawLowerer = MapJawLowererValue(animUnits.JawLowerer);
        _targetAnimUnits.LipStretcher = MapLipStretcherValue(animUnits.LipStretcher);
        _targetAnimUnits.BrowLowerer = MapBrowLowererValue(animUnits);
        _targetAnimUnits.LipCornerDepressor = MapLipCornerDepressorValue(animUnits.LipCornerDepressor);
        _targetAnimUnits.OuterBrowRaiser = MapOuterBrowRaiserValue(animUnits.OuterBrowRaiser);
    }


    #region Au Range Calibration
    /**
     * Notes on the Animation Units remapping:
     * In this part we simply take the raw data from the kinect and filter/re-map it.
     * In some cases we amplify it, in others we use a bit of logic to manipulate the data.
     * 
     * The magic numbers only reflect what we found worked well for the experience we wanted to setup.
     */
    // AU0
    private float MapLipRaiserValue(float coef)
    {
        return coef;
    }

    // AU1
    private float MapJawLowererValue(float coef)
    {
        return coef;
    }

    // AU2
    private float MapLipStretcherValue(float coef)
    {
        return coef;
    }

    // AU3
    private float MapBrowLowererValue(AnimationUnits animUnits)
    {
        if (animUnits.OuterBrowRaiser > 0f)
            return Mathf.Clamp(animUnits.BrowLowerer - animUnits.OuterBrowRaiser, -1f, 1.7f);

        return Mathf.Clamp(animUnits.BrowLowerer - 3 * animUnits.OuterBrowRaiser, -1f, 1.7f);
    }

    // AU4
    private float MapLipCornerDepressorValue(float coef)
    {
        return 2 * coef;
    }

    // AU5
    private float MapOuterBrowRaiserValue(float coef)
    {
        return Mathf.Clamp(coef * 2, -1f, 1f);
    }
    #endregion


    void Update()
    {
        if (_hasNewData)
        {
            _hasNewData = false;
            _timeOfLastFrame = Time.time;
            ProcessFaceData();
        }
        else
        {
            float timeSinceLastFrame = Time.time - _timeOfLastFrame;
            if (timeSinceLastFrame > TimeToReturnToDefault + FaceTrackingTimeout)
            {
                ManipulateMaskAutomatically();
            }
            else if (timeSinceLastFrame > FaceTrackingTimeout)
            {
                ReturnMaskToNeutralState();
            }
        }

        UpdateTransform();
        UpdateAUs();

        CheckForSpecialPoses();
    }


    private void ProcessFaceData()
    {
        if (!_isInitialized)
        {
            _isInitialized = true;
            InitializeUserData();
        }

        CurrentMode = TrackingMode.UserFace;
    }


    private void InitializeUserData()
    {
        _userInitialPosition = _position;
    }


    // Perform random faces automatically to fill the gaps whenever we do not have any input data from kinect.
    private void ManipulateMaskAutomatically()
    {
        CurrentMode = TrackingMode.ComputerControlled;
        _waitTimer -= Time.deltaTime;
        if (_waitTimer > 0)
            return;

        _waitTimer = 3f;

        _rotation = new Vector3(Random.Range(-10f, 10f), Random.Range(-10f, 10f), Random.Range(-10f, 10f));
        _targetAnimUnits = new AnimationUnits(Random.Range(-1f, 1f), Random.Range(-1f, 1f), Random.Range(-1f, 1f),
                                          Random.Range(-1f, 1f), Random.Range(-1f, 1f), Random.Range(-1f, 1f));
    }


    private void ReturnMaskToNeutralState()
    {
        if (CurrentMode != TrackingMode.Gravity)
        {
            InitializeGravity();
            CurrentMode = TrackingMode.Gravity;
        }

        ApplyGravityToParams();
    }


    // By gravity we mean the force that will pull the mask back to its neutral state.
    private void InitializeGravity()
    {
        _isInitialized = false;
        _gravityStartTime = Time.time;
        _startGravityAu = _animUnits;
        _startGravityPos = _position;
        _startGravityRot = _rotation;
    }


    private void ApplyGravityToParams()
    {
        float time = Mathf.Clamp01((Time.time - _gravityStartTime) / TimeToReturnToDefault);
        _position = Vector3.Lerp(_startGravityPos, _userInitialPosition, time);
        _rotation = Vector3.Lerp(_startGravityRot, new Vector3(0, 0, 0), time);

        var animUnits = new AnimationUnits();
        animUnits.Au012 = Vector3.Lerp(_startGravityAu.Au012, Vector3.zero, time);
        animUnits.Au345 = Vector3.Lerp(_startGravityAu.Au345, Vector3.zero, time);
        SetCurrentAUs(animUnits);
    }


    private void UpdateTransform()
    {
        // Apply some smoothing to both the position and rotation. The raw input data is quite noisy.
        _smoothedRotation = Vector3.SmoothDamp(_smoothedRotation, _rotation, ref _currentRotVelocity, SmoothTime);
        Model.rotation = Quaternion.Euler(_smoothedRotation);
        Model.position = Vector3.SmoothDamp(Model.position, _position - _userInitialPosition, ref _currentPosVelocity, SmoothTime);
    }


    private void UpdateAUs()
    {
        // Smooth the animation units as the data received directly by the kinect is noisy.
        _animUnits.Au012 = Vector3.SmoothDamp(_animUnits.Au012, _targetAnimUnits.Au012, ref _currentAuVelocity.Au012, SmoothTime);
        _animUnits.Au345 = Vector3.SmoothDamp(_animUnits.Au345, _targetAnimUnits.Au345, ref _currentAuVelocity.Au345, SmoothTime);

        UpdateLipRaiser(_animUnits.LipRaiser);
        UpdateJawLowerer(_animUnits.JawLowerer);
        UpdateLipStretcher(_animUnits.LipStretcher);
        UpdateBrowLowerer(_animUnits.BrowLowerer);
        UpdateLipCornerDepressor(_animUnits.LipCornerDepressor);
        UpdateOuterBrowRaiser(_animUnits.OuterBrowRaiser);
    }

    #region Specific AU Updates
    /**
     * Note on animating the mask:
     * In this example we use a pose animation technology to animate the mask.
     * You could just as easily use unity animation system and control the animation timeline yourself,
     * or control the animations by direct manipulation of the bones. Whatever suits you best.
     * 
     * In general, we have one or two specific pose (animation) per Animation Unit (AU).
     * This setup was simply motivated by its ease of use.
     * 
     * Finally, you will notice that sometimes we do not use the full range or that we use some magic numbers.
     * These were hand tweaked so that we could better exaggerate certain expressions. In that case we decided
     * to benefit the user experience instead of plain data accuracy.
     */
    private void UpdateLipRaiser(float coef)
    {
        _morphCoefs[0] = coef;
        ModelAnimator.SetWeight(0, coef);
    }

    private void UpdateJawLowerer(float coef)
    {
        // The jaw lowerer animation unit has no negative range.
        _morphCoefs[1] = Mathf.Max(0, coef);
        ModelAnimator.SetWeight(1, Mathf.Max(0, coef));
    }

    private void UpdateLipStretcher(float coef)
    {
        // The lip stretcher animation has 2 animations simply because it was easier to design that way.
        // One represents the Animation Unit range [-1, 0] and the other is for [0, 1].
        _morphCoefs[2] = Mathf.Clamp(-1.5f * coef, 0, 1.5f);
        _morphCoefs[3] = Mathf.Clamp(coef, -0.7f, 1);
        ModelAnimator.SetWeight(2, Mathf.Clamp(-1.5f * coef, 0, 1.5f));
        ModelAnimator.SetWeight(3, Mathf.Clamp(coef, -0.7f, 1));
    }

    private void UpdateBrowLowerer(float coef)
    {
        _morphCoefs[4] = coef;
        ModelAnimator.SetWeight(4, coef);
    }

    private void UpdateLipCornerDepressor(float coef)
    {
        _morphCoefs[5] = Mathf.Clamp(coef, -0.15f, 1);
        ModelAnimator.SetWeight(5, Mathf.Clamp(coef, -0.15f, 1));
    }

    private void UpdateOuterBrowRaiser(float coef)
    {
        _morphCoefs[6] = coef;
        ModelAnimator.SetWeight(6, coef);
    }
    #endregion


    private void CheckForSpecialPoses()
    {
        if (IsHappy())
        {
            if (!Flames.activeSelf)
                Flames.SetActive(true);
        }
        else
        {
            if (Flames.activeSelf)
                Flames.SetActive(false);
        }
    }

    private bool IsHappy()
    {
        return _morphCoefs[1] > 0.6f && _morphCoefs[6] > 0.35f;
    }



}


KinectBinder.cs


using System;
using System.Diagnostics;
using DataConverter;
using UnityEngine;
using Debug = UnityEngine.Debug;

/// <summary>
/// The kinect binder creates the necessary setup for you to receive data for the kinect face tracking system directly.
/// 
/// Simply subscribe to the FaceTrackingDataReceived event to receive face tracking data.
/// Check FaceTrackingExample.cs for an example.
/// 
/// VideoFrameDataReceived and DepthFrameDataReceived events will give you the raw rgb/depth data from the kinect cameras.
/// Check ImageFeedback.cs for an example.
/// </summary>
public class KinectBinder : MonoBehaviour
{
    public delegate void FaceTrackingDataDelegate(float au0, float au1, float au2, float au3, float au4, float au5, float posX, float posY, float posZ, float rotX, float rotY, float rotZ);
    public event FaceTrackingDataDelegate FaceTrackingDataReceived;

    public delegate void VideoFrameDataDelegate(Color32[] pixels);
    public event VideoFrameDataDelegate VideoFrameDataReceived;

    public delegate void DepthFrameDataDelegate(short[] pixels);
    public event DepthFrameDataDelegate DepthFrameDataReceived;

    public delegate void SkeletonDataDelegate(JointData[] jointsData);
    public event SkeletonDataDelegate SkeletonDataReceived;

    private float _timeOfLastFrame;
    private int _frameNumber = -1;
    private int _processedFrame = -1;
    private Process _otherProcess;

    private int _kinectFps;
    private int _kinectLastFps;
    private float _kinectFpsTimer;
    private bool _hasNewVideoContent;
    private bool _hasNewDepthContent;
    private string _faceTrackingData;
    private string _skeletonData;
    private short[] _depthBuffer;
    private Color32[] _colorBuffer;
    private JointData[] _jointsData;

    // Use this for initialization
    void Start()
    {
        BootProcess();
    }

    private void BootProcess()
    {
        const string dataTransmitterFilename = "KinectDataTransmitter.exe";
        string path = Application.dataPath + @"/../Kinect/";

        _otherProcess = new Process();
        _otherProcess.StartInfo.FileName = path + dataTransmitterFilename;
        _otherProcess.StartInfo.UseShellExecute = false;
        _otherProcess.StartInfo.CreateNoWindow = true;
        _otherProcess.StartInfo.RedirectStandardInput = true;
        _otherProcess.StartInfo.RedirectStandardOutput = true;
        _otherProcess.StartInfo.RedirectStandardError = true;
        _otherProcess.OutputDataReceived += (sender, args) => ParseReceivedData(args.Data);
        _otherProcess.ErrorDataReceived += (sender, args) => Debug.LogError(args.Data);

        try
        {
            _otherProcess.Start();
        }
        catch (Exception)
        {
            Debug.LogWarning(
                "Could not find the kinect data transmitter. Please read the readme.txt for the setup instructions.");
            _otherProcess = null;
            enabled = false;
            return;
        }
        _otherProcess.BeginOutputReadLine();
        _otherProcess.StandardInput.WriteLine("1"); // gets rid of the Byte-order mark in the pipe.
    }

    void ParseReceivedData(string data)
    {
        if (Converter.IsFaceTrackingData(data))
        {
            _faceTrackingData = data;
        }
        else if (Converter.IsSkeletonData(data))
        {
            _skeletonData = data;
        }
        else if (Converter.IsVideoFrameData(data))
        {
            _hasNewVideoContent = true;
        }
        else if (Converter.IsDepthFrameData(data))
        {
            _hasNewDepthContent = true;
        }
        else if (Converter.IsPing(data))
        {
            if (_otherProcess != null && !_otherProcess.HasExited)
            {
                _otherProcess.StandardInput.WriteLine(Converter.EncodePingData());
            }
        }
        else if (Converter.IsError(data))
        {
            Debug.LogError(Converter.GetDataContent(data));
        }
        else if (Converter.IsInformationMessage(data))
        {
            Debug.Log("Kinect (information message): " + Converter.GetDataContent(data));
        }
        else
        {
            Debug.LogWarning("Received this (unknown) message from kinect: " + data);
        }
    }

    void Update()
    {
        if (_otherProcess == null || _otherProcess.HasExited)
        {
            Debug.LogWarning("KinectDataTransmitter has exited. Trying to reboot the process...");
            BootProcess();
        }

        bool hasNewData = (_frameNumber > _processedFrame);

        if (hasNewData)
        {
            _kinectFps += _frameNumber - _processedFrame;
            _processedFrame = _frameNumber;
        }

        if (_hasNewVideoContent)
        {
            _hasNewVideoContent = false;
            ProcessVideoFrame(Converter.GetVideoStreamData());
        }

        if (_hasNewDepthContent)
        {
            _hasNewDepthContent = false;
            ProcessDepthFrame(Converter.GetDepthStreamData());
        }

        if (_faceTrackingData != null)
        {
            string data = _faceTrackingData;
            _faceTrackingData = null;
            ProcessFaceTrackingData(Converter.GetDataContent(data));
        }

        if (_skeletonData != null)
        {
            string data = _skeletonData;
            _skeletonData = null;
            ProcessSkeletonData(Converter.GetDataContent(data));
        }

        UpdateFrameCounter();
    }

    private void ProcessDepthFrame(byte[] bytes)
    {
        if (DepthFrameDataReceived == null || bytes == null)
            return;

        if (_depthBuffer == null || _depthBuffer.Length != bytes.Length/2)
        {
            _depthBuffer = new short[bytes.Length / 2];
        }
        for (int i = 0; i < _depthBuffer.Length; i++)
        {
            int byteIndex = i * 2;
            _depthBuffer[i] = BitConverter.ToInt16(bytes, byteIndex);
        }

        DepthFrameDataReceived(_depthBuffer);
    }

    private void ProcessVideoFrame(byte[] bytes)
    {
        if (VideoFrameDataReceived == null || bytes == null)
            return;

        if (_colorBuffer == null || _colorBuffer.Length != bytes.Length / 4)
        {
            _colorBuffer = new Color32[bytes.Length / 4];
        }

        for (int i = 0; i < _colorBuffer.Length; i++)
        {
            int byteIndex = i*4;
            _colorBuffer[i] = new Color32(bytes[byteIndex+2], bytes[byteIndex+1], bytes[byteIndex], byte.MaxValue);
        }

        VideoFrameDataReceived(_colorBuffer);
    }

    private void ProcessFaceTrackingData(string data)
    {
        if (FaceTrackingDataReceived == null)
            return;

        _frameNumber++;
        float au0, au1, au2, au3, au4, au5, posX, posY, posZ, rotX, rotY, rotZ;
        Converter.DecodeFaceTrackingData(data, out au0, out au1, out au2, out au3, out au4, out au5, out posX,
                                         out posY, out posZ, out rotX, out rotY, out rotZ);

        FaceTrackingDataReceived(au0, au1, au2, au3, au4, au5, posX, posY, posZ, rotX, rotY, rotZ);
    }

    private void ProcessSkeletonData(string data)
    {
        if (SkeletonDataReceived == null)
            return;

        _frameNumber++;
        if (_jointsData == null)
        {
            _jointsData = new JointData[(int)JointType.NumberOfJoints];
        }
        Converter.DecodeSkeletonData(data, _jointsData);
        SkeletonDataReceived(_jointsData);
    }

    private void UpdateFrameCounter()
    {
        _kinectFpsTimer -= Time.deltaTime;
        if (_kinectFpsTimer <= 0f)
        {
            _kinectLastFps = _kinectFps;
            _kinectFps = 0;
            _kinectFpsTimer = 1;
        }
    }

    void OnGUI()
    {
        if (Event.current.type != EventType.Repaint)
            return;

        GUI.color = Color.white;
        GUI.Label(new Rect(5, 5, 250, 30), "Kinect FPS: " + _kinectLastFps);
        if (_kinectLastFps == 0)
        {
            GUI.Label(new Rect(5, 25, 400, 30), "(Kinect is not tracking... please get in range.)");
        }

    }

    void OnApplicationQuit()
    {
        ShutdownKinect();
    }

    private void ShutdownKinect()
    {
        if (_otherProcess == null)
            return;

        try
        {
            Process.GetProcessById(_otherProcess.Id);
        }
        catch (ArgumentException)
        {
            // The other app might have been shut down externally already.
            return;
        }

        try
        {
            _otherProcess.CloseMainWindow();
            _otherProcess.Close();
        }
        catch (InvalidOperationException)
        {
            // The other app might have been shut down externally already.
        }
    }
}


I want to apply facial animation to the head of a whole 3d character.

In the case of FaceShift I haven’t got a clue how to stream the data to UE4.

There are only 8 expressions? Is this a joke? Only 8?
Man, this is just sad…

Ehm, did you just realize that you are sharing code from something which you need to buy to have access to?

Hire a programmer or start studying programming, and then you can start messing with the SDK… or simply abandon the idea of realtime facial tracking for now; it’s useless.

There are 8 expressions by default, so I want to make more expressions. I just want to understand the pipeline. It is a totally free example, so it isn’t sad. What is sad is that there is no one who can give useful advice.
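For example, following the pattern already used in FaceTrackingExample.cs (a PoseAnimator with one weight per authored pose), adding a ninth expression roughly means authoring the new pose on the model and then driving one more weight slot. A hypothetical sketch, with pose index 7 chosen arbitrarily:


using UnityEngine;

// Hypothetical sketch: drives one extra pose slot on the same PoseAnimator
// that FaceTrackingExample uses. The new pose (index 7 here) must first be
// authored on the .fbx / set up in the PoseAnimator; this script only sets
// its weight from whatever coefficient you feed it (another AU, a value of
// your own, etc.).
public class ExtraExpressionDriver : MonoBehaviour
{
    public PoseAnimator ModelAnimator;     // same animator the example uses
    public int PoseIndex = 7;              // index of the newly authored pose
    [Range(0f, 1f)] public float Weight;   // drive this from your tracking data

    void Update()
    {
        ModelAnimator.SetWeight(PoseIndex, Weight);
    }
}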

You’re looking for the easy solution, aren’t you?

As I said, no one has made a plugin for Faceshift to work with UE4… it’s simple… there are plenty of examples of streaming content to UE4 and there are lots of examples of people streaming Faceshift to other software, so what you need to do is find a way to combine them… the examples are free and available to everyone, so there is your starting point… but I already understood what you’re looking for, and I guess that you’ll just keep asking for advice.

Good luck

How can I do face tracking like this? I am self-taught. https://www.youtube.com/watch?v=2dTvMR7TVck
Thank you