Indie dev researching Snappers facial motion capture and OptiTrack. Hitting a wall. (Update: found the info I needed: Faceware)

Update: I think I found the answer, but this is a lot to read; it could be useful if you are looking to make high-quality facial animations like in Injustice 2. If anyone can help me find the "scientific research" behind the facial rig this article discusses (all the bones and morphs I'm missing), that would be a great help… Link: The tech behind Injustice 2's crazy facial animation - CNET

Original Post continued:

I am an indie dev with about $30,000 to spend on equipment (if required), working on a VR game that will also have desktop support. From their websites it appears some of these companies with markerless facial tracking, like Snappers, might only work with large AAA studios, because I could never get a response back. If you don't know what Snappers tech is, here is a link:

Anyway, my goal here is to make the characters in our game as realistic as possible (and past the uncanny valley). We don't want a repeat of Mass Effect: Andromeda's facial animations, of course; we're looking for something more along the lines of Injustice 2 and are in the research phase right now.

I have already used Agisoft Photoscan with a large camera array to scan in a model, with cleanup in ZBrush afterwards, with excellent results. For research purposes a friend of mine took a look at the Harley Quinn face meshes from Injustice 2 (research purposes only; he deleted them shortly after, and I'm not even sure how he got them). He studied them to see how the face was set up on her character, and it looks to have about 80 bones in the face alone, each affecting only a small area of the mesh at a time. (NetherRealm said this was based on scientific research, but we could never find that research or a SIGGRAPH paper.)

It appears they spent a lot more time on Harley Quinn's face than on the other models, even though the others were also decent. Here is a link to what we would like to achieve using a markerless capture system of some sort: https://kotaku.com/injustice-2s-faci…nny-1795259070

I read a CNET interview with NetherRealm in which they said they did utilize photogrammetry and markerless facial tracking, but they made no mention of using Snappers tech, even though the Snappers website shows they did in fact use it. In the picture you can see they use an OptiTrack system (which we are also looking into) for the body motion capture, plus cameras of some sort to capture the faces. Here is the CNET article with that interview: https://www.cnet.com/news/how-injust…ng-game-genre/

What I am getting at here is that I have no idea where to go from here, because everything feels "locked away" and secretive, and I have to dig through websites, analyze small thumbnails, and have friends analyze AAA games' models to get anywhere.

How do you feed markerless facial data from a camera into a facial rig that has 80 bones? Where can that hardware or software be purchased? Is Snappers selling hardware to do this, or is it instead a service where they set up the facial rigs for you? Do they basically tell you what hardware to buy and then build your facial rigs? Where is the scientific research on facial rigs that NetherRealm was talking about?

I think if this information was out there for smaller studios or indie devs to find it would be beneficial to everyone and make a huge impact.

I might be getting somewhere. I applied for a Faceware trial, and Harley Quinn's image popped up on their site after I applied. So I am guessing that Snappers might be using http://facewaretech.com. If this works out with a 60 fps camera, that would be great. I would just need to find the "scientific research" on the facial rig that NetherRealm was talking about. I hope this post will be useful to other devs wanting to make facial animations as good as Injustice 2's. If anyone else could help me put the puzzle pieces together, that would be great.

Edit: Here is a video from Star Citizen, which recently implemented Faceware for real-time capture of players' faces: Star Citizen: Faceware Announcement - YouTube
I think the only difference here is perhaps that Harley Quinn has over 80 bones, so her face moves a lot more naturally than the Star Citizen models, but I still can't be sure…

Edit2: Found more info at http://facewaretech.com/faceware-bri…n-injustice-2/ but I'm still looking into how to set up those bones in the face, and waiting for the Faceware trial to see how you would possibly map to 80 bones.

Edit3: They have a tutorial on Faceware here: Faceware Retargeter 5.0: Retargeting Workflow with Poses - YouTube, but that is looking a little like Mass Effect: Andromeda in that video, in my opinion (a little concerning). It seems that there is definitely an "artistic" aspect to all of this if you want results similar to Harley Quinn's facial animation. I'm not sure if it's the sheer number of bones or the poses that need to be mapped in the Faceware Retargeter program. Here is another video; I am actually only really impressed by the first few games and can't figure out what makes some of them look good while others look pretty bad - YouTube. Does anyone know where the scientific research NetherRealm used to make their face bones is?
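From what I can tell, the Retargeter workflow boils down to mapping the tracker's expression values onto a library of artist-authored rig poses. Here is a minimal sketch of that idea in Python (this is not Faceware's actual API; the pose names, controls, and numbers are all invented for illustration):

```python
# Sketch of pose-based retargeting: each artist-authored pose stores the
# rig values (bone rotations, morph weights) for one extreme expression;
# the tracker outputs a weight per expression, and the final frame is a
# weighted blend of the pose deltas on top of the neutral pose.

POSES = {
    "smile":    {"jaw_open": 2.0,  "corner_L_up": 0.9, "corner_R_up": 0.9},
    "jaw_open": {"jaw_open": 25.0, "corner_L_up": 0.0, "corner_R_up": 0.0},
}
NEUTRAL = {"jaw_open": 0.0, "corner_L_up": 0.0, "corner_R_up": 0.0}

def retarget(expression_weights):
    """Blend pose deltas onto the neutral rig values.

    expression_weights: dict of pose name -> tracker weight in [0, 1].
    """
    frame = dict(NEUTRAL)
    for pose_name, weight in expression_weights.items():
        for control, target in POSES[pose_name].items():
            frame[control] += weight * (target - NEUTRAL[control])
    return frame

print(retarget({"smile": 0.5, "jaw_open": 0.2}))
# -> {'jaw_open': 6.0, 'corner_L_up': 0.45, 'corner_R_up': 0.45}
```

If that is roughly what is happening, then the quality would hinge on how many poses the artist authors and how good each one is, which would explain the variation between those videos.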

I can see that NetherRealm did some updates at one point, because their original facial animations looked pretty terrible in the alpha: INJUSTICE 2 Graphics Overhaul! New Faces, Lighting & Breakdown - YouTube

While I can only speculate from what I've seen, Snappers' rig could be your everyday facial tracking rig with some custom setup over it.

This can be accessible through Faceware and other such devices; as long as you have the tracking points lined up, you could even use a PS3 camera for such a tracking setup with your live-footage tracking software. What makes Snappers' faces look the way they do is the very wide range of scanned and then tweaked morph targets, plus changing normal maps, which get triggered based on these points. Each point acts as a joystick with multiple states; it's pretty much your facial control sliders, only triggered by a point tracked in space. It's the quality of the morph targets that makes it happen. A bone rig on its own cannot produce facial animation of a quality that would match that; it has to be a combination of morph targets and bones, or morphs only.
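To make the "point as joystick" idea concrete, here is a rough sketch (the slider names, coordinates, and ranges are made up, not Snappers' or Faceware's actual data) of how a tracked point's offset from its neutral position can drive morph-target sliders:

```python
# Sketch: one tracked point driving morph-target sliders like a 2D
# joystick. Its offset from the neutral (rest) position is remapped to
# slider weights; in image space, y grows downward, so a point that
# moves up has a negative dy.

def clamp01(x):
    return max(0.0, min(1.0, x))

def point_to_sliders(tracked, neutral, max_travel):
    """Convert a tracked 2D point into slider weights in [0, 1].

    tracked, neutral: (x, y) positions in image space.
    max_travel: pixel distance at which a slider saturates at 1.0.
    """
    dx = (tracked[0] - neutral[0]) / max_travel
    dy = (tracked[1] - neutral[1]) / max_travel
    return {
        "brow_up":    clamp01(-dy),  # point moved up -> raise the brow
        "brow_down":  clamp01(dy),
        "brow_left":  clamp01(-dx),
        "brow_right": clamp01(dx),
    }

print(point_to_sliders(tracked=(102.0, 84.0), neutral=(100.0, 90.0),
                       max_travel=12.0))
# -> {'brow_up': 0.5, 'brow_down': 0.0, 'brow_left': 0.0, 'brow_right': ~0.17}
```

The same weights could also switch or blend wrinkle normal maps once they cross a threshold, which would be the "multiple states" part.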

80 bones is not too many, and should be OK, I guess, for most games as a starting point. In one of our initial tests our bone rig for the face ended up with around 200 bones and we still couldn't get the subtle motions we needed, so in the end we decided to work with morphs all the way. UE loads tons of morphs just fine without really losing performance; the pipeline is crude, though, as adding and subtracting morph targets in UE is a pain most of the time.

Well, to get some hands-on experience, Daz Studio and Genesis from Daz3D might be a good starting point. I've been using Genesis with MotionBuilder for a few years, testing for a low-cost facial animation solution, and as a test platform it is very helpful for getting into the nuts and bolts of character design, as Genesis 3 and 8 support both cluster and morph rigging and, since 4.16, work well with Unreal 4. What I see as the hard part is having the necessary live actors to play the parts and record the dialogue tracks, as well as the material work needed to reach the level of quality in the video examples. The DS and Genesis solution might not be what you are looking for, but it should help you figure out the right questions to ask about wiring up the kit to make it all work, and MotionBuilder's voice device (audio capture) should give you a better idea of what solutions you could shoot for before spending a large sum of money.

At the very least it's worth a look-see, as Daz Studio along with the Genesis framework is free, so your start-up costs would be nothing but time and some imagination. :wink:

The actual rigging part is not that difficult (as in, it's only hard until it becomes easy), and in the case of the voice device it only requires a device driver with a relationship constraint that ties the hardware to the software using a basic naming-convention structure.
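For what the naming-convention part can look like, here is a generic sketch (plain Python, not MotionBuilder's actual SDK; the channel and control names are invented) of routing device channels onto rig controls purely by a shared prefix:

```python
# Sketch: route incoming device channels onto rig controls by naming
# convention alone, e.g. device channel "viseme_AA" drives rig control
# "head_ctrl.viseme_AA". All names here are hypothetical.

DEVICE_PREFIX = "viseme_"              # channels the voice device emits
RIG_CONTROL_FMT = "head_ctrl.{name}"   # where the rig expects each value

def route_channels(device_frame):
    """Map one frame of device output onto rig-control assignments.

    device_frame: dict of channel name -> float value.
    Returns a dict of rig control path -> value.
    """
    assignments = {}
    for channel, value in device_frame.items():
        if channel.startswith(DEVICE_PREFIX):
            assignments[RIG_CONTROL_FMT.format(name=channel)] = value
    return assignments

frame = {"viseme_AA": 0.8, "viseme_OO": 0.1, "timestamp": 4.2}
print(route_channels(frame))
# -> {'head_ctrl.viseme_AA': 0.8, 'head_ctrl.viseme_OO': 0.1}
```

As long as the device and the rig agree on the names, new channels route themselves, which is why the constraint setup stays simple.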

I have been researching this too for a while now, because it's the area I know least about. It seems bones alone won't get you far with realistic deformation, so you will definitely need a few morphs on top to correct the various poses that linear skinning can't get right. In terms of facial performance capture, I believe Faceware does a pretty good job if you carefully take care of the retargeting and cleanup. Then again, if you look at something like Uncharted 4, most of their facial performances are heavily hand-keyed, with simple face mocap as the base foundation.

So the artistic touch can indeed influence the end result drastically.
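On the corrective-morphs point above, here is a tiny sketch of the usual pose-driven fix (the control names and angle thresholds are invented; a real rig reads the driving bone's rotation from the animation system):

```python
# Sketch of a pose-space corrective: as the jaw-open bone rotates past a
# threshold, fade in a corrective morph that fixes the volume collapse
# plain linear skinning produces around the jaw.

def corrective_weight(jaw_open_deg, start_deg=10.0, full_deg=25.0):
    """Weight of a 'jaw_open_corrective' morph for a given jaw rotation.

    Zero below start_deg, ramps linearly to 1.0 at full_deg, clamped.
    """
    t = (jaw_open_deg - start_deg) / (full_deg - start_deg)
    return max(0.0, min(1.0, t))

for angle in (5.0, 17.5, 30.0):
    print(angle, "->", corrective_weight(angle))
# 5.0 -> 0.0, 17.5 -> 0.5, 30.0 -> 1.0
```

The artistic part is sculpting that corrective shape well; the trigger logic itself is trivial.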

Okay, so I spoke with Snappers, and they do indeed build the rig and can make it work in conjunction with Faceware Retargeter. The cost is $8,000 per model, and you have to send them the 16 facial scans they request from the Agisoft Photoscan scans. I am very tempted here, but they have no actual demos to send me so that I can test this with the Faceware demo.

I had some concerns because some of the Faceware demos on the Faceware site look close to Mass Effect: Andromeda. I am thinking Snappers is what will bring this to the "Harley Quinn" level, but it's a lot to gamble on a hunch and a single Snappers tech video. Hope this helps someone wanting to make AAA-looking facial animations, but I also think Snappers should provide some sort of downloadable mapped to Faceware to demo this, even if it's just an .exe file to see how it would work.