Update: I think I found the answer, but it's a lot to read; it could be useful if you're looking to make high-quality facial animations like those in Injustice 2. If anyone can help me find the "scientific research" behind the facial rig this article discusses (all of the bones and morphs I'm missing), that would be a great help. Link: The tech behind Injustice 2's crazy facial animation - CNET
Original Post continued:
I'm an indie dev with about $30,000 to spend on equipment (if required), working on a VR game that will also have desktop support. From their websites, it appears some of the companies doing markerless facial tracking, like Snappers, might only work with large AAA studios, because I could never get a response back. If you don't know what Snappers' tech is, here is a link:
Anyway, my goal is to make the characters in our game as realistic as possible (while getting past the uncanny valley). We obviously don't want a repeat of what happened with the facial animations in Mass Effect Andromeda; we're looking for something more along the lines of Injustice 2 and are in the research phase for this right now.
I have already used Agisoft Photoscan with a large camera array to scan in a model, with cleanup in ZBrush afterwards, and got excellent results. For research purposes, a friend of mine took a look at the Harley Quinn face meshes from Injustice 2 (research purposes only; he deleted them shortly after, and I'm not even sure how he got them). He studied them to see how her face was set up, and it looks to have about 80 bones in the face alone, each affecting only a small area of the mesh at a time (see the skinning sketch below for what that looks like in data). NetherRealm said this was based on scientific research, but we could never find that research or any SIGGRAPH paper.
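For what it's worth, "each bone affecting only a small area" is just what sparse skinning weights look like in practice. Here's a minimal linear blend skinning sketch to show what I mean; all the bone names, weights, and transforms are made up for illustration, this is the standard LBS formula and not NetherRealm's actual setup:

```python
# Minimal linear blend skinning (LBS) sketch.
# Assumption: each vertex stores weights for only a few nearby face bones
# (sparse influence), which is what "each bone affects only a small area"
# means in the mesh data. All names and numbers here are hypothetical.

import numpy as np

def skin_vertex(rest_pos, influences, bone_matrices):
    """Deform one vertex: weighted sum of each bone's transform applied
    to the vertex's rest position. Weights should sum to 1."""
    p = np.append(rest_pos, 1.0)       # homogeneous coordinates
    out = np.zeros(4)
    for bone, weight in influences:
        out += weight * bone_matrices[bone] @ p
    return out[:3]

# Hypothetical data: a lip vertex influenced by just two of the ~80 face bones.
bone_matrices = {
    "lip_corner_L": np.eye(4),
    "jaw": np.eye(4),
}
bone_matrices["jaw"][1, 3] = -0.2      # jaw bone translated down 0.2 units

vertex_rest = np.array([1.0, 2.5, 3.0])
influences = [("lip_corner_L", 0.7), ("jaw", 0.3)]
print(skin_vertex(vertex_rest, influences, bone_matrices))
```

With ~80 bones and weights this sparse, each bone only pulls on its local patch of the face, which matches what my friend saw in the mesh.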
It appears they spent a lot more time on Harley Quinn's face than on the other models, even though the others were also decent. Here is a link to what we would like to achieve using a markerless capture system of some sort: https://kotaku.com/injustice-2s-faci…nny-1795259070
I read an article on CNET, an interview with NetherRealm, in which they say they used photogrammetry and markerless facial tracking, but they made no mention of Snappers' tech, even though the Snappers website shows they did in fact use it. In the picture you can see they use an OptiTrack system (which we are also looking into) for the body motion capture, plus cameras of some sort to capture the faces. Here is the CNET article with that interview: https://www.cnet.com/news/how-injust…ng-game-genre/
What I'm getting at is that I have no idea where to go from here, because everything feels "locked away" and secretive; I have to dig through websites, analyze small thumbnails, and have friends analyze AAA games' models to get anywhere.
How do you feed markerless facial data from a camera into a facial rig that has 80 bones? (My rough understanding of that part is sketched below.) Where can that hardware or software be purchased? Is Snappers selling hardware to do this, or is it a service where they set up the facial rigs for you and tell you what hardware to buy? And where is the scientific research on facial rigs that NetherRealm was talking about?
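On the first question, my current understanding (pieced together from reading around, not anything confirmed by Snappers or NetherRealm) is that markerless trackers typically output per-frame FACS-style action-unit or blendshape weights, and the rig then maps each weight onto small offsets for the relevant bones. A minimal sketch of that mapping, with every name and number invented for illustration:

```python
# Sketch of driving a bone-based facial rig from markerless capture output.
# Assumptions (not from any specific vendor): the tracker emits per-frame
# action-unit weights in [0, 1], and each face bone stores a rest position
# plus a small positional offset per action unit, authored by a rigger.
# `BONE_OFFSETS`, `REST_POSITIONS`, and the AU names are all hypothetical.

import numpy as np

# For each bone: the delta it should move when a given action unit
# is fully active (weight = 1.0).
BONE_OFFSETS = {
    "jaw":          {"jawOpen":       np.array([0.0, -1.2, 0.3])},
    "lip_corner_L": {"mouthSmile_L":  np.array([0.4,  0.2, 0.0])},
    "brow_inner_L": {"browInnerUp":   np.array([0.0,  0.5, 0.0])},
}

REST_POSITIONS = {
    "jaw":          np.array([0.0, 2.0, 1.0]),
    "lip_corner_L": np.array([1.1, 2.5, 3.0]),
    "brow_inner_L": np.array([0.6, 5.0, 3.2]),
}

def solve_bone_positions(tracker_weights):
    """Blend each bone's rest position with its weighted per-AU offsets."""
    out = {}
    for bone, rest in REST_POSITIONS.items():
        pos = rest.copy()
        for au, offset in BONE_OFFSETS.get(bone, {}).items():
            pos += tracker_weights.get(au, 0.0) * offset
        out[bone] = pos
    return out

# One example frame from a hypothetical markerless tracker:
frame = {"jawOpen": 0.7, "mouthSmile_L": 0.3, "browInnerUp": 0.1}
print(solve_bone_positions(frame))
```

If that mental model is right, the expensive part isn't the solve itself, it's authoring the per-AU bone offsets for all ~80 bones, which may be exactly the service companies like Snappers are selling. Corrections welcome from anyone who has actually worked with these pipelines.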
I think if this information were out there for smaller studios and indie devs to find, it would benefit everyone and make a huge impact.