Download

Paint Face - realtime webcam material for remote Actor Performance Facial Capture onto CG Character

Hi Unreal.

I am looking for a project lead to take over a new facial-capture concept, in exchange for unlimited lifetime access to the tool once a savvy tech makes this proof of concept work. I am willing to cede all rights to you, so you can exploit this as your own venture.

Why? I write screenplays; I don’t have the inclination to develop tech products. I want a tool to make CG movies faster without a traditional facial capture rig. I want the actual actor’s face in a CG world.
I want a tool that indies can use for free with up to two actors, one director and one DP in realtime. Your market might be charging a small fee for access to directing three or more actors in realtime.

This is a new kind of facial capture with a regular webcam, but with almost no facial rigging. I’ll give you an idea of the vision.

Let’s call it Paint Face for now - rough concept screenshot below.
Imagine directing your actors over a webcam on a virtual film set in Unreal. How? Send them a small realtime multiplayer build of a specific scene that hooks into their webcam. The engine tracks only six facial parameters: face tilt, face yaw, face turn, face forward/back, jaw and brow. More can be added later, but those six will do for now.
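Those six parameters could travel as one tiny packet per frame. A minimal sketch in plain C++ of what that might look like - the `FacePose` name, the int16 quantization and the parameter ranges are my assumptions, not part of the concept:

```cpp
#include <array>
#include <cstdint>
#include <cmath>

// Hypothetical per-frame face packet: six parameters, quantized to
// int16 so a frame costs only 12 bytes on the wire.
struct FacePose {
    float tilt, yaw, turn;   // head rotation, degrees
    float forwardBack;       // toward/away from camera, normalized -1..1
    float jaw, brow;         // open/raise amount, normalized 0..1
};

// Assumed ranges for quantization (degrees for rotations, unit ranges otherwise).
constexpr float kRange[6] = {90.f, 90.f, 90.f, 1.f, 1.f, 1.f};

std::array<int16_t, 6> pack(const FacePose& p) {
    const float v[6] = {p.tilt, p.yaw, p.turn, p.forwardBack, p.jaw, p.brow};
    std::array<int16_t, 6> out{};
    for (int i = 0; i < 6; ++i)
        out[i] = static_cast<int16_t>(std::lround(v[i] / kRange[i] * 32767.f));
    return out;
}

FacePose unpack(const std::array<int16_t, 6>& q) {
    float v[6];
    for (int i = 0; i < 6; ++i)
        v[i] = q[i] / 32767.f * kRange[i];
    return {v[0], v[1], v[2], v[3], v[4], v[5]};
}
```

At 30 fps that is well under half a kilobyte per second per actor, which is why a plain home internet connection should cope.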

The question is how to get facial data from this. By creating a MATERIAL from the actor’s webcam feed, placed onto the face of the CG actor in the scene.
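One way to place the feed onto the face is to remap the tracked face region of the webcam frame into the 0..1 UV space of the head material. A sketch of that remap, assuming the tracker hands us a pixel-space bounding box (the `FaceBox` name and layout are illustrative):

```cpp
#include <utility>

// Hypothetical pixel-space bounding box of the actor's face in the webcam frame.
struct FaceBox { float x, y, w, h; };  // top-left corner + size, in pixels

// Map a UV coordinate on the CG head (0..1) to the matching UV inside the
// webcam texture, so the material samples only the face region of the feed.
std::pair<float, float> headUvToWebcamUv(float u, float v,
                                         const FaceBox& box,
                                         float frameW, float frameH) {
    float wu = (box.x + u * box.w) / frameW;
    float wv = (box.y + v * box.h) / frameH;
    return {wu, wv};
}
```

In Unreal this math would live in the face material itself, with the box fed in as scalar parameters each frame.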

How would this be used? A multiplayer Unreal scene, say with 3 actors and 1 director, is sent to the participants. The actors can see each other’s CG counterparts but can’t see the director. The director can place markers in the scene, mark out curves and lines for an actor to follow, and everyone can talk via streaming audio in realtime. The director can make himself visible when needed, so the actors can see him between takes.

When it comes to recording, each station records a local copy of the mocap data and uploads it to the director later, in addition to the realtime mocap stream sent during the multiplayer session. The two streams can then be compared and any holes caused by latency filled in.
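The comparison step could be as simple as a keyed merge: for every frame number, prefer the locally recorded sample and fall back to the realtime stream wherever the local copy has a hole. A sketch, with the per-frame mocap sample reduced to a single float for brevity:

```cpp
#include <map>

// Keyed by frame number; the float stands in for a full mocap sample.
using Stream = std::map<int, float>;

// Prefer the local recording; fill holes from the realtime stream.
Stream fillHoles(const Stream& local, const Stream& realtime) {
    Stream merged = local;
    for (const auto& [frame, sample] : realtime)
        merged.emplace(frame, sample);  // inserts only if frame is missing
    return merged;
}
```

`emplace` never overwrites an existing key, which is exactly the "local wins, realtime fills the gaps" rule.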

Usage case: remote webcam rehearsals, auditions, short films.

Perk: if pulled off right, the nuances of acting come through, because it is the webcam feed of a REAL LIFE actor’s face, not a CG face.

Other features: an aftermarket LED array for flat lighting of actors, so an actor’s face can be CG-lit and blended into the scene with shaders at a later stage.

Issues: getting key features of the real actor onto a simplified CG head, such as the nose, brow and jaw.

Possible libraries that can handle some of the heavy lifting: perhaps OpenTrack can track the head movement. Head movement could be blended with keyboard/mouse input. For example, “W” is pressed to move forward and the mouse sets BODY DIRECTION, but as the actor turns his head, the CG character starts to look in the direction of the turn. If the head keeps looking one way (ZERO = HEAD FULLY FORWARD), it starts affecting body turn, until the head stops (RETURNS TO ZERO).
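That head-to-body blending can be sketched as a deadzone plus a proportional turn rate: body yaw keeps drifting while the head is held off-center and stops the moment the head returns to zero. The deadzone and gain values below are made-up tuning numbers:

```cpp
#include <cmath>

// Degrees of head yaw tolerated before the body starts to follow.
constexpr float kDeadzoneDeg = 15.f;
// Body turn speed per degree of head yaw beyond the deadzone.
constexpr float kGain = 2.f;

// Returns body yaw rate in degrees/second. Zero while the head is
// near center (ZERO = HEAD FULLY FORWARD), proportional beyond that.
float bodyTurnRate(float headYawDeg) {
    float excess = std::fabs(headYawDeg) - kDeadzoneDeg;
    if (excess <= 0.f) return 0.f;
    return std::copysign(excess * kGain, headYawDeg);
}
```

Each tick, the character controller would add `bodyTurnRate(headYaw) * deltaTime` to the mannequin’s yaw on top of the mouse input.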

Where markers are placed, a GUI icon goes green when the character hits a mark, so the actor does not have to look at the floor, etc. Drop me a PM for more info. I want to direct this as I would direct real actors in one of my films.
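Hitting a mark is just a distance test against the marker the director dropped; the icon flips green while the actor stands inside its radius. A tiny sketch (the radius value is an assumption):

```cpp
struct Vec2 { float x, y; };  // actor/marker position on the stage floor

// True when the actor is standing on the mark, i.e. the icon should go green.
bool onMark(const Vec2& actor, const Vec2& mark, float radius) {
    float dx = actor.x - mark.x;
    float dy = actor.y - mark.y;
    return dx * dx + dy * dy <= radius * radius;  // avoids a sqrt call
}
```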

If you’ve got a heart for the end-user and want to go full GNU GPL on this project, you are going to win me over. I don’t care much for the money. I do not want this tool released to the BIG STUDIO SHARKS who put VFX companies out of business. I do want the underdog filmmakers to have a cool tool they can just plug in and start directing with any actor behind a webcam, anywhere in the world.

If you need clarity, pm!

Drop me a message to start the discussion. You heard it here first on Unreal Forums.

Regards

Lee

Human Being, Screenwriter, Indie CG Filmmaker
compusolve.rsa@gmail.com

FAQ

Q: Why would anyone want to build something so stupid?
A: Because it will run on any stupid generic webcam without CUDA, which means the number of actors you can reach this way is as big as the number of people with a moderately powerful laptop, a webcam and an internet connection. Next question.

Q: Why would anyone want to build something so stupid?
A: How many actors can afford an iPhone X? Dumb question, next.

Q: Why would anyone want to build something so stupid?
A: You’re not a film maker, you’re a programmer or a vfx artist? Next question.

Q: Why would anyone want to build something so stupid?
A: Actor access via webcam. Some film makers can tell a story this way. Next question.

Q: Why would anyone want to build something so stupid?
A: Covid-19. Next question.

Q: Why would anyone want to build something so stupid?
A: Me write stories, then me produce stories, then me direct stories. Me prefer to work with real actors, but me locked in my house because of teensy weensy virus.

Q: Why would anyone want to build something so stupid?
A: Barrier to Entry is minimized. Next question.

Q: Why would anyone want to build something so stupid?
A: I will NEVER own an iPhone X. Not by choice, they are just too effing expensive.

Q: Why would anyone want to build something so stupid?
A: I want ONE copy of my face firmly planted in reality. I do NOT want a copy of my face on Apple’s servers. That question was REALLY REALLY dumb.

How do you plan to do the mocap mentioned? Suits and a webcam at the same time? Or are you referring to face capture as mocap? If so, where does the body animation come from?
If this is body mocap, then how is it cheaper than an iPhone X?
And you think your mockup looks good for the movies you’re planning to make?

Proof of concept for now. No mocap; just use the third-person mannequin as the character and slap the webcam feed on its face. The mannequin is controlled as usual via WASD and mouse.

If that works, the next step is to get something like OpenTrack to track the head movement and, based on how the head moves, affect the body of the mannequin while it is still controlled via WASD/mouse. That’s a good starting point.

Start here: webcam to Unreal material. https://youtu.be/FpUkS0m7_no
Then you need to modify the mannequin model to take a different material for the face, or attach a sphere over the head.
It will rely on the person keeping their face toward the webcam and not moving their head.

Pretty much yeah.

BTW, this YouTube tutorial is not reproducible in UE 4.24. As of that version, UE forces you to select a stream number and defaults to stream 1, even though stream 0 is visible in the list. A bit of clumsy engineering for an otherwise cool engine. See screenshot.

I’ve got no desire to spend my life learning Blueprint hell, considering how many Blueprints get broken between versions of Unreal. I write and direct, and tearing myself away from those two things just wastes my time. That is why I am offering this concept to someone savvy enough to take it over and make it their baby; if they want to productize it, they get full control, and I get a tool for life to tell stories as a CG filmmaker.