Hi everyone, I am new to all this and looking for a quick starting point without having to learn everything from scratch. I want to create short educational videos using a realistic 3D character to present content that I record on video. Is there a place I can get one that is 99% set up on the technical side to do this? What ‘set up’ would I be looking for in a character for this purpose? Thanks in advance, Simon
Hi Simon,
I’m not sure I fully understand the portion about video. If you’re looking to extract motion from a video, there are a few ways, but all have caveats.
One of the easiest ways to get a pre-made 3D character would be MetaHuman, driven with MetaHuman Animator in Sequencer. Alternatively, any store asset that supports the ARKit FACS blendshapes should work.
Animating the face can largely be done with the Live Link Face app for iOS, and the body, if wanted, can be animated in Sequencer via Control Rig.
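If it helps to see what the captured data actually is: the Live Link Face app saves a CSV of the blendshape curves alongside the video for each take. Here is a minimal Python sketch to peek at one (the filename and exact column names are assumptions on my part, so check your own take folder):

```python
import csv

# Path to a take's CSV as saved by the Live Link Face app.
# Filename and column names here are assumptions -- check your own take.
TAKE_CSV = "MySlate_3_iPhone.csv"

with open(TAKE_CSV, newline="") as f:
    for i, row in enumerate(csv.DictReader(f)):
        # Each row is one frame: a timecode plus the ARKit blendshape
        # weights (roughly 52 of them), each in the 0..1 range.
        weights = {}
        for name, value in row.items():
            try:
                weights[name] = float(value)
            except (TypeError, ValueError):
                pass  # skip non-numeric columns like Timecode
        # Drop the shape-count column if present so it doesn't top the sort.
        for junk in ("BlendShapeCount", "BlendshapeCount"):
            weights.pop(junk, None)
        # Show the three most active shapes on this frame.
        top = sorted(weights.items(), key=lambda kv: kv[1], reverse=True)[:3]
        print(row.get("Timecode", i), top)
```

Once you can read the curves like this, retargeting them onto any character with matching blendshapes is mostly a matter of mapping names.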
Hi Chris, thanks for the reply, here is a little bit more information. I want to record the content myself and have a character take the speech, and ideally the facial movement, to make the video. It needs to be gender neutral, so I am thinking of a generic ‘friendly’ alien-type character. I don’t have an iPhone, so that will need to be taken into consideration. I have Blender/Unreal/DaVinci Resolve etc. to use as tools (slow learning curve, obviously), but if I can get the main character pretty much all set up I can work the rest out as I go. I am happy to pay for a good asset (within reason) to get this all going. Thanks again, Simon
The term of art for this is facial capture or performance capture, depending on the scope of the captured movement. There should be tooling available for most 3D software packages that supports doing this to some degree.
The quality of the capture is largely determined by the types of data being processed. Stereo video with painted tracking markers on the face is still, at least as far as I’m aware, the gold standard. The Live Link Face app uses depth information in addition to the video to increase quality over what would be possible with video alone.
In practice, that just means that there may be more cleanup involved with some techniques than others.
There should be a number of assets on various stores that support Apple’s ARKit blendshapes, which UE has very solid support for. Failing that, the basis of ARKit’s blendshapes is the standardised Facial Action Coding System (FACS), so almost any asset that adheres to that system should work.
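If you want to sanity-check a store asset after importing it into Blender, a quick sketch like this will show what’s missing. Run it in Blender’s Scripting tab with the character mesh active; only a subset of the 52 ARKit shape names is listed here, and note that some assets ship them under prefixed or renamed variants:

```python
# Checks the active object's shape keys against the ARKit blendshape set.
# The full list of names is in Apple's ARFaceAnchor.BlendShapeLocation docs;
# only a handful are included here to keep the sketch short.
import bpy

ARKIT_SHAPES = {
    "eyeBlinkLeft", "eyeBlinkRight", "jawOpen", "mouthSmileLeft",
    "mouthSmileRight", "browInnerUp", "cheekPuff", "tongueOut",
    # ...add the remaining ARKit shape names from Apple's docs...
}

obj = bpy.context.active_object
keys = obj.data.shape_keys
have = {kb.name for kb in keys.key_blocks} if keys else set()

missing = ARKIT_SHAPES - have
print(f"{obj.name}: {len(have)} shape keys found")
print("Missing ARKit shapes:", sorted(missing) or "none")
```

If the names are all there (or mappable), the asset should pick up ARKit-style capture data with little extra work.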