1 and 2.
Bake as in “freeze in place” with whatever morphs applied.
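If you do that bake in Blender, the core of it is small - a minimal sketch using the Python API, assuming the morphed mesh is the active object:

```python
import bpy

# Minimal sketch: bake the current shape key mix into the base mesh,
# assuming the morphed mesh is the active object.
obj = bpy.context.active_object

# Snapshot the current mix of all shape keys into one new key
mix = obj.shape_key_add(name="BakedMix", from_mix=True)

# Write the mixed positions back into the base mesh data
for vert, baked in zip(obj.data.vertices, mix.data):
    vert.co = baked.co

# Drop the shape keys entirely - the mesh is now frozen in the baked shape
obj.shape_key_clear()
```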
To answer your other question: since the cloth pipeline in 4.x doesn’t allow for morphs (blendshapes, whatever you call them), you can leverage the material to get the same effect with some work.
Make a texture that defines the area you want to offset and how far it gets offset (a heightmap, essentially), then apply it in the material as a vertex offset (World Position Offset).
The end result is much the same - you get the animation driving the object, and then a recalculation on top of it in the shader.
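For clarity, this is roughly the math that vertex offset performs per vertex - a hedged Python sketch, not actual material code; sample_height and morph_amount are illustrative names, and in Unreal you’d build the equivalent out of material nodes feeding the vertex offset input:

```python
# Illustrative only - roughly the math the material performs per vertex.
# sample_height and morph_amount are made-up names, not engine API.

def offset_vertex(position, normal, uv, morph_amount, sample_height):
    """Push a vertex along its normal by the painted heightmap value."""
    height = sample_height(uv)             # 0..1 from the authored mask/heightmap
    displacement = height * morph_amount   # scalar parameter you drive at runtime
    return tuple(p + n * displacement for p, n in zip(position, normal))
```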
The difference is that instead of hacking apart the engine to merge blendshapes/morphs with cloth, you went around it.
The downside is that the cloth simulation math is applied before the material calculations.
Now, for cloth that’s both good and bad. Going fat to skinny you have issues, since the cloth simulation was done on the fat shape. Going skinny to fat it works, because the material stretches the original animation out and the simulation will likely still fit.
For a mixed situation like you’re asking about, the best way is to split up the mesh. Cloth is separated out. Faces are separated out. Usually even body parts, if you have a modular character.
For performance’s sake you’ll likely want to implement runtime mesh merging in C++ on top of this - but the face usually has to stay separate to run morphs/expressions (mostly for ease of use: you don’t really want to do single takes with the face involved without a green-screen room and three cameras to film markers. Trust me on that.)
The workflow/steps could take me a month to write up - and chances are you won’t have the same tools, nor would I recommend anyone buy them, because they truly suck.
The easy part: make a character mesh in Blender. Weight paint it. Make it look as close to the actor as possible. Make it move well.
The harder part: rig the face with bones, and weight paint it well. Then take pictures of your actor in the 52 poses required by the system (the ARKit blendshape set), and start creating keyframes - modelling/adjusting weight paint until each pose matches its reference image as closely as possible. Once done, you bake those poses out as blendshapes (there are publicly available scripts to do this).
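The publicly available scripts differ, but the heart of them is something like this hedged sketch - it assumes the face mesh is the active object, it has a modifier named “Armature”, and each of the 52 poses is keyframed on its own frame (the names list is illustrative and truncated, not the full set):

```python
import bpy

# Hedged sketch: bake keyframed facial poses into shape keys, one per frame.
obj = bpy.context.active_object
pose_names = ["browInnerUp", "jawOpen", "eyeBlinkLeft"]  # ...all 52 in practice

for frame, name in enumerate(pose_names, start=1):
    bpy.context.scene.frame_set(frame)  # jump to the keyframed pose
    # Bake the current armature deformation into a new shape key, keeping the
    # modifier so the next pose can be baked the same way.
    bpy.ops.object.modifier_apply_as_shapekey(keep_modifier=True, modifier="Armature")
    obj.data.shape_keys.key_blocks[-1].name = name  # rename to the ARKit shape name
```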
With that done, disable the new Unreal importer BS, enable the legacy one, then import the model(s) into Unreal and assemble - do not use your main project. You just need a project with Take Recorder set up (which is essentially all the latest engine versions are good for, so the engine version isn’t critical here).
With the model assembled, add Live Link to it and set it up for face capture (likely off an iPhone 12 or newer, via the Live Link Face app).
Optionally - and I suggest no one does this, because the system/quality/cost sucks and isn’t worth it - you hook up Rokoko for body motion capture. Live Link allows different options; they all more or less suck, but some are better than others. Generally the inertial stuff doesn’t work, so Rokoko, Xsens, etc. are either a waste of money or so low fidelity that you have to put thousands into cleanup on top.
At this point you start baking your animations or face-capture takes and save them. Export them out to filter and fix in Blender.
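What “filter and fix” means depends on the take, but as an example of the kind of cleanup pass you can script, here’s a hedged sketch that runs a simple moving-average smooth over every F-curve of the active object’s action (the window size and the smooth-everything approach are illustrative choices, not a recommendation):

```python
import bpy

# Hedged example of a cleanup pass on a baked take.
action = bpy.context.active_object.animation_data.action

def smooth_fcurve(fcurve, window=2):
    keys = fcurve.keyframe_points
    original = [k.co[1] for k in keys]  # snapshot values before editing
    for i, key in enumerate(keys):
        lo, hi = max(0, i - window), min(len(original), i + window + 1)
        key.co[1] = sum(original[lo:hi]) / (hi - lo)
    fcurve.update()

for fc in action.fcurves:
    smooth_fcurve(fc)
```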
Then save from Blender (keep all your source files external to the engine), and once you’re happy you start the process of importing the clean animations into your actual project.
At this step, make sure the animations sit on your C drive and that you can edit them and re-import them with ease.
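To make that re-import painless it’s worth scripting it with the editor’s Python API - a hedged sketch below; the paths, skeleton asset, and folder layout are assumptions you’d swap for your own:

```python
import os
import unreal

# Hedged sketch: batch (re)import cleaned animation FBXs through the editor.
SOURCE_DIR = "C:/MoCap/Cleaned"          # where the Blender exports live
DEST_PATH = "/Game/Animations/Imported"  # Content Browser destination
SKELETON = unreal.load_asset("/Game/Characters/Hero/Hero_Skeleton")

def make_task(fbx_path):
    options = unreal.FbxImportUI()
    options.set_editor_property("import_mesh", False)
    options.set_editor_property("import_animations", True)
    options.set_editor_property("skeleton", SKELETON)
    options.set_editor_property("mesh_type_to_import",
                                unreal.FBXImportType.FBXIT_ANIMATION)

    task = unreal.AssetImportTask()
    task.filename = fbx_path
    task.destination_path = DEST_PATH
    task.automated = True         # skip the import dialog
    task.replace_existing = True  # re-imports just overwrite the old asset
    task.save = True
    task.options = options
    return task

tasks = [make_task(os.path.join(SOURCE_DIR, f))
         for f in os.listdir(SOURCE_DIR) if f.lower().endswith(".fbx")]
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks(tasks)
```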
With all of that, you start assembling the character and putting together whatever scene you need. This part is really just whatever you need to do with it in engine…
Cloth etc. is much the same as the above.
You import it, set it up as a separate character if you want precision/perfection - or set it up as modular swaps.
I do the modular swaps with a CSV sheet that drives what goes where.
Example > Tunic - no chest, no shoulders, no torso, no pelvis, no thighs, etc.
(So the CSV is a list of all the parts that make up the body, marking which get removed or included on each given asset.)
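The CSV side of it is trivial - a hedged sketch of one way to read it into per-asset removal sets (the column names and part values are made up for illustration):

```python
import csv

# Hedged sketch: each CSV row is an asset ("Tunic") plus a flag per body part;
# build the set of body meshes to hide when that asset is equipped.
BODY_PARTS = ["chest", "shoulders", "torso", "pelvis", "thighs", "calves"]

def load_part_rules(csv_path):
    rules = {}
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            rules[row["asset"]] = {part for part in BODY_PARTS
                                   if row.get(part, "").strip() == "remove"}
    return rules

# e.g. rules["Tunic"] -> {"chest", "shoulders", "torso", "pelvis", "thighs"}
```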
There are a thousand ways to do this more or less right… and this barely scratches the surface. Still, it should give you some direction.