I am looking for some advice from people who have already worked with the USkeletalMesh class in C++.
For a project I am working on, I have to generate skeletal meshes from source meshes at runtime. Animations will be handled by a custom AnimationInstance implementation.
Basically, I load a simple skeletal mesh that serves as a template and create a new skeletal mesh in C++ with N bones (N ≈ 200).
The template mesh is repeated N times, so a single USkeletalMesh is enough to render hundreds of animated simple meshes.
I managed to get the generated mesh to render (with some inspiration from the SkeletalMeshMerge class), but I am facing one small problem, which has to do with the reference pose of the mesh.
Each USkeletalMesh contains FSkelMeshChunk and FSkelMeshSection entries.
An FSkelMeshChunk entry contains two arrays:
TArray<FRigidVertex> RigidVertices; // influenced by a single bone
TArray<FSoftVertex> SoftVertices;   // influenced by multiple bones (blend weights)
My question: are the vertices stored in FSkelMeshChunk transformed into reference pose space?
Looking at SkeletalMeshMerge.cpp, they extract the vertex position from the GPU vertex buffer rather than from the FSkelMeshChunk:
const TGPUSkinVertexBase<bExtraBoneInfluencesT>* SrcBaseVert = SrcLODModel.VertexBufferGPUSkin.GetVertexPtr<bExtraBoneInfluencesT>(VertIdx);
DestVert.Position = SrcLODModel.VertexBufferGPUSkin.GetVertexPositionFast<bExtraBoneInfluencesT>(SrcBaseVert);
Do I have to transform the vertices back: ref pose (source mesh) -> bone space -> ref pose (new mesh)?
Thank you in advance for your help.