I’ve been looking around at pose detection systems and libraries, and I wonder if there is a tutorial or some open source code to create animations (for UE5) based on video files or real-life camera footage.
My reference is this library:
which, from what I’ve seen, is used to take a video file and extract the poses and “bone” positions, but I don’t see how that can become a pipeline that ends in an animation for UE5. Any help on this would be appreciated.
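For the extraction half of that pipeline, here is a rough sketch, assuming the library in question is MediaPipe Pose (mentioned further down in the thread): it reads a video with OpenCV, runs pose estimation per frame, and dumps the 33 landmarks per frame to JSON. File paths are placeholders, and you would need `pip install mediapipe opencv-python`.

```python
# Sketch: extract MediaPipe Pose landmarks from a video and save them to JSON,
# one list of 33 landmarks per frame. Paths are placeholders.
import json
import cv2
import mediapipe as mp

VIDEO_PATH = "input.mp4"        # placeholder
OUTPUT_PATH = "landmarks.json"  # placeholder

mp_pose = mp.solutions.pose
frames = []

cap = cv2.VideoCapture(VIDEO_PATH)
with mp_pose.Pose(static_image_mode=False) as pose:
    while True:
        ok, frame_bgr = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB images
        results = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
        if results.pose_world_landmarks:
            # world landmarks are metric coordinates, roughly centred on the hips
            frames.append([
                {"x": lm.x, "y": lm.y, "z": lm.z, "visibility": lm.visibility}
                for lm in results.pose_world_landmarks.landmark
            ])
        else:
            frames.append(None)  # no person detected in this frame
cap.release()

with open(OUTPUT_PATH, "w") as f:
    json.dump(frames, f)
print(f"Saved {len(frames)} frames of landmarks to {OUTPUT_PATH}")
```

Note that this only gives you joint positions per frame; turning those into bone rotations on a specific skeleton (the retargeting step) is the part the Manny-mapping video below deals with.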
I want a free/open-source solution, so that’s not really an option for me. One of those links does shed some light on the issue, because he uses the MediaPipe information as well and maps it onto the Manny skeleton.
I can imagine processing a video using Python and extracting all the landmark positions, but what I can’t figure out at the moment is how to generate the animation (a UE5 asset) from that data, either by having the file with the dataset inside the project or by saving an asset file outside the Editor.
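One possible route (an assumption on my part, not the only way): bake the retargeted motion onto a skeleton outside UE, e.g. in Blender, export it as FBX, and then automate the import with UE5’s built-in Editor Python API (Python Editor Script Plugin). A minimal sketch of that last step, run from the Editor’s Python console; the FBX path, destination folder and skeleton asset path are all placeholders:

```python
# Sketch: import an FBX that already contains the baked mocap animation
# as an AnimSequence asset, without the interactive import dialog.
import unreal

FBX_FILE = "C:/mocap/clip_001.fbx"  # placeholder
DEST_PATH = "/Game/Mocap"           # placeholder
SKELETON = "/Game/Characters/Mannequins/Meshes/SK_Mannequin_Skeleton"  # placeholder

options = unreal.FbxImportUI()
options.import_mesh = False
options.import_animations = True
options.mesh_type_to_import = unreal.FBXImportType.FBXIT_ANIMATION
options.skeleton = unreal.load_asset(SKELETON)

task = unreal.AssetImportTask()
task.filename = FBX_FILE
task.destination_path = DEST_PATH
task.automated = True  # suppress the import dialog
task.save = True
task.options = options

unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
print("Imported:", task.imported_object_paths)
```

The hard part is still converting the raw landmark positions into bone rotations for the Manny rig before that export; the Editor Python step above only covers getting the finished clip into the project as an asset.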
Those keywords are too broad. Trackers are for tracking visible parts; “pose estimation” is what you need for the parts you can’t see. That requires an AI model, and it can estimate where all the bones in a skeleton, of for example a cat, are even if you only see one side. Plenty of research has been done; DeepLabCut looks promising, but there seems to be no Blender or UE integration yet.
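For reference, a rough sketch of DeepLabCut’s documented top-level workflow (function names may differ slightly between versions, and the project name and video path are placeholders). The output is a CSV of per-frame 2D keypoint coordinates per body part, which you would still have to lift and retarget to a 3D skeleton yourself, precisely because there is no Blender or UE integration yet.

```python
# Sketch of the standard DeepLabCut project workflow: create a project,
# label some frames, train a network, then analyze new videos.
import deeplabcut

videos = ["C:/captures/cat_walk.mp4"]  # placeholder

config = deeplabcut.create_new_project("cat-pose", "me", videos, copy_videos=True)
deeplabcut.extract_frames(config)           # pick frames to annotate
deeplabcut.label_frames(config)             # opens the labelling GUI
deeplabcut.create_training_dataset(config)
deeplabcut.train_network(config)
deeplabcut.analyze_videos(config, videos, save_as_csv=True)
```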