MOTION INPAINTING: is this the next big thing in Animation?

Just starting to get a grip on Motion Matching thanks to the help of the Game Animation Sample.

But, is Motion Inpainting going to be the next big thing in animation?


It's AI bullsh*t, it's got nothing to do with motion capture - OR motion matching.

0 animation skills - 100% artifacting.

And sure, it's cool that AI can do any of it at all, but it's not going to replace what AAA standards for animations are - and it never should.
It’s a temporary fad that can get somewhat decent results.

It’s never going to (and probably isn’t meant to - as in the people writing the papers would likely slap you, and the maker of the video, in the face for even suggesting as much) replace actual Mocap and Artists.

I must fully disagree. Currently AI is actually used to filter out artifacts generated by techniques such as motion capture (by suit, camera, etc.) to stabilize and perfect results. This can go beyond just smoothing out graphs with a "dumb" filter (which destroys detail and precision).

https://youtu.be/7aJQlOe1QTA?t=374
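To make the "dumb filter" point concrete, here's a tiny sketch (pure Python, numbers invented by me, nothing from the video): a plain moving average removes noise, but it also crushes a genuine fast hit in a mocap curve, which is exactly the loss of detail I mean:

```python
# A "dumb" smoothing filter applied to a mocap-style curve.
# All values are invented for illustration.

def moving_average(values, radius=1):
    """Average each sample with its neighbours within `radius`."""
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - radius), min(len(values), i + radius + 1)
        window = values[lo:hi]
        out.append(sum(window) / len(window))
    return out

# A joint angle holding at 10 degrees, with one genuine fast hit to 40
curve = [10.0, 10.0, 10.0, 40.0, 10.0, 10.0, 10.0]
smoothed = moving_average(curve)
print(smoothed[3])  # 20.0 -- the real 40-degree hit is flattened to 20
```

A learned filter can be trained to tell sensor jitter apart from an intentional fast motion, which a fixed window like this never can.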

Cleaning up both animation from mocap and vertices on a mesh model (actors are often scanned!) takes a ton of time if you do it by hand.

https://youtu.be/_QZN2IC0vOo?t=48

AI is also currently in use by projects like DeepLabCut, which allows for markerless motion capture from plain camera footage. This means mocap without a suit and without markers. It has replaced suits, and it has practically already replaced actors. Better said, it has no need for actors because it works from footage of the actual thing. Markerless mocap is a HUGE deal in rapidly collecting data from all species, which animators and machines can learn from, even species too small or too big for past mocap (from ants to large machinery). It's more data than a human could begin to process, and it doesn't just record animation, it also records behavior. Behavior reflects into animation in daily life, and it should be part of planning out how animations are allowed to combine and transition, from tail to teeth.

GitHub - DeepLabCut/DeepLabCut: Official implementation of DeepLabCut: Markerless pose estimation of user-defined features with deep learning for all animals incl. humans
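Trackers like DeepLabCut output per-frame 2D keypoints (x, y, confidence) for body parts you define. Turning those into skeletal data is then plain geometry; a minimal sketch (the keypoint names and numbers are mine, this is not the DeepLabCut API):

```python
import math

# Sketch: turning two tracked 2D keypoints into a bone rotation for a
# skeletal mesh. Keypoint names and values below are invented.

def bone_angle_deg(parent_xy, child_xy):
    """Angle of the bone from parent joint to child joint, in degrees."""
    dx = child_xy[0] - parent_xy[0]
    dy = child_xy[1] - parent_xy[1]
    return math.degrees(math.atan2(dy, dx))

# One fake frame of tracked keypoints: name -> (x, y, confidence)
frame = {
    "shoulder": (100.0, 200.0, 0.98),
    "elbow":    (140.0, 240.0, 0.95),
    "wrist":    (180.0, 240.0, 0.61),
}

MIN_CONF = 0.8  # drop low-confidence detections instead of trusting them

def joint(name):
    x, y, conf = frame[name]
    return (x, y) if conf >= MIN_CONF else None

upper_arm = bone_angle_deg(joint("shoulder"), joint("elbow"))
print(round(upper_arm, 1))  # 45.0: the elbow sits down-right of the shoulder
```

The confidence gate matters: markerless trackers happily hallucinate occluded joints, so you filter or interpolate those frames instead of feeding them straight into the rig.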

The current generation of human animators seem to avoid that topic entirely, "humanizing" their characters in behavior, and with that, in animation. It doesn't matter if their style is cartoony or hyperrealistic, they tend to screw it up. Their ferret model animates like a dog and sounds like a rabbit because they just didn't know how to do it, and viewers are disappointed. Only the pros do it right (sometimes, rarely), and just look at how much manual work that takes for a human (look at the muscle sim part too!):

https://www.youtube.com/watch?v=tm-NXVDZZTU

A while ago I wrote about how I see the future of DeepLabCut or similar implementations allowing for easier animal mocap. Currently that project requires multiple cameras filming an animal from multiple angles at once, then the system estimates bone positions in a pre-made skeletal mesh. How I envision the future is that AI estimates the skeleton pose entirely (including obscured bones) from a single-camera video. That would make it possible to turn any video found on YouTube into a procedural skeleton, pose estimation and mocap, opening up a new generation of animal animation possibilities. The AI pieces needed to complete this are already out there (on GitHub), but somehow not yet combined into that vision. Integration of DeepLabCut with UE or Blender3D is still non-existent as well; it's mostly used for research (reading mocap data to study animals, disease behavior, etc.).

From an artist's point of view, someone who still wants to create things by hand, this data could provide quick access to information such as bone transform constraints for a skeletal mesh, or quick animation of "non-artistic" things like the gear movements in clockwork. Or find accurate wing movements of tiny insects (complex! look at a dragonfly). Collecting and studying that much material yourself is just too much, so let software do it for you.

https://youtu.be/oxrLYv0QXa4?t=280
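As a sketch of the "bone transform constraints" idea: once you have thousands of tracked frames, a joint's rotation limits are just the observed range plus a margin. Pure Python, with invented numbers:

```python
# Deriving per-joint rotation constraints from a pile of mocap-style
# angle samples. With enough tracked footage you never hand-author
# "elbow bends 0..150 degrees" again; you measure it.
# All numbers here are invented for illustration.

def rotation_limits(samples, margin_deg=5.0):
    """Observed min/max of a joint angle, padded by a safety margin."""
    lo, hi = min(samples), max(samples)
    return (lo - margin_deg, hi + margin_deg)

# Fake elbow flexion angles (degrees) pulled from many video frames
elbow_samples = [12.0, 35.5, 88.2, 141.0, 9.7, 120.3]

lo, hi = rotation_limits(elbow_samples)
print(f"elbow constraint: {lo:.1f}..{hi:.1f} deg")
```

The same idea scales up: per-species, per-joint limit tables extracted automatically from footage, which an animator then only reviews instead of authoring.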

Suddenly having access to a sh"t ton of mocap material from video naturally leads to automated generation of more complex systems, like procedural walk cycles. Machine learning has already been used for many years in robotics to teach robots how to walk in a virtual environment (teach 2000 instances of a robot to walk across a bridge, discard the data of the failures, upload the rest to a real robot). Sometimes self-taught robot movements alone led to funny situations where the machines learned to walk or run in silly ways. Most likely we'll see improvements in both the virtual world (games) and robotics exactly because we have AI.
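That "spawn many, discard the failures" loop can be sketched in a few lines. Here it searches a single made-up parameter (step length) against an invented fitness function; real systems optimize thousands of parameters, but the shape of the loop is the same:

```python
import random

# Toy version of "teach 2000 instances, discard the failures": a 1-D
# evolutionary search for a step length that maximizes an invented
# "distance walked before falling" score.

random.seed(42)  # deterministic run

def fitness(step_len):
    # Made-up physics: ideal step is 0.6 m; too long or too short falls over
    return max(0.0, 10.0 - 40.0 * (step_len - 0.6) ** 2)

population = [random.uniform(0.1, 1.2) for _ in range(2000)]
for _ in range(20):  # generations
    population.sort(key=fitness, reverse=True)
    survivors = population[:200]                  # discard the failures
    population = [s + random.gauss(0.0, 0.02)     # mutate the survivors
                  for s in survivors for _ in range(10)]

best = max(population, key=fitness)
print(round(best, 2))  # lands close to the 0.6 optimum
```

Swap the fitness function for a physics simulation and the parameter for a neural network's weights and you have, in spirit, the robot-walking setups mentioned above.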

There are many ways I see AI improving or fully generating animations. Inverse kinematics will be an interesting field to combine with AI. Currently an animator / programmer pretty much has to hardcode IK to alter animations so they work in games (moving hands to rifle grips procedurally, etc.). AI could be used to inject IK procedurally in such situations without programmers having to hardcode a ton of things into animation code (like a hand picking up an apple in 10 different ways, which sucks to do).

Imagine having to do this IK properly on fingers… for a “next gen” game. Nope nope nope:

https://youtu.be/x8I2Xi8bRL8?t=1
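For a sense of what has to be hand-wired today, here's the classic analytic two-bone IK (shoulder + elbow reaching a target) via the law of cosines. A pure-math sketch, not any engine's API, and it ignores joint limits and the mirrored second elbow solution:

```python
import math

# Analytic two-bone IK: given an upper arm of length len1 and a
# forearm of length len2, find the joint angles that put the wrist
# on the target. Names and conventions are my own.

def two_bone_ik(target_x, target_y, len1, len2):
    """Return (shoulder_angle, elbow_angle) in radians, or None if
    the target is out of reach."""
    d = math.hypot(target_x, target_y)
    if d > len1 + len2 or d < abs(len1 - len2):
        return None  # unreachable
    # Law of cosines for the elbow's interior angle
    cos_elbow = (len1**2 + len2**2 - d**2) / (2 * len1 * len2)
    elbow = math.pi - math.acos(cos_elbow)
    # Shoulder: aim at the target, then correct for the bent elbow
    cos_corr = (len1**2 + d**2 - len2**2) / (2 * len1 * d)
    shoulder = math.atan2(target_y, target_x) - math.acos(cos_corr)
    return shoulder, elbow

angles = two_bone_ik(1.5, 0.0, 1.0, 1.0)
print(tuple(round(a, 3) for a in angles))  # (-0.723, 1.445)
```

And this is one chain with two bones. A hand has five chains of three or four bones each, plus contact constraints against whatever it's gripping, which is why nobody wants to do the video above by hand.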

Adding to that, animation is a really hard field for humans. The old way, without mocap, is moving bone by bone. I’ve done entire cutscenes like that when I worked on older game projects and it’s super time consuming. If not done properly, it will look much worse than what we already see generated by AI. If done properly by a human, it can (currently) be much more accurate. But then we’re talking about the animation gods who do muscle simulations in Houdini and such.

When we get to the point of using muscle simulation for realism, to add detail beyond what a human can add, why wouldn't we use AI? You don't have to generate the entire character animation with AI, but you can add to it, just like physics simulations and IK can be used per bone. It would be great to use AI for several smaller systems of the whole (what they're specifically trained for), like procedural movement of parts of a human face during conversation. If I'm not mistaken, there's already a project out there automating lip sync and face animation to the sound of the audio played when a character talks. There are also plenty of public AI projects on GitHub which scan sound and written text for human emotion. Works just fine. I used AI to summarize texts and scan for those things years before ChatGPT or common AI use at home was a thing.
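A crude sketch of the audio-driven part: slice the audio into windows, take the RMS loudness of each, and map it to a 0..1 "mouth open" blendshape weight. Real lip-sync tools work with phonemes and are far smarter; the signal here is a synthetic sine burst, not speech:

```python
import math

# Minimal amplitude-driven lip sync: one mouth-openness value per
# window of audio samples. Everything below is invented for illustration.

def mouth_open_curve(samples, window=100):
    """One 0..1 mouth-openness value per window of audio samples."""
    peak = max(abs(s) for s in samples) or 1.0
    curve = []
    for i in range(0, len(samples), window):
        chunk = samples[i:i + window]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        curve.append(min(1.0, rms / (peak * 0.7071)))  # normalize vs peak RMS
    return curve

# Fake audio: silence, then a loud 440 Hz burst, then silence
sr = 8000
signal = [0.0] * 400
signal += [math.sin(2 * math.pi * 440 * t / sr) for t in range(800)]
signal += [0.0] * 400

curve = mouth_open_curve(signal)
print([round(v, 2) for v in curve[:4]])  # mouth stays shut during silence
```

The output is exactly the kind of per-frame float you'd feed into a morph target or blendshape weight each tick.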

Only when the context becomes really complex, like the full acting performance in a movie or something, do we need a broader intelligence than we currently find anywhere. That will come eventually.

People are already calling themselves the artist… the creator… etc. when just rambling at an image-generating AI. And of course, they're hated for that. There's not a single sign AI has reached its limits. I say, if we can do it, a machine can do it. We're just machines of flesh.

At some point people should be able to just sketch a game level in Unreal Engine as if drawing with a pencil and let AI fill in the entire world, iterating on a “pencil sketch”…

By then artists will realize that the ways they create content manually (Blender3D, GIMP, Houdini, add them all) are just interfaces between human and PC that are terribly slow to learn and operate on the way to the final exported content, and that can be skipped entirely. While a computer program could operate keyboards and artist software like we do, it doesn't need to.

AI can still use the function libraries of existing software to do the math accurately or run the algorithms we made, but it doesn't need the time or the interfacing hardware/software that we need. Existing software libs will be used mostly until more advanced AI arrives.

To give you an example why: ask any current image-generating AI to do a Delaunay triangulation and it will fail, because it doesn't use the actual algorithm. All it provides is an estimation, not an accurate result. Same goes for ChatGPT: ask it to do math, deduplicate or sort, and it will fail. An AI trained to use our software libraries will do it properly.
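Delaunay is a good example because it's built on an exact predicate, not an estimation. The empty-circumcircle test is one 3x3 determinant, here in pure Python (my own helper, not from any library):

```python
# The exact in-circle predicate at the heart of Delaunay triangulation:
# a point lies inside the circumcircle of a counter-clockwise triangle
# iff this determinant is positive. No pixels, no guessing.

def in_circumcircle(a, b, c, p):
    """True if point p lies strictly inside the circumcircle of
    counter-clockwise triangle (a, b, c)."""
    ax, ay = a[0] - p[0], a[1] - p[1]
    bx, by = b[0] - p[0], b[1] - p[1]
    cx, cy = c[0] - p[0], c[1] - p[1]
    det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
           - (bx * bx + by * by) * (ax * cy - cx * ay)
           + (cx * cx + cy * cy) * (ax * by - bx * ay))
    return det > 0

# Triangle on the unit circle; the centre is inside, (2, 0) is outside
tri = ((1.0, 0.0), (0.0, 1.0), (-1.0, 0.0))  # counter-clockwise
print(in_circumcircle(*tri, (0.0, 0.0)))  # True
print(in_circumcircle(*tri, (2.0, 0.0)))  # False
```

An AI that calls a routine like this (or the library that wraps it) gets the correct triangulation every time; one that "draws" a triangulation from training data does not.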

What's the point of millions of game devs individually learning Blender3D in depth for years (calculate the time required), if all you need is one robot with an understanding of Python and (still advanced) context, running Blender's Python libs directly?

Right now we perceive the things we spent a long time learning and working on as high value (in knowledge, product, etc.), while it's just an inefficient road that wastes an individual's time.

https://huggingface.co/

Another thing I'd love to see AI do is (legally) hacking content in old games to update them to modern standards. A lot like what we see in Skyrim mods, like people talking to NPCs using speech-type AI, or giving NPCs commands to do X or Y. There are tools humans use to inject their own code into apps they don't have the source code for; that's how, for example, the VR mods are injected (search Praydog, Cybensis). There's also memory hacking and other ways to inject functionality and content into games, like the texture replacer for Borderlands. Plenty of stuff.

Now think of it like this: what if we use AI to inject such content at runtime, fully automated? It's already done in various projects where a player can talk to NPCs. But think a bit further: let AI hack content at runtime to provide subtitles on the HUD in games without subtitles. Let it inject strategy into old RTS games where until now the NPC "AI" was dumb as a rock. Or, more advanced, let it inject new game events into Fallout: New Vegas. And to stay on topic: let it fill the gap in animation between (time-consuming, limited) hand-made work and (processor-heavy) physics simulation, for example in bodies of water, smoke, fire, particles, flock behavior, soft bodies, etc.

https://youtu.be/4V4usrKVqz8?t=1885

GitHub - cybensis/TormentedSoulsVR: VR mod for Tormented Souls

Next I wonder about the availability of hardware for future artists. I can't currently find the name of it, but I thought NVIDIA was working on a GPU "alternative", a chip specialized just for AI, whatever that means for the software that will be compatible with it. If we are heading that way, perhaps the need for, or availability of, current tech will drop at a faster rate. Software that we are currently running, including the data formats artists create, will at some point no longer work on new hardware. Just like we can't expect just any game from the 90s to be compatible with today's GPU driver or chip.

Next up, I wonder when governments will mark advanced AI running "at home" on advanced hardware as a threat. I'm moving ahead to 2035 with this, but my theory is that at some point such hardware / software will be monitored on the hardware level (by AI), or even made illegal. All the popular OSes out there already scan content, speech, biometrics, hardware sensors, voice, keyboard, etc., but mostly through the internet connection, because local scanning requires hardware specs and because they want the data. At some point advanced hardware can be as effective as a little army, operated by just about anyone. Security threats. A lot of the freedom this tech offers will be taken away to fit their rules, if only because the system, the people in it, and the law can't keep up with tech developments. If not ruled / made by tech in the first place, how could they?

If they track location, why not track everything at the cost of a GB or two of memory usage? Truth is, you wouldn't be able to analyze what data is tracked and how, just as with current hardware (read up on Intel ME, AMD PSP, etc.). "At home" lab hardware modifications have not been able to get around that.

Nvidia GPU tracking tech proposed by US lawmakers in smuggling crackdown | Tom's Hardware

What does that mean for artists, if they have access neither to "old" hardware nor to the latest tech? I don't expect the market to keep offering "old stuff" to them, but who knows? Will we still have artists? One option, of course, is government-controlled hardware which people use "in the cloud": running games or developer software on a server far away, from a "potato" PC at home with a good internet connection. This whole cloud idea already exists, but people have had bad experiences in some cases. For example, existing remote-control software (like TeamViewer) can come with delays in input, delays in the video feed, security challenges, unsupported software (black screen), or input keys not working (like when holding one down). Similar software used for gaming hasn't really been received well so far. It's got a long way to go.

Then the question is, for how long will we still have artists? The gap between clicking a button to create X and going through 10 years of human learning to create X is (I have heard) already causing the new generation to avoid schools that teach game design and similar creative fields. Many people I know are the creative types (3D modelling, photography, game dev) who like what they do (and probably would continue doing so), but at the same time increasingly doubt whether their field of work is still worth it / useful / the right thing to do.

If the output of those artists is fed into AI continuously for years and years, at some point it might feel like AI generates an endless stream of "new" from it. There would just be so much to pick and mix from that you'd hardly get any similar images. Currently, when you ask AI to create something specific like a logo of a fox on a hurricane, and a logo of a cat on a hurricane, there's a big chance both results look like the Firefox icon or a Ubisoft logo. That's just down to a lack of content and context complexity. That's also how people recognize their own paintings and styles AI was trained on, and what makes them angry.

You are already being spoon-fed content offered by AI (search results, recommendations, ads, you name it), and that has been the case for who knows, 10 years? 20 years? AI research goes back to the 60s or earlier, including research on animation (I had some papers from that era, I think about leopard movement).

I’m rambling on and on about something moving away from the animation topic a little, but there’s a lot changing and a lot of fun developments right now.

People don't avoid schools (or Unreal) because they (the schools) work, quite the opposite. They rightfully avoid schools because all you get out of it is communist indoctrination. True for Europe. True for the US. I'm sure it's also true for Russia and China. Definitely true in Australia.

You'd be hard pressed to find a single institution that checks and works based on merit. Because how dare you have a different idea than what your school book tells you to have?
Scientific discovery is therefore dead when it comes to educational institutions.

The real improvements are left to folk like us - the ones actually working.

That is absolutely no excuse for not knowing what you are trying to animate or create.

Unless you think that somehow feeding 100 images into a computer software you didn't even code and 3D printing a new version of the David is in any way, shape or form equivalent to spending half your life doing actual work and research to then sculpt the actual one-of-a-kind thing.
If that is the case, I don't think you have a right to any opinion at all. Regardless of who agrees with you. You should be stripped of any and all freedom you may have and be forced into manual labor, so as to learn to appreciate the knowledge which has made it possible to lessen said manual labor load over time.

So no.
AI in this setting is not beneficial.
For research purposes - assuming the researching is done by someone with enough of a critical thinking brain - maybe it has a place.

You complaining about not being able to go from a 3D scan to an actual model manually is a you problem.
The customer not wanting to pay you for said work is also a you problem - one which many and I dare say all artists have faced before.
Using AI as a solution to cheapen the expense will only directly lead to cheapening the end product - whatever that may be.

You simply cannot replace knowledge and its acquisition with AI.
I mean, Sam Altman wants you to try so he can take all of your money, but you'll probably get to see just how quickly that'll be shut down in your own lifetime.

That said, I'm not 100% sure why you waste time writing fantasies on this forum. Particularly when you could be using said time to do something useful, like studying anatomy :upside_down_face:

From a research point of view, AI is already speeding up developments in math, physics, medicine and whatnot (I could link some cool stuff if you're interested), acquiring new knowledge all the time (detection of Parkinson's, cancer detection, optimization of existing algorithms, developing new medicine). Doctors don't know sh"t about their computers or programming, but need their software all the time to be efficient.

The speed of discoveries by these AIs is comparable to how great mathematicians would spend their entire lives solving one problem that takes today's computer a few milliseconds. I'm not saying we should quit learning skills ourselves, but I am saying that spending tremendous amounts of time on a task just because you can does not make the result more valuable.

About schools: here I'd only recommend some universities. Otherwise students learn basic (often incorrect!) stuff, 75% of which they won't use for the rest of their lives. Incorrect "facts" in physics and chemistry. Tons of religion (you can barely avoid it). 75%+ of history is war-related and missing details. Most people learn multiple languages, at least 2 of which they will never use. Math? Teachers are so bad you'll just learn from YouTube instead :slight_smile: . Programming? IT? Security? Psychology? Basic medicine? Basic survival skills? Nobody learns (or teaches) them early on; they are considered specializations for people usually already in their 20s (ridiculous). Libraries (schools too) don't have the books (like 3 educative books among 10,000 nonsense-filled ones). Bloody shame how education goes downhill. I could talk negatively about it all day. Offtopic though. Most of those students will do nothing except push a button or two to get to results, you know that. Hell if I know how they stay alive.

Anyway, what's to be done about that? I actually have some hopes for AI being used for personalized teaching (instead of 1 teacher per 30+ people), as long as it doesn't teach the average crap. A lot of "specialists" I meet clearly don't dig through research papers themselves, or don't get through the basics on their own. Probably because they don't know how to find their way to / through the info, or how to ask the questions.

On the other hand, I see specialists who have worked half a lifetime in the medical field, with their own specializations and a good education, fail to do their jobs. Like specializing in one organ but failing to diagnose disease because they can't see the other connections in the body (the brain, the immune system, psychological effects and so on). One doc sending you to the next. Can't blame them for not holding a ridiculous amount of info in their heads. This is another case where AI (already) provides and enhances information to prevent disaster. Lots of people die of wrong diagnoses, wrong meds, med ODs, communication errors, or just waiting in waiting rooms, and all of that happens inside hospitals. An AI assistant trained on a f"ckton of medical info is definitely going to save lives there. Docs who know everything and make no mistakes don't exist. And getting the correct combination of 5 docs in a room at once is a fairytale.

Well, we're both here in a discussion of how to animate a fake virtual world. Nothing but fantasy lol :upside_down_face: . There's going to be 1 in 10 million who gives a crap for 5 minutes about how much time we spent on a 3D model; the rest just want results that give them a quick feel-good moment, or make them money.