Hmm, not sure if you’re being sarcastic, but here’s my take as a programmer with some background in machine learning: full replacement isn’t coming soon, despite all the hype and recent improvements. Nobody can predict the exact timeline, but there’s still a lot standing in the way.
The first issue is that it’s far from profitable right now: companies are losing massive amounts of money on infrastructure, and the only thing keeping them afloat is investor funding. Their primary concern in the coming year won’t be making the models smarter, but reducing costs. They’ll make announcements, sure, but honestly, those will be incremental improvements, not big leaps like we saw with the release of ChatGPT and the launch of o1 last year.
Coding agents still have a long way to go before becoming fully autonomous. They often add things you didn’t ask for, make basic mistakes or hallucinate things you have to catch, and require re-prompting to fix them. (Though honestly, writing a prompt to fix that is just lazy if you already know how to code.)
That said, with clear and precise instructions (like asking for specific classes with defined features, something along the lines of the sketch below) they can produce results that are surprisingly solid, sometimes needing very few fixes, if any.
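Just to illustrate what I mean by a "specific class with defined features": the kind of prompt that works well is something like "write a `RateLimiter` class with `max_requests`, `window_seconds`, and an `allow(key)` method using a sliding window." The class name, parameters, and behavior here are all made up for the example, not from any real project, but the point is that when the spec is this concrete, the output tends to land close to this:

```python
import time
from collections import defaultdict, deque


class RateLimiter:
    """Sliding-window rate limiter: at most max_requests per window_seconds, per key."""

    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self._hits: dict[str, deque[float]] = defaultdict(deque)

    def allow(self, key: str) -> bool:
        now = time.monotonic()
        hits = self._hits[key]
        # Drop timestamps that have fallen outside the window.
        while hits and now - hits[0] > self.window_seconds:
            hits.popleft()
        if len(hits) < self.max_requests:
            hits.append(now)
            return True
        return False


limiter = RateLimiter(max_requests=3, window_seconds=1.0)
print([limiter.allow("user-42") for _ in range(5)])  # [True, True, True, False, False]
```

When the instruction is that tight, there’s very little room for the agent to improvise, which is exactly why the results hold up.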
As for 3D modeling and illustration, that’s a different story. I think eventually it will be able to deliver proper topology and texturing most of the time, if not all the time. But using prompts as the sole method of control makes it really hard to be original or expressive. If language is the only lever, you’re bound to see the same art styles over and over again. And once that happens, I doubt it’s the kind of “art” people will want to keep engaging with; it’ll just start to feel repetitive. Look at how superhero movies have evolved over time.
That’s why creating AI tools that feel like an extension of the artist, or any human worker really, should be the real goal. That’s the future, not images generated by prompts alone.