I read it all, and those descriptions of Claude are quite funny.
I haven’t used any AI to assist me in programming yet, but that description is like having a level 80 wizard in your party who can AoE you at any time, blasting your last 15 save files to bits.
I do think those AIs are getting smarter than the average person around me every day, yet they still can’t solve specific (small-ish) tasks with 100% accuracy, which is where the danger lives.
What I ask of the AI is “fix it”; what I get is “maybe”.
Either they are not allowed to, or they can’t be accurate because they don’t use specialized libraries to solve their tasks (math this, logic that). And they seem to be set up to respond in a positive, Mr. Right tone regardless.
Being a developer, I want an accurate, direct response, not a disaster wrapped in blankets to make me comfy.
And… if it added to my code, would that make it a contributor? How does that even work legally? I can’t stand legal complexity when I want to focus on code.
Meanwhile, I’ve seen how quickly AI can generate snippets of code. If (or rather, when) it can understand the complexity of combining code to get from IN to OUT accurately, then instructing an AI to generate code becomes a much faster interface than me typing on my keyboard after ten coffees. Just like writing with ink on paper was upgraded to sending emails to your contact list.
At this point I’d be surprised if a screenshot of “blueprint code” translated to anything meaningful. Unreal Engine writes a lot of bloat to blueprint UAssets, some of which is essential and some of which shouldn’t be there, yet gets written anyway (such as node positions). That data isn’t visible on a screenshot, which could lead to asset corruption. A further lack of context would result in corrupt nodes. For example, nodes whose direct context is not available in online documentation (LibraryXXX:DoThisMethod) could not be extracted from a screenshot of a node. The AI would have to guess, which is unacceptable.
However, blueprint nodes can be converted to text and back, making them somewhat more accessible to AI. For the various reasons I’ve given in many posts explaining the downsides of blueprints, I’d curse this approach.
Especially in these times, AI could be the direct interface between developer and end product.
Previously, blueprints were the alternative to code for people who had no experience in programming or could not write code for health reasons. To professional programmers, blueprints are a limitation. I despise using AI as an interface for an interface (aka blueprints); I want to get directly to accurate results.
It’s been a year since I was last here (see my previous post), and I think (from attempting prompts at ChatGPT) that at least that system has become more aware of historic (prompt) context. Long before ChatGPT existed, AIs were widely used for smaller tasks: summarizing texts, holding small chats, little jobs like that. Your chat-buddy AI going nuts, forgetting what you said 10 lines ago or rambling nonsense, happened a lot more often years ago, and even more in the years before that. This context complexity, the amount of conversation your AI realizes it has had with you, seems to grow with every year of development.

But when you write code, you can’t allow anything to go nuts at any time. Without 100% accuracy you get compile errors, undefined behavior, bugs, or (if you’re lucky) a program entirely different from what you asked for. In short: if it’s not 100% accurate, it’s also your closest enemy.
If you write code using AI, I suggest a highly modular, testable approach: a “plugin” style of parts that each stick to their own function so you can verify them yourself. Does it work correctly? Does it do exactly what I want? Don’t ask it to write a 20,000-line project.
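To illustrate what I mean by the “plugin” style: each AI-generated piece does one thing, and you verify it in isolation before wiring it into the larger project. This is just a minimal sketch in Python with made-up function names of my own (nothing engine-specific), not a prescription:

```python
# Each function is small, does one job, and can be checked on its own.
# If an AI wrote clamp_health(), I test it before trusting anything built on it.

def clamp_health(value: int, max_health: int = 100) -> int:
    """Keep a health value within [0, max_health]."""
    return max(0, min(value, max_health))

def apply_damage(health: int, damage: int) -> int:
    """Apply damage, reusing the already-verified clamp instead of re-implementing it."""
    return clamp_health(health - damage)

# Verify each part myself, line by line, before composing further.
assert clamp_health(150) == 100   # capped at max
assert clamp_health(-5) == 0      # never below zero
assert apply_damage(30, 50) == 0  # lethal damage bottoms out at zero
assert apply_damage(100, 40) == 60
```

The point is the shape, not the domain: small units with explicit contracts let you answer “does it do exactly what I want?” for each piece, instead of hoping a monolith of generated code is correct end to end.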
There’s a risk in letting anyone take care of your computer.