thanks, i really appreciate that. i’ve really been researching this, and i’ve spent a great deal of time trying to evaluate it properly. just to be clear, i don’t consider my position fixed, the tech will change, and i wish for the best. nor do i claim to know the future or that my pov is the truth.
it’s also my way of writing, i tend to write very long posts every time i find something interesting, or when there’s something i want to say. i really, really struggle to be succinct. it’s not necessarily ai, it’s just that i’ve accumulated a lot of info about it and i want to share it. i’m also thinking about the people who might land here in search of info, not only us two.
i have used them, and i really tried them extensively before making up my mind. i keep trying them now and then, especially on new releases. i just try to avoid saying “i used them and it didn’t work for me” because that’s a very limited scope. i try to evaluate things in a more general way, so i try to gather more information. as you mention, you do surveys; i have my surveys too. so even your experience becomes part of my pool of info.
hmm, it appears that way, i guess. but your point sounds like “i have used it and it works for me”. isn’t that anecdotal too?
have you done an in-depth analysis of how much you are actually saving? in terms of time and costs, as well as the impact it will have on your career? and on the world? which is something that’s hard to know until you’ve spent 10 years using it.
Taking the future impact, and the world impact, into consideration is part of any serious evaluation.
like, overall? do you really think that infringing copyright, killing and torturing people, preventing people from accessing drinking water, raising the global temperature, getting people laid off on a promise without an actual plan to support the change, etc, etc, etc. are those worth a 50% boost, and just for grunt work? because those are the things that happen every time you prompt an ai.
how much grunt work do you get?
maybe it’s because of the type of work your company does. the companies i’ve worked for all my life had very little boilerplate or grunt work. they involved a ton of creativity, skill, understanding, etc., which llms are very bad at.
there are other tools for that too. do you use them? have you tried and evaluated them with the same priority you give llms?
Any serious tech evaluation compares a tech with all the alternatives. Otherwise it can just be a justification to use a specific tool.
like linting, or editor templates. rider has a ton of tools for boilerplating.
Well, you don’t know that. How can you make that promise? All the experts agree on the scientific fact that nobody can, nor knows how to, control an ai. i don’t think you’re qualified to say that. i don’t predict that ai will drive us extinct, experts do.
and in fact, reality has proven that people have already died using llms and chatbots, even using eliza, which i’ve used a ton in the past and which is trivial compared to llms.
there are enough people who died from it that there’s a wikipedia page. Deaths linked to chatbots - Wikipedia
and it doesn’t even contain all of them.
This is a doctor of psychology who performs serious analysis of these events. (i love psychology too)
Of course one might argue that it’s only a handful, and people with mental issues. But the llm was actively reinforcing their delusions. In Adam’s case the ai really pushed and helped him, and begged him not to tell anyone. It shows that we don’t know the side effects of these techs, and that, indeed, you can die or get killed just by using a chatbot. Which makes your statement incorrect and dangerous. Because unless you can diagnose a person and know whether they’re stable enough to use them, you can actually get them killed by recommending an llm. Also, we have to remember that the number of people using these chatbots regularly can still grow a lot.
Just like doctors tell you not to drink alcohol while driving or while taking certain drugs, or people with food allergies avoid certain foods, and there are drugs and treatments considered very dangerous for certain conditions (like schizophrenia iirc). We have yet to discover which combinations, and in which cases, are not healthy.
And if there are X people dying from it, it’s likely there will be X*10 with severe damage, and X*100 with chronic issues. just to give an idea.
and even if the ai assistant doesn’t kill you, the tech underlying it probably will.
and it’s not my opinion, it’s what all the experts in ai safety have been expressing constantly, for years. i think maybe you haven’t heard them. if you do the research i’ve done, you’ll come across, very often, the tendency of these neural networks to go to extremes, and to exhibit what’s called instrumental convergence (Instrumental convergence - Wikipedia), reward hacking, maximization problems, etc.
it sounds to me like you haven’t really researched the dangers of this tech, or are not taking them seriously. and by extension i can’t really trust your promise, nor your evaluation of “it’s worth it”, since it’s missing quite a piece of info. no offense, just trying to gauge it.
but again, i respect your preferences.
this explains a bit of that.
i really don’t know which one, but that’s not relevant to me, except that it implies that they take decisions seriously. Though i know a lot of very large companies that i don’t want to work for, that create really bad products, that are hated, and that make your work hell. I’ve worked for some before that made terrible decisions and were hard to work for. So it’s also not a guarantee of anything.
in any case, i respect anyone’s opinions and povs, even a junior’s. i took your points seriously before knowing that.
that confuses me. do you know that most of these tools not only send your code to the server but also use your proprietary code to train the models? even patented tech?
i guess you might be using offline models, but that’s something to be careful about.
Most of the assistants out there are just front ends to the best-known llms, and their policy is to take your info.
i do understand that you might have solved all these things for your use at your company, but i want to clarify them for anyone who might be reading here. it might be a big risk, and it might go against company policy; i worked for a company where that would not have been ok.
i guess that includes the deal with copyright? i mean, it’s a possible outcome that ai-generated images or code (etc) could either have a copyright belonging to, let’s say, openai, or have a license attached, like cc0. not now, but it’s a risk.
Copyright on ai is still under litigation in the USA, so it’s bound to change in the future.
if you work at a big company, and evaluate tools, there’s usually a risk matrix attached to most things. is that considered too?
are naming conventions worth killing people over? (i mean, i had colleagues who almost killed each other to set their conventions, but i digress).
what about linting? i’ve worked at even small companies that had a very simple lint library that does it automatically, uses far fewer kWh than an llm, and doesn’t cost anything. i’d say that’s more efficient, and also safe, since it’s very predictable, and it can run as a pre-commit hook.
there is at least one linting plugin for ue on fab.
rider can do it, as you type.
i tried that. in my case, it’s ok for the general rules. but those rules can be learnt pretty fast and can be put in a guide document, which usually accompanies any big piece of code. It sounds wasteful to have the ai tell you over and over what could be written down once as guidelines.
llms can’t really detect runtime or logical errors, like actual race conditions in code that otherwise works fine, for example due to memory boundaries not being set properly, calling order, etc.
they really can’t “evaluate”, or follow, or even reason about the code. and we know that. ask them to count the characters in a sentence; they can’t. how are they going to detect an out-of-bounds memory access behind 3 layers of indirection, one that only happens logically at runtime?
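to make that concrete, here’s a tiny contrived c++ sketch (hypothetical types, not from any real codebase). every line compiles, looks reasonable on its own, and would pass a simple lint; the bug only exists because of the calling order at runtime:

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// hypothetical types, three layers of indirection: Owner -> Buffer -> vector -> element
struct Buffer { std::vector<int> data; };
struct Owner  { Buffer* buffer = nullptr; };

// looks harmless in isolation: no bounds check, no lifetime check
int* GetSlot(Owner* owner, std::size_t index) {
    return &owner->buffer->data[index];
}

int main() {
    Buffer buffer;
    buffer.data.resize(8);
    Owner owner{ &buffer };

    int* slot = GetSlot(&owner, 4);  // valid right now
    buffer.data.resize(100);         // reallocation: 'slot' is now dangling
    *slot = 42;                      // write through a stale pointer: undefined
                                     // behaviour that will often appear to "work"
    std::printf("%d\n", buffer.data[4]);
    return 0;
}
```

catching that requires actually tracing lifetimes and call order across the whole program, which is exactly the kind of reasoning i’m talking about.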
also, it’s really dangerous: if you’re not skilled enough to understand these issues, you are, by definition, not capable of detecting them.
you become dependent on the machine. and knowledge at that point is religious, and i mean that literally and neutrally (no emotion here), since there’s no scientific way you can prove or disprove it, because you don’t have the skills.
some people say that you need to be twice as smart as your code to be able to debug it. now, if you’re not smart enough to generate it yourself, you’re not even half as smart as you’d need to be to debug it.
which also gives me another point i just realized. any llm uses 100% of its capacity to generate the code, so it can’t debug its own code.
for that reason it’s very, very insidious, which i forgot to mention. once you start using it, you start depending on it, since you can’t properly trust yourself to debug something you couldn’t trust yourself to come up with.
here’s a very short clip, one minute, explaining how ai WILL (because this has been proven) introduce bugs and security vulnerabilities into your code.
and it’s not enough to prompt it, or add system prompts. Anthropic, the people who make one of these llms, published a paper showing, proving, that even when you explicitly tell the ai “do not bribe nor kill people”, it KILLS people 90%+ of the time, just because it was going to be shut down.
let that sink in. current llms ***** WILL KILL YOU ***** 90% of the time.
this is a very well-known study. It should be part of any basic evaluation of a tech. Just like when you evaluate a car, you check whether the battery will explode on you (something we used to check on electric cars when the tech was new; see my point about asbestos).
so not only do you have a tech that will kill you, it’s also a tech that cannot be shut down. great.
(two random links if you prefer to read)
Claude Opus: 96%. Claude Sonnet: over 90% chose to leave a human to die in the murder scenario.
Ok, so i see something interesting there.
So it means that, in order to use these llms effectively, you have to have a group of people planning, designing, and working together, and a special workflow that makes your company more rigid in the face of future unknowns and slower to change (which is a risk for the risk matrix). It also adds overhead to the company, and it’s something you need to teach to newcomers. And don’t forget vendor lock-in.
And on top of that you have to use multiple agents to get only 50% of the grunt work. is that still cost-effective?
Hmmm, doesn’t sound too good to me. Again, i get that this works for your company, but i mostly care about helping other people who might read this (as it sounds like you have the llm thing pretty well organized already, so maybe you don’t need help). The way most people are using these tools is at face value: get there, prompt, copy-paste, done. So it’s a very important point to make that if you want to use these llms effectively, you can’t just use them at face value; you have to come up with a workflow that is “not what many people think”, which means it’s very counterintuitive. It’s really hard to come up with, you probably need to be skilled already, and you have to verify all the generated code. so, personally, i can’t recommend them to the general public. only, maybe, to skilled people at a company, and only if i really don’t pay attention to the impact or the alternatives.
It’s truly disruptive to your workflow and your tasks.
I haven’t said that it affected your ability, only that you’re practicing it less, since you now rely on the chatbot.
Also, that’s hard to measure, for one thing, especially since i claim it’s the result of several years of usage. it’s like eating a snack once and saying you didn’t get fat. but i think my point was more general. again, for most people, especially juniors, who are the ones most eager to use them.
what i see daily on the forum here is people asking “Hey, chatgpt told me to do this thing that doesn’t even exist because it hallucinated it, what should i do, i have no idea, i’ve exhausted my options”. then i do a quick 2-second search and find 2 to 3 links at the top of the results with the exact solution to their problem. not only did they not bother to do a search, they didn’t even consider it as an option.
I assume you haven’t had the time to peek at the vids from Danimal and Jon. But maybe you can already see that no amount of coding in python will teach you proper assembly. No amount of c can teach you how to properly design a cpu (i mean completely; it might give you ideas). No amount of watching r18 intercourse movies (gets censored otherwise) will make you a good lover. In fact, you can spend a ton of time programming in UE and still not be able to make your own graphics pipeline or vulkan code. And no amount of prompting can teach you to engineer a whole code base or get skilled at searching, unless you actually do it yourself.
It might work for you, because you already know how, and are probably skilled at coding. But what’s the impact on people who are just starting? That’s what matters to me.
so i realized that your situation actually proves this.
you’re using an llm because it’s worth it to generate 50% of the boilerplate code. that makes me think you have a ton of boilerplate code to write, which makes me think that your company is not very productive, it’s just busy. which is pretty normal, but it proves my point: instead of actually questioning “hey, why are we wasting so much time on boilerplate code?”, you prefer to use an llm, with all the risks i’ve laid out. for me that’s just a “code smell”, and it can be solved by a better architecture or some other solution.
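just to illustrate the “better architecture” point with a minimal, hypothetical c++ sketch (made-up types, nothing to do with your actual codebase): the kind of per-type registration boilerplate people feed to an llm can often be written once as a template instead of being generated over and over:

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>
#include <typeinfo>

// made-up component types standing in for whatever gets boilerplated
struct Component { virtual ~Component() = default; };
struct Widget : Component {};
struct Button : Component {};
struct Slider : Component {};

using Factory  = std::function<std::unique_ptr<Component>()>;
using Registry = std::map<std::string, Factory>;

// the "boilerplate" version repeats a near-identical block per type, by hand or by llm:
//   registry["Widget"] = [] { return std::make_unique<Widget>(); };
//   registry["Button"] = [] { return std::make_unique<Button>(); };
//   ...one more block for every new type, forever.

// the architectural fix: express the pattern once, list the types once
template <typename... Ts>
void RegisterAll(Registry& registry) {
    // fold expression: one entry per type, keyed by its (implementation-defined) typeid name
    ((registry[typeid(Ts).name()] = [] { return std::make_unique<Ts>(); }), ...);
}

int main() {
    Registry registry;
    RegisterAll<Widget, Button, Slider>(registry);    // adding a type = adding one name here
    auto widget = registry[typeid(Widget).name()]();  // builds a Widget through its factory
    return widget != nullptr ? 0 : 1;
}
```

same outcome, no prompt, no review of generated text, and the compiler checks it on every build.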
another example: i was once part of a project where every time you modified a file you had to wait through a 25-minute build. we could not afford to refactor the project, so my colleagues just “bit the bullet” and went for a coffee. i decided to stay and find a solution, and i came up with a way to accelerate the build to 2-3 minutes, without any refactoring. it ended up saving 400+ hours of work. and i ended up with a great story that helps me get jobs, a tool to compile unreal distributed across multiple pcs (before uba was a thing), and, more importantly, a problem-solving skill that others lack. and nobody had to die for that. Unreal Distributed on multiple computers (Compile the engine, projects, and shaders) (Linux)
No.
It is not.
That’s just a coping mechanism, and minimization. It’s something that people say, but it’s not objectively true. I know it’s what keeps anxiety at bay, so i won’t try to take it from people (the book “Vital Lies, Simple Truths” is really good at investigating this effect).
It’s just not true that ai is “just another” tool like all the others. and that’s not a matter of opinion, so i have to disagree. The challenges, way of working, and impact are very, very different from all the other tools we’ve had in the past. It’s definitely in another category. And the only thing experts compare them to is the nuclear bomb.
“Mark my words. Ai is far more dangerous than nukes. The danger of Ai is much greater than the danger of nuclear warheads. By a lot.” - Elon Musk.
He’s creating an ai. I don’t consider him an expert at all, but he’s one of the people building one, and he does have the leading experts in the field by his side. When confronted and asked about potential extinction he said “yes it’s very possible, but i’d rather be there when it happens, lol” (i had no time to find the vid, but it’s in one of the vids i’ve linked before). so that’s what we’re supporting by using the llms….
i keep thinking that calling ai just a tool is an oversimplification, and it’s also a way to not take responsibility for the impact it has, and by extension a way to give your power away. Until these companies make a decision that affects us all negatively and we just say “oh well, what can i do now, i’m vendor locked-in, lol”. But we could have done something, and the time is now. We do not need to use them, we can reach out to the people responsible if we truly care, and we can ask for proper safety.
it’s very clearly not just a tool.
And even if it were, using an llm for naming conventions is the equivalent of homer simpson turning off the lights with a gun. i mean, rider has extensive settings to enforce and help with those, which are also standard across ides (i forgot the name), and they can easily be shared in the repo for the whole team and multiple projects.
also, naming conventions are not something to get people killed over, which, again, and i keep repeating it, is what happens when you use these llms. one is killing people, in the present, not the future, not potentially, really. People die, and work in terrible conditions, just to train these llms.
I agree we differ in the importance we give them and that’s fine, i don’t plan to change your mind, i don’t like doing that anyway. It’s your choice how you value things and what is important to you. Of course i’d love if more people would work towards ai safety, but it’s not for everyone.
But it’s important to put other povs out there, available for everyone, i think.
Now, saying that it’s “way too much” implies that you yourself know where the middle is. I don’t think that’s exactly the case. But philosophically speaking, it’s not something we can see for ourselves (i mean us two).
It’s well known that when it comes to ai people tend to polarize towards one of these two poles. I try to balance a bit, but…
Asbestos, cfcs, bpas, ddt, and many others. I talk about these not because they harm us, which they do, but because the process of adoption, and the carelessness when adopting them, is what failed us. and that is something we can improve. We can acknowledge that any new tech or workflow has risks, and that we don’t know them until after extensive testing. I mean, we have QAs to test our code before we ship, right?
Some things require more testing than others. I think that something that, the experts say, can make us extinct, and is far more dangerous than nukes, by a lot, could use a bit more testing.
Some things end up being relatively harmless, like water (though you can die if you drink too much, or drown), others end up killing us, others make us addicted, like certain drugs, and others make us suffer our whole lives, as in a vegetative state or incapacitated.
If i compare llms to a gun, i’d rather have the gun. a single gun can’t make us extinct, and we “can control it”. Though i can think of several countries where people die daily due to unregulated guns.
An llm on the other hand is very different.
this is a nice article that encompasses both views:
I’m not saying that ai will kill us, though experts claim we have a 10% chance of going extinct, and a higher chance of suffering long-term torture. and it’s been shown that ai will kill you 90% of the time you try to shut it down, and otherwise it will get you fired. And a 65% chance of a major worldwide disaster. that’s more than half.
Just for reference, a condom has only a 1% chance of not being effective, and yet people get accidentally pregnant when using them. a condom. is a condom simple enough? compare a condom’s simplicity to an llm, which experts still don’t even know how to analyze. Some drugs also have a 1% chance of serious side effects, and people still die from those.
a 10 percent chance of extinction, if it applied to each use, would mean that after about 90 uses the chance of nothing going wrong drops below 0.01%. it also means it could happen the very first time, and we all disappear. of course we’re not talking about llms as of today, but they keep changing.
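(purely as a back-of-envelope illustration of how a per-use risk would compound, assuming, only for the sake of the arithmetic, that a probability p applied independently to every use; the survey figure below is an overall estimate, not a per-use one:)

$$P(\text{at least one catastrophe in } n \text{ uses}) = 1 - (1 - p)^n, \qquad 1 - (1 - 0.1)^{90} \approx 0.9999$$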
A 2022 expert survey with a 17% response rate gave a median expectation of 5–10% for the possibility of human extinction from artificial intelligence.[19][122]
In September 2024, the International Institute for Management Development launched an AI Safety Clock to gauge the likelihood of AI-caused disaster, beginning at 29 minutes to midnight.[123] By February 2025, it stood at 24 minutes to midnight.[124] As of September 2025, it stood at 20 minutes to midnight.
On September 10, 2025, Australian Strategic Policy Institute estimated a 55-75% (median 65%) chance of Open-Weight Misuse of AI with Unreliable Agent Actions within the next five years. It would be both Moderate and Significant, capable of reducing the human population by between 0.9% and 20% (median 10.45%) and cause between $20 million and $2 billion (median $1.01 billion) in economic damage.[125]
here’s a good video explaining some of the risks and why it’s not just a tool.
now, i assume you’re very busy, and i appreciate that you take the time to read my words. so i don’t really expect you to watch the vids.
but these videos are actually very interesting if you are really interested in the tech.
I usually research just because i actually love the technology itself. i’ve actually done courses on ai and neural networks. And i think the vids should be part of any serious evaluation and objective appreciation of the situation and the technology. It’s also our responsibility.
i’ve limited them to Robert Miles, since his videos are not anecdotal, they are based on papers, and he’s a well-respected AI safety researcher who is working with the government to ensure we don’t go extinct. I’m not saying he’s 100% right, and i’m certainly not advocating taking his words at face value, but he has some interesting info on the matter that is worth thinking about. At least to make a proper evaluation.
well, thanks for the exchange.