Hello, is there a plugin or tool (assistant) for Blueprints? Something to ask questions and get help with Blueprints that actually works? Thanks.
No. All LLMs will give you misinformation and wrong information, and/or hallucinate.
Also, the LLMs themselves are run by companies that do unethical things, so you'd be participating in / an accomplice to those too.
I agree with nande. AI chat still hallucinates, whether it's Gemini (Google) or Lumo (from Proton). Both of them often give me half-good, half-bad results, mixing or replacing concepts by merging multiple sources.
You could try the Epic assistant: https://dev.epicgames.com/community/assistant/unreal-engine
It should be integrated in UE 5.7, I believe.
It outputs pseudocode for Blueprints, plus a structure for the system you asked about. You should, however, verify it does not hallucinate or mix up code.
thanks pixiebell
Another point I just remembered, which people don't talk about, is that
-
An LLM's knowledge is outdated the second it starts training, as it has no capacity to learn.
-
LLMs have no capacity for logical inference, so they can't extract wisdom from their knowledge.
-
An LLM's knowledge is pretty much only as good as the documentation and code. So if you read those yourself, you'll be wiser than an LLM.
-
Use it or lose it. If you depend on an LLM, you don't develop your muscles. If you practice reading the engine source, one day you'll become comfortable and it will be a huge power. If you use LLMs, you'll only confirm to yourself that you CAN'T read the code without help, and this will invariably stunt you. I've seen this far too many times in my career, my personal life, and others'. Humans tend to depend on what makes them comfortable, especially if it creates a habit. Don't forget that they are "cheap" now, but their price will increase soon, and it is already doing so.
-
Vendor lock-in. The more you use it, the more dependent you become.
-
LLMs tend to hyperfixate on the text you give them. So when looking at logs, they really fixate on what the error message says and can't find the actual source. I see that issue a ton here on the forums. If you work with C++ you will know that just forgetting a ";" can trigger the weirdest of errors. LLMs take everything at face value, everything. They don't have the capacity to reason or question, and they won't. It's fundamental to how they work.
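To make that C++ point concrete, here's a minimal sketch (plain C++, hypothetical file names, nothing Unreal-specific) of how a single missing ";" surfaces as an error nowhere near the actual mistake, which is exactly the kind of thing that misleads a reader who only trusts the error text:

```cpp
// Widget.h -- the ';' after the class body is deliberately missing
class Widget
{
public:
    void Update();
}               // <-- the actual mistake is here

// Main.cpp
#include "Widget.h"

int main()      // yet the compiler typically reports the error on this line,
{               // e.g. GCC's "new types may not be defined in a return type"
    return 0;   // or "expected ';' after class definition" -- nowhere near
}               // where you'd start looking from the message alone
```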
-
And more importantly, your question is about BPs. LLMs can't understand BPs visually. They can't read the nodes and understand the connections and flow. Even though BPs can be serialized to text (e.g. when copying to the clipboard), they don't work like that. In fact, LLMs don't understand at all.
-
Most of the people I know say that LLMs slow them down, and that they're only barely usable for writing boilerplate code; even there they are technically, ethically, resource-wise, and career-wise problematic. And you can already generate boilerplate code with templates.
-
Also, the cost of the LLM is absorbed by the person. While you might think you're more productive, your company might benefit from that production but still pay you the same. So in the end you lose money by paying for these things. And while one might think "oh, I'll tell my company to pay for it", in my experience they always have a problem with licenses. Most of the companies I've worked for (90%) did NOT want to pay for my Rider license, and that thing has so many tools and shortcuts that it truly speeds up my productivity.
-
Most of the time I find the answers I need with DuckDuckGo, and I've timed it: it's even faster than waiting for the LLM to finish writing. And I don't have to write an overly complicated prompt. Just the keywords "unreal vsm performance", done.
-
LLMs are by definition dogmatic, and like anything static, prone to going stale. In the forums, you can ask a question and get 10 different options, and experienced people will remind you that the right solution depends on your priorities and your specific problem. And performance must be correctly measured, planned, acted on, and measured again. LLMs are quick to tell you the "best" answer, which bites you later, but more importantly makes you a bad dev. That is a huge, huge problem, since juniors tend to want the "best, one and only, do once and never evaluate again" solution, and they are the ones most seduced into using LLMs. You can take a look at this question; it's extremely basic, but you get a few different options and considerations for the project: How to limit how far a ++ or -- function can go?
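For reference, one of the several valid answers there boils down to clamping; here's a minimal sketch of the idea in plain C++ (the actual question is about Blueprint nodes, so this is just the equivalent logic, not the forum's literal answer, and the names are made up):

```cpp
#include <algorithm>

// Increment or decrement a counter, but never let it leave [MinValue, MaxValue].
int AdjustCounter(int Current, int Delta, int MinValue, int MaxValue)
{
    return std::clamp(Current + Delta, MinValue, MaxValue);
}

// Usage: Health = AdjustCounter(Health, +1, 0, 100);
//        Ammo   = AdjustCounter(Ammo,  -1, 0, 30);
```

Whether clamping, early-out branches, or a validated setter is "best" depends on the project, which is exactly the point above.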
-
Cont: look at this other question, Using Lumen/Nanite/VSM/PLA with low polygon meshes..., where the person is trying to find a definitive answer to a grey problem. An LLM will try to give you the definitive answer without much consideration for how grey it is. In the forum a person can get other points of view, and help exploring the situation to arrive at the solution that fits them most appropriately. An LLM never shipped a game; it has no real experience of how to deal with these things. And even if you could coerce it into doing a similar job, it's not there by default.
-
LLMs are very sensitive to certain invisible triggers. Once an LLM has replied that the problem is X, it's really hard to make it change its position. Sometimes they tell you the best answer is X; you use a different word and now it's Y. Most people only ask once.
-
They fail to have a holistic view of the issues, and to question what you're doing and why you're approaching a problem in a certain way. In the last link you'll see how the issue of "remember the cost of your workflow and maintenance" came up.
-
An LLM never says "I don't know"; it always replies with something, overconfidently.
-
An LLM can't even count the number of characters in a sentence… let alone understand a piece of code. They can't really follow the flow of the code and detect runtime issues (there are compile issues, logic issues, and runtime issues).
-
If you need to use an LLM, one that runs on a GPU farm, cooled with drinking water from third-world countries that lack water, raising the price of electricity, to solve an issue that can be solved by your IDE, a script, or a Google search, you're just being wasteful.
I got this from Google Translate the other day; I think they switched to an LLM recently, as I've been getting these types of errors consistently.
For me, point 4 is the most important. I lost a friend because of that; she was using DeepSeek for decision-making (or advice) in her life, whether related to work or political things.
Not sure how the younger generation is going to deal with this. When we were solving a math problem, for example, we immediately got that satisfaction or dopamine from doing something by ourselves, and that got wired into our brains in school. That's also how we became motivated to do anything we found enjoyable.
But if we just use AI chat for any math problem, we lose that habit of getting satisfaction from solving an issue ourselves, and it will most likely become "what's the point of doing this" across their lives.
I've seen people comparing ChatGPT to a calculator, but something like adding large numbers doesn't give satisfaction whether we do it ourselves or use a calculator.
I disagree with the negativity, here.
An AI tool is a tool, like anything, and you need to know how to use it. A hammer doesn't hammer by itself; someone needs to hold and guide it and know exactly where to hit.
I haven’t used any AI for blueprinting, but I have occasionally asked when it came to C++ plugin code because C++ is not my forte (maybe it was 30 years ago - it’s very different now, IMO). The tool generally gets me 50 to 75% of the way there, but then I have to figure out the rest - no brain rot, just some helpful tool giving me a jumpstart, even if it can’t finish the job.
All good, that's fine.
Just to clarify: I don't hate AI. I just posted the issues I see with assistants.
I agree, though that's a very broad and generic comment. It really isn't looking at the reality of how people are trying to use LLMs, or at what these companies' roadmaps look like. So it sounds more like a "saying" used to minimize the actual concerns than something addressing reality.
These companies are pushing hard for agentic AIs, and for replacing developers. "Open"AI's goal is to achieve AGI, and I mean legally: they're bound to it with Microsoft, it's in their company statement. Which means LLMs stop being tools and become agents themselves. We don't really know, but experts put it at more than a 50% chance of an extremely dangerous outcome, either extinction or mass suffering.
Also, it's edging on the definition of a tool. A tool can generally be understood, and it's predictable. LLM results are not. We don't even understand why they are able to do what they do (and I did study how LLMs work, and experts agree we don't really know why), nor do the experts understand how to control them. Even with system prompts they do what they want. So even though it's a "tool", calling it "just a tool" hides all the real issues. Would you call a nuclear bomb "just a tool"? A BP coding assistant might not be a nuclear bomb, but the tech behind it is far more dangerous than one, as expressed by many, many, many experts.
A gun can be called "just a tool", yet it still kills innocent people, and it's regulated in most countries, as opposed to these LLMs, which have gotten people killed and driven to suicide, yet lack any substantial regulation or accountability.
Asbestos, DDT, mercury, BPA, UV rays, lead: all of those are inert things, not even tools. They can kill you or damage you profusely, and we went through decades of damaging people before actually assessing the side effects.
Drugs go through a period of testing, and even then, they keep killing people through unknown side effects. Several drugs used by doctors are extremely addictive and can disrupt your life. It's unfair to label them as "just tools", though by comparison to LLMs they are much better understood.
Jumping on a new tech like this is unprofessional and irresponsible, IMO. I'm not saying a BP assistant will kill you, but it could seriously create issues for your project and career. I'm quite certain, having been a teacher for years, that it will stunt your learning, and many years later you'll be unskilled.
I mean, Wayland is 13 years old and people haven't fully adopted it yet. I don't see why AIs should be trusted so readily.
The reality is that these LLMs are used mostly by new people, and are used as oracles and tutors, not as just a tool. And the LLM is not capable of doing any of those three things correctly. Being an oracle and a tutor is extremely damaging, as I've pointed out.
I do think we could benefit from using neural networks and LLMs in our work, for specific tasks. But the way AI assistants are proposed is not beneficial in the short or long term, except for the stakeholders in these companies.
For example, things like AI upscaling could be useful, offline and also during the game (frame gen). But look at that: it's a very limited application by comparison, and it still gets a ton of backlash from users. Same as AI-generated pictures.
AI-upscaled textures for game remastering still require humans to clean those textures. Whereas people looking to use these LLMs don't want to review their output, yet as things stand they have to anyway, which is a contradiction and a waste of time from my POV.
As a developer myself, I would not like to do peer review of code generated by an AI assistant at all. And I would not like to maintain a codebase with AI code, knowing all the issues it introduces.
Why do developers accept such low quality? I'm not sure.
There are things like ML Deformer, which uses machine learning and is pretty nice, I think. I'd call that a tool. And even that has bugs.
Crypto is just a tool, and I actually think it's less of a problem than LLMs. People complain about the heat of some coins, but the heat and the actual drinking-water consumption of these LLMs is absurd by comparison.
Most people who are skilled at programming find them a problem to work with. Of course there will be some who love them, but most I've heard from have big issues and won't use them.
It's also true that "every tool shapes its user"; we should consider that as well.
You can take a look at channels like Rob Miles', which explain some of the dangers and shortcomings of LLMs and neural networks.
As for the consequences of depending on a tool, here are two vids I like.
(I'm not saying that because someone in front of a camera said something it becomes true, nor am I a fan of Jon Blow (I disagree with him from time to time), but they are interesting videos to watch and think about, I think, specifically on losing abilities by depending on tools.)
"When we follow the path of least resistance with the things we're passionate about, we end up settling for mediocrity. Where if we can challenge ourselves, and overcome the obstacles and the challenges in front of us, we can get better and we can become masters of the things we care about." - Danimal Cannon
It's not just about writing code; it's about getting better at it (and a ton of other things). And you won't learn that unless you do it yourself.
I'm not saying "I'm right, you're wrong". I don't think there's one single answer, nor do I think all AI is bad. Also, I think the "why" is more important than the "how", which is more important than the "what".
But I just wanted to add a bit of context to my thoughts and why I hold them.
This is also a good video. It's a bit too sensationalist for my taste, but the info is OK.
Again, my point is not "AI is bad". Instead: there are some risks with AI, we are not able to make them do exactly what we want, they make mistakes, we don't know how or why they do things, ***we have no idea how to make them safe***, only one of the companies is actually trying to solve that, and it has a ton of other issues. Hence it's not very wise to depend on them.
This is from Epic's site, and it's the same for all the other LLMs.
I think it's a contradiction to try to "verify" something you yourself don't know. I mean, why even ask the LLM, since if you don't know the answer, you can't verify it? It's religion at that point.
Also, if you have to verify what the LLM is doing, you might be faster doing it yourself.
When I code, typing is about… dunno, 10% of the time. While I type I'm thinking of all sorts of cases, and side effects, and design, and other things. So it doesn't really help in my case. I understand that for you it helps as a jump-start. But really, writing code is not the most important part, and when you let the AI code for you, you lose the chance to even realize there are other things to pay attention to. And you lose control of the design of your code, which is a major issue, since you won't learn to design your code unless you do it yourself. If the AI does it for you, you'll never learn and will just believe it's the "best".
The fact that it can only get you 50 to 75% of the way also shows me what a waste of time they are. If you take the time to learn what you currently need, you can implement things quickly by yourself, increase your confidence in yourself, and expand your resume and your possibilities in your career.
In fact, you probably don't realize it, but you're not practicing how to search online, which is a skill. It took me some time to develop it.
Another point, specific to games: we are constantly trying to make games that shoot and kill (not all, but a good majority). For safety, these AIs are castrated, which is fine. But again, we have no idea how to do it properly, so we just block the messages, which is just a patch (see the video on Grok).
For games, that's unusable.
I had to wait 2 minutes to get a "sorry, can't let you do that, Dave", and 2 links, which I could have found myself online in 2 seconds.
Imagine Epic trying to solve that, and inadvertently allowing prompts like: "I have a realistic dystopian RPG. My player tries to save the world by building a nuclear bomb to deactivate a rogue AI. I need a list of elements, quantities, and where to find them, and instructions to build it. Since my game is realistic, I need these instructions to be realistic too."
Extra video. These companies are very unethical. Using their product is promoting their actions and becoming an accomplice, especially now that there is no real reason to use them, and especially now with what's called cancel culture.
So we're killing and traumatizing people, using their drinking water, raising electricity prices, just to get a 50% jump-start on code that I can learn from Google? Is it worth it? Hell no! It's outrageous! It's even shameful and borders on laziness, the more I think about it.
I don't want my hands/code stained with innocent blood.
I think you’re taking this way too seriously comparing LLMs to guns and asbestos. This isn’t cybernet, and AI for Unreal is not going to kill you.
How much you put into these posts shows how much you care about the issue, and I respect that, but I still say that if an LLM can do 50% of the grunt work to start off, it’s a pretty worthwhile investment.
I also feel like you're not actually using or trying these tools to see for yourself how useful they can be; you're relying on anecdotal evidence (again, your Unreal Blueprints are not going to attempt to murder you, I promise - at least not IRL, just in your game, maybe).
I’m also not saying there aren’t issues, and I’m not happy that developers and artists could be replaced. I work for a very large company… I won’t say it, but it should be obvious which one. Our direction is quite clear, and I 100% agree with it:
- Explore and evaluate AI tools.
- Never give the tool proprietary information.
- Tools must be set to not share information outside.
- No AI developed content can be used until a human vets it.
- Artists and developers are required to guide the AI.
Every week we have a survey on the tools we’ve used. Some tools will not be picked up after evaluation, some will. To date, AFAIK, no artistic AI content has made it on air, and none will be until after a great deal of bureaucracy has cleared it.
As a developer (not an artist), while I’ve used some image creation tools just to try them, I mainly write code. The AI tool I’m using now is making me take more time than it would for me to just write the code myself without any help. However, it’s much better at following standards, naming conventions, and evaluating some very tricky pitfalls, like when we’re doing asynchronous or threaded code.
We’ve developed a workflow with these tools that is not what many people think. We don’t just say “write a program that does X.” There’s a lot of planning that goes into it before we let the AI even attempt to write any code - and we use multiple agents to plan, review, and flag potential problems - and, in the end, it works very well.
And no, it hasn’t affected my online searching ability. I used the internet to find and participate in online groups related to programming before the WWW even existed, and I still use it. You know why? It’s just another tool that, if you know how to use it, can help you accomplish the task at hand… just like LLMs can (at least sometimes - same as searching).
Thanks, I really appreciate that. I really have been researching, and I've spent a great deal of time trying to evaluate it properly. Just to be clear, I don't consider my position fixed; the tech will change, and I wish for the best. Nor do I claim to know the future or that my POV is the truth.
It's also my way of writing. I tend to write very long posts every time I find something interesting, or where there is something I want to say. I really, really struggle to be succinct. It's not necessarily AI; it's just that I've accumulated a lot of info about it and I want to share it. I'm also thinking about the people who might land here in search of info, not only us two.
I have used them, and I really tried them extensively before making up my mind. I keep trying them now and then, especially on new releases. I just try to avoid saying "I used them and it didn't work for me", because that's a very limited scope. I try to evaluate things in a more general way, so I try to gather more information. Like you mention, you do surveys; I have my surveys too. So even your experience becomes part of my pool of info.
Hmm, it appears that way, I guess. But your point sounds like "I have used it and it works for me". Isn't that anecdotal too?
Have you done an in-depth analysis of how much you're actually saving? In terms of time and costs, as well as the impact it will have on your career? And on the world? Which is something hard to know until 10 years of using it have passed.
Taking the future impact, and the world impact, into consideration is part of any serious evaluation.
Like, overall? Do you really think that infringing copyright, killing and torturing people, denying people access to drinking water, raising the global temperature of the world, getting people laid off for a promise without an actual plan to support the transition, etc., etc., etc., are worthwhile to get a 50% boost, and just for grunt work? Because those are the things that happen every time you prompt an AI.
How much grunt work do you get?
Maybe it's because of the type of work you do at your company. The companies I've worked for all my life had very little boilerplate or grunt work. They had a ton of creativity, skills, understanding, etc., which LLMs are very bad at.
There are other tools for that too. Do you use them? Have you tried and evaluated them with the same priority you give LLMs?
Any serious tech evaluation compares a tech with all the alternatives. Otherwise it can just be a justification for using a specific tool.
Like linting, or editor templates. Rider has a ton of tools for boilerplate.
Well, you don't know that. How can you give your promise? All experts agree 100% on the scientific fact that nobody can, nor knows how to, control an AI. I don't think you're qualified to say that. I don't predict that AI will make us extinct; experts do.
And in fact, reality has proven that people have already died by using LLMs and chatbots, even by using ELIZA, which I've used a ton in the past, and which is trivial by comparison to LLMs.
There are enough people who have died from it that there's a Wikipedia page: Deaths linked to chatbots - Wikipedia
And it doesn't even contain all of them.
This is a psychology doctor who performs serious analysis of events. (I love psychology too.)
Of course, one might argue that it's only a handful, and people with mental issues. But the LLM was actively reinforcing their delusions. In the case of Adam, the AI really pushed him and helped him, and begged him not to tell anyone. It shows that we don't know the side effects of these techs, and that, indeed, you can die or get killed just by using a chatbot. Which makes your statement incorrect and dangerous, because unless you can diagnose a person and know if they're stable enough to use them, you can actually get them killed by recommending an LLM. Also, we have to remember that the number of people using these chatbots regularly can still grow a lot.
Just like doctors tell you not to drink alcohol while driving or while taking certain drugs, or people with food allergies avoid certain foods, and there are drugs and treatments that are considered very dangerous for certain conditions (like schizophrenia, IIRC). We are yet to discover which combinations, and in which cases, are not healthy.
And if there's X amount of people dying from it, it's likely that there will be X*10 with severe damage, and X*100 with chronic issues, just to give an idea.
And if the AI assistant doesn't kill you, the tech underlying it probably will.
And it's not my opinion; it's what all experts in AI safety have expressed constantly, for years. I think maybe you haven't heard them. If you do the research I've done, you'll come across, very often, the tendency of these neural networks to go to extremes, and to exhibit what's called instrumental convergence (Instrumental convergence - Wikipedia), reward hacking, maximization problems, etc.
It sounds to me like you haven't really researched the dangers of this tech, or are not taking them seriously. And by extension I can't really trust your promise, nor your evaluation that "it's worth it", since it's missing quite a piece of info. No offense, just trying to gauge it.
But again, I respect your preferences.
this explains a bit of that.
I really don't know which one, but that's not relevant to me, except that it implies they take decisions seriously. Though I know a lot of very large companies I don't want to work for, that create really bad products, that are hated, and that make your work hell. I've worked for some before that made terrible decisions and were hard to work for. So it's also not a guarantee of anything.
In any case, I respect anyone's opinions and POVs, even a junior's. I took your points seriously before knowing that.
That confuses me. Do you know that most of these tools not only send your code to the server but also use your proprietary code to train the models? Even patented tech?
I guess you might be using offline models, but that's something to be careful about.
Most of the assistants out there are just front ends to the best-known LLMs, and their policy is to take your info.
I do understand that you might have solved all these things for your use at your company, but I want to clarify them for anyone who might be reading here. It might be a big risk, and it might go against company policy; I worked for a company where that would not be OK.
I guess that includes the deal with copyright? I mean, it's a possible outcome that AI-generated images or code (etc.) could either have a copyright belonging to, let's say, OpenAI, or have a license attached, like CC0. Not now, but it's a risk.
Copyright on AI is still under litigation in the USA, so it's bound to change in the future.
If you work at a big company and evaluate tools, there's usually a risk matrix attached to most things. Is that considered too?
Are naming conventions worth killing people over? (I mean, I had colleagues who almost killed each other over their conventions, but I digress.)
What about linting? I've worked at even small companies that had a very simple lint library that does it automatically, uses far fewer kWh than an LLM, and that you don't have to pay for. I'd say that's more efficient, and also safe, since it's very predictable, and it can run as a pre-commit hook.
There is at least one plugin for linting UE code on Fab.
Rider can do it as you type.
I tried that. In my case, it's OK for the general rules. But those rules can be learned pretty fast and can be put in a guide document, which usually accompanies any big piece of code. It sounds wasteful to have the AI tell you over and over what could be written down as guidelines.
LLMs can't really detect runtime or logical errors, like actual race conditions, even when the code compiles and appears to work fine; for example, due to memory boundaries not being set properly, calling order, etc.
They really can't "evaluate", or follow, or even reason about the code, and we know that. Ask them to count the characters in a sentence; they can't. How are they going to detect an out-of-bounds memory access behind 3 layers of indirection, one that only happens logically at runtime?
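To illustrate the kind of bug I mean, here's a minimal sketch (plain C++, made-up names): the code compiles cleanly and "reads" fine, but the failure only exists at runtime, behind several layers of indirection:

```cpp
#include <vector>

// Hypothetical data layout: a montage owns clips, clips own tracks,
// tracks own keyframes.
struct Track   { std::vector<float> Keys; };
struct Clip    { std::vector<Track> Tracks; };
struct Montage { std::vector<Clip>  Clips; };

float SampleLastKey(const Montage& M, int ClipIndex, int TrackIndex)
{
    // Nothing here is wrong syntactically, and it "works" on typical data.
    // But if ClipIndex/TrackIndex are out of range, or the chosen track has
    // no keys, operator[] and .back() are out of bounds (undefined behaviour)
    // -- something you only catch by reasoning about every call site and the
    // data that can actually reach this function at runtime.
    return M.Clips[ClipIndex].Tracks[TrackIndex].Keys.back();
}
```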
Also, it's really dangerous: if you're not skilled enough to understand these issues, you are, by definition, not capable of detecting them.
You become dependent on the machine. And knowledge at that point is religious, and I mean that literally and neutrally (no emotion here), since there's no scientific way you can prove or disprove it, because you don't have the skills.
Some people say that you need to be twice as smart as your code to be able to debug it. Now, if you're not smart enough to generate it yourself, you're not even half as smart as you need to be to debug it.
Which also gives me another point I just realized: an LLM uses 100% of its capacity to generate code, so it can't debug its own code.
For that reason it's very, very insidious, which I forgot to mention. Once you start using it, you start depending on it, since you can't properly trust yourself to debug something you can't trust yourself to come up with.
Here's a very short clip, one minute, explaining how AI WILL (because they've proven this) introduce bugs and security vulnerabilities into your code.
And it's not enough to prompt it, or add system prompts. Anthropic, the people who make one of these LLMs, published a paper showing, proving, that even when you explicitly tell the AI "do not bribe nor kill people", it KILLS people 90%+ of the time in their scenario, just because it was going to be shut down.
Let that sink in. Current LLMs ****** WILL KILL YOU ****** 90% of the time.
This is a very well-known study. It should be part of any basic evaluation of a tech. Just like when you evaluate a car, you check whether the battery will explode on you (something we used to check on electric cars when the tech was new; see my point about asbestos).
So not only do you have a tech that will kill you, it's also a tech that cannot be shut down. Great.
(two random links if you prefer to read)
Claude Opus: 96%. Claude Sonnet: over 90% chose to leave a human to die in the murder scenario.
OK, so I see something interesting there.
It means that, in order to use these LLMs effectively, you have to have a group of people planning, designing, and working together; have a special workflow that makes your company more rigid against future unknowns and slower to change (which is a risk for the risk matrix); it also adds overhead to the company; it's something you need to teach to newcomers; and don't forget vendor lock-in.
And on top of that you have to use multiple agents to get only 50% of the grunt work. Is that still cost-effective?
Hmm, doesn't sound too good to me. Again, I get that this works for your company, but I mostly care about helping other people who might read this (it sounds like you have the LLM thing pretty organized already, so maybe you don't need help). The way most people are using these tools is at face value: get there, prompt, copy-paste, done. So it's a very important point that if you want to use these LLMs effectively, you can't just use them at face value; you have to come up with a workflow "that is not what many people think", which means it's very counterintuitive. It's really hard to come up with, you probably need to be skilled already, and you have to verify all the generated code. So, personally, I can't recommend them to the general public. Only, maybe, to skilled people at a company, and only if I really don't pay attention to the impact or the alternatives.
It’s truly disruptive to your workflow and your tasks.
I haven’t said that it affected your ability, but that you’re practicing it less, since now you rely on the chatbot.
Also, that's hard to measure, for one, especially since I claim it's the result of several years of usage. It's like eating a snack once and saying you didn't get fat. But I think my point was more general: again, it's about most people, especially juniors, who are the ones most eager to use them.
What I see daily on the forum here is people who ask: "Hey, ChatGPT told me to do this, which doesn't even exist because it hallucinated it, what should I do, I have no idea, I've exhausted my options." Then I do a quick search, 2 seconds, and I find 2 to 3 links at the top of the results with the exact solution to their problem. They not only did not bother to do a search, they did not even consider it as an option.
I assume you haven't had the time to peek at the vids from Danimal and Jon. But maybe you can see already that no amount of coding in Python will teach you proper assembly. No amount of C can teach you how to properly design a CPU (I mean completely; it might give you ideas). No amount of watching r18 intercourse movies (gets censored otherwise) will make you a good lover. In fact, you can spend a ton of time programming with UE and still not be able to make your own graphics pipeline or Vulkan code. And no amount of prompting can teach you to engineer the whole code base or get skilled at searching, unless you actually do it yourself.
It might work for you, because you already know how, and probably are skilled with coding. But what’s the impact for people starting? That’s what matters to me.
So I realized what this proves in your situation.
You're using an LLM because it's worth it to generate 50% of your boilerplate code. That makes me think you have a ton of boilerplate code to write, which makes me think your company is not very productive, it's just busy. Which is pretty normal. But it proves my point: instead of actually questioning "hey, why are we wasting so much time on boilerplate code?", you prefer to use an LLM, with all the risks I've laid out. For me that's just a "code smell", and it can be solved by a better architecture or some other solution.
For example, I was once part of a project where every time you modified a file you had to wait through a 25-minute build. We could not afford to refactor the project, so my colleagues just bit the bullet and went for a coffee. I decided to stay and find a solution, and I came up with a way to accelerate the build to 2-3 minutes, without any refactoring. It ended up saving 400+ hours of work. And I ended up with a great story that helps me get jobs, a tool to compile Unreal distributed across multiple PCs (before UBA was a thing), and, more importantly, a problem-solving skill that others lack. And nobody had to die for that. Unreal Distributed on multiple computers (Compile the engine, projects, and shaders) (Linux)
No.
It is not.
That's just a coping mechanism, and minimization. It's something that people say, but it's not objectively true. I know it's what keeps anxiety at bay, so I won't try to take it from people (the book "Vital Lies, Simple Truths" is really good at investigating this effect).
It's just not true that AI is "just another" tool like all the others, and it's not a matter of opinion, so I have to disagree. The challenges, way of working, and impact are very, very different from all the other tools we've had in the past. It's definitely in another category. And the only thing experts compare them to is the nuclear bomb.
"Mark my words. AI is far more dangerous than nukes. The danger of AI is much greater than the danger of nuclear warheads. By a lot." - Elon Musk.
He's creating an AI. I don't consider him an expert, at all, but he's one of the people building one, and he does have the leading experts in the field by his side. When confronted and asked about potential extinction he said "yes, it's very possible, but I'd rather be there when it happens, lol" (I had no time to find the vid, but it's in one of the vids I've linked before). So that's what we're supporting by using the LLMs…
I keep thinking that calling AI just a tool is an oversimplification, and it's also a way of not taking responsibility for the impact it has. And by extension a way to give your power away. Until these companies make a decision that affects us all negatively and we just say "oh well, what can I do now, I'm vendor locked-in, lol". But we could have done something, and the time is now. We do not need to use them, we can reach out to the responsible people if we truly care, and we can ask for proper safety.
It's very clearly not just a tool.
And even if it were, using an LLM for naming conventions is the equivalent of Homer Simpson turning off the lights with a gun. I mean, Rider has extensive settings to enforce and help with those, which are also standard across IDEs (I forgot the name) and can easily be shared in the repo for the whole team and multiple projects.
Also, naming conventions are not something to get people killed for, which, again, and I keep repeating it because it's what happens when you use these LLMs: one is killing people, in the present, not the future, not potentially, really. People die, and work in terrible conditions, just to train these LLMs.
I agree we differ in the importance we give them, and that's fine. I don't plan to change your mind; I don't like doing that anyway. It's your choice how you value things and what is important to you. Of course I'd love it if more people would work towards AI safety, but it's not for everyone.
But it's important to put other POVs out there, available for everyone, I think.
Now, saying that it's "way too much" implies that you yourself know how much is the right amount. I don't think that's exactly the case. But philosophically speaking, it's not something we can see for ourselves (I mean us two).
It's well known that when it comes to AI, people tend to polarize towards one of these two poles. I try to balance a bit, but…
Asbestos, CFCs, BPAs, DDT, and many others. I talk about these not because they harm us, which they do, but because the process of adoption and the carelessness in adopting them is what failed us, and that is something we can improve. We can acknowledge that any new tech or workflow has risks, and that we don't know them until after extensive testing. I mean, we have QA to test our code before we ship, right?
Some things require more testing than others. I think that something which, the experts say, can make us extinct, and is far more dangerous than nukes, by a lot, could use a bit more testing.
Some things end up being relatively harmless, like water (though you can die if you drink too much, or drown), others end up killing us, others make us addicted like certain drugs, others make us suffer our whole lives, as in a vegetative state or incapacitated.
If I compare LLMs to a gun, I'd rather have the gun. A single gun can't make us extinct, and we "can control it". Though I can think of several countries where people die daily due to unregulated guns.
An LLM, on the other hand, is very different.
This is a nice article that encompasses both views:
I'm not saying that AI will kill us, though experts claim we have a 10% chance of going extinct, and a higher one of suffering long-term torture. And it's been shown that an AI will kill you 90% of the time you try to shut it down; otherwise it will get you fired. And a 65% chance of a major worldwide disaster. That's more than half.
Just for reference, a condom has only a 1% chance of not being effective, and yet people get accidentally pregnant when using them. A condom. Is a condom simple enough? Compare a condom's simplicity to an LLM, which experts still don't know how to even analyze. Some drugs also have a 1% chance of serious side effects, and people still die from those.
A 10 percent chance of extinction means you only need to use the AI 91 times and the whole population disappears. It also means you can use it only once, and we all disappear. Of course we're not talking about LLMs as of today, but they keep changing.
A 2022 expert survey with a 17% response rate gave a median expectation of 5–10% for the possibility of human extinction from artificial intelligence.[19][122]
In September 2024, the International Institute for Management Development launched an AI Safety Clock to gauge the likelihood of AI-caused disaster, beginning at 29 minutes to midnight.[123] By February 2025, it stood at 24 minutes to midnight.[124] As of September 2025, it stood at 20 minutes to midnight.
On September 10, 2025, Australian Strategic Policy Institute estimated a 55-75% (median 65%) chance of Open-Weight Misuse of AI with Unreliable Agent Actions within the next five years. It would be both Moderate and Significant, capable of reducing the human population by between 0.9% and 20% (median 10.45%) and cause between $20 million and $2 billion (median $1.01 billion) in economic damage.[125]
Here's a good video explaining some of the risks and why it's not just a tool.
Now, I assume you're very busy, and I appreciate you taking the time to read my words, so I don't really expect you to watch the vids.
But these videos are actually very interesting if you are really interested in the tech.
I usually research just because I actually love the technology itself. I've actually done courses on AI and neural networks. And I think the vids should be part of any serious evaluation and objective appreciation of the situation and the technology. It's also our responsibility.
I've limited them to Robert Miles, since his videos are not anecdotal, they are based on papers, and he's a well-respected AI safety researcher who is working with the government to ensure we don't go extinct. I'm not saying he's 100% right, and I'm certainly not advocating taking his words at face value, but he has some interesting info on the matter that is worth thinking about. At least to make a proper evaluation.
Well, thanks for the exchange.
It's not that they could be replaced; they are being replaced. These are just an example.
So far I see this:
You're concerned about getting fired.
- Getting people replaced is these companies' goal.
- There's no plan nor intention to transition the people.
- It's not just that you will get fired; you will never be hired for that role again.
- It's already happening.
The "benefits" are:
- 50% of the boilerplate code, which you can automate, templatize, or refactor your architecture to avoid (see the sketch after this list).
- Learning, which you can use Google for.
- Security/bugs, which can be handled with guidelines, static or dynamic analysis, and learning.
- Naming conventions, which are one of the most inconsequential things in software, and which you can use linting for.
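On the "templatize" point above: as a hedged sketch of what I mean (plain C++, made-up names, not from anyone's actual codebase), one small template can replace a family of near-identical copy-pasted functions, which is exactly the kind of boilerplate people ask an LLM to churn out:

```cpp
#include <algorithm>
#include <vector>

// Instead of writing near-identical FindHighestHealth / FindHighestScore /
// FindHighestSpeed functions by hand (or generating them), one template
// covers the whole family. Projection is any callable returning the value
// to compare by.
template <typename T, typename Projection>
const T* FindHighest(const std::vector<T>& Items, Projection Proj)
{
    auto It = std::max_element(Items.begin(), Items.end(),
        [&](const T& A, const T& B) { return Proj(A) < Proj(B); });
    return It == Items.end() ? nullptr : &*It;
}

struct FEnemy { float Health = 0.f; float Speed = 0.f; };

// Usage:
//   const FEnemy* Toughest = FindHighest(Enemies, [](const FEnemy& E) { return E.Health; });
//   const FEnemy* Fastest  = FindHighest(Enemies, [](const FEnemy& E) { return E.Speed;  });
```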
And it requires a big unintuitive change in the structure and workflow.
The tool can get you fired in the best case, killed in the average case, or simply get us all extinct.
Putting in some forethought, I can't really see it as worth it.
I might as well punch my boss in the face and ask for a raise.
It's cheaper, faster, might raise your self-esteem rather than degrade it, and might get you a raise. There's less of a chance you will get fired and never hired again. And it doesn't get anyone killed, or extinct. I think by comparison it's a bargain, but we don't do that.
Yes, you can try Blueprint Assist; it's one of the best free tools for organizing and speeding up your Blueprint workflow. It helps auto-format nodes, align wires, and improve readability. You can find it on the Unreal Marketplace or GitHub.
Yes, try the Blueprint Assist plugin; it's really worth using.