AI learns to write its own code

I was actually trying to write something like this a while ago. It was too complicated, though, so I abandoned the project:

If AIs can write their own programs, they can adapt to different environments and challenges as they arise. Kind of scary, actually. Part of the future, though.
Great news for non-programmers though (Unreal Artists!)

That’s a terrible idea!

Soon it will rewrite itself and you no longer have any control over it :smiley:

This isn’t actually a new concept; it’s a form of evolutionary genetic algorithm.
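For anyone who hasn't seen one, the core loop of a genetic algorithm fits in a few lines: generate random candidates, score them with a fitness function, keep the fittest, and refill the population with mutated copies. This is just a toy sketch (target string, character set, and parameters are all illustrative, not any real "code-writing AI"):

```python
import random

random.seed(0)

TARGET = "print ok"  # the "program" evolution is supposed to discover
CHARSET = "abcdefghijklmnopqrstuvwxyz "
POP_SIZE = 100
MUTATION_RATE = 0.05

def fitness(candidate):
    # Score a candidate by how many characters match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    # Randomly replace each character with small probability.
    return "".join(
        random.choice(CHARSET) if random.random() < MUTATION_RATE else c
        for c in candidate
    )

def evolve(generations=500):
    pop = ["".join(random.choice(CHARSET) for _ in TARGET) for _ in range(POP_SIZE)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if pop[0] == TARGET:
            break
        # Keep the fittest half, refill with mutated copies of survivors.
        survivors = pop[: POP_SIZE // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=fitness)

best = evolve()
```

The same selection/mutation loop works whether the candidates are strings, neural-network weights, or abstract syntax trees; the hard part (as others note below) is choosing a fitness function that actually captures "better code."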

Yeah this has been around for a while already.

One of my Computer Science professors once told us it was mathematically impossible for a computer program to write its own programs and showed us the mathematical “proof” using the tape-machine method (if it can’t be done on a tape machine, it can’t be done). Was that inaccurate or did I misunderstand?

That proof is important, but it is also not the end of the story. The proof assumes certain things (such as what kinds of inputs are available, and the totality of the system versus external stimuli) that aren’t necessarily true for a system implemented in practice.

That being said, I think the main thing missing in all these AI stories is some kind of motivation for the AI to make the leap from its designed-in sensors and goals to suddenly learning how the rest of the world affects its current state, deciding to change that state, and doing all of this in a way that both survives and is not obvious to its owners/operators. I just don’t see it happening for a very long time.

I see. Very interesting, thanks!

The true question is, will it dream?

It might be able to write its own code, but how will it know what goal it should pursue or accomplish? Desirable results depend on what our own goals are, so if an AI can’t create or conjure its own goals, it can never tell a good result from a bad one.

Skynet: we are all doomed. All it needs now is some test that tells whether new code is better than old code.
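That "test" is essentially an acceptance function over a shared test suite. A minimal sketch (the two `abs` variants and the cases are purely hypothetical stand-ins for "old code" and "new code"):

```python
def passes(func, cases):
    # Count the test cases a candidate function gets right.
    return sum(func(x) == expected for x, expected in cases)

# Hypothetical "old" and "new" versions of the same routine.
def old_abs(x):
    return x if x > 0 else -x

def new_abs(x):
    return (x * x) ** 0.5  # a numerically different approach

cases = [(-3, 3), (0, 0), (7, 7)]

def accept(new, old, cases):
    # Keep the rewrite only if it does at least as well on the suite.
    return passes(new, cases) >= passes(old, cases)
```

The obvious catch, which the replies below get at, is that the suite itself encodes what "better" means; a program can only optimize the goal someone already wrote down.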

Isn’t that basically just procedural generation? Now, if we could get an AI to make assets for games, design the maps, and program them, that would be impressive.

The problem I have with your argument, Nawrot, is how the AI would define “better.”
I don’t see how an AI arrives at a definition of “better” that includes suddenly zero-daying external firewalls and taking over the world.

And that is exactly why nobody has made a self-replicating AI yet. That is exactly the type of problem that is easy to express in human language but impossible to code. We may still have a century or two, for all our human pettiness. I was being a bit joyfully sarcastic there. However, I think we are not far from the singularity (if true general-purpose AI is possible).

Daisy … daisssy … dd … ai … sy … :slight_smile:

It might be better to think of this as an AI that can modularize code, rather than one that can write new code altogether, much less improve itself.

It’s also typical that we’d make an AI whose singular purpose is to commit copyright theft on a massive scale.

A lot of conventions that we’ve grown up with may not necessarily apply to a future society.
Copyright doesn’t have to be a thing – there’s no natural law saying it should apply, or that it should apply for as long as it does now, or that it should apply for anything other than the original creative expression, or any of a vast variety of other variants.
Land rights don’t have to be a thing – the land doesn’t have “your name” written on it. Someone a long time ago decided to fence in some land that was un-owned, and it’s been passed along to whoever owns the land right now, but if you think about it, that’s not particularly fair or just to people born now.
Working for a living doesn’t have to be a thing – if robots and AI can really do 90% of all the labor, unskilled and skilled, there simply won’t be enough highly-skilled or creative jobs to keep everybody working for 8 hours a day. The only reason we think it’s bad that “robots are taking the jobs” is that our society assumes that the way we currently optimize for “the most good for the most people” has to always be the best way to do that.

So, once the AI runs things, expect everything else to have to change too. History says that Eastern Europe will probably adopt it first, and the US will be one of the laggards.