AI is making big strides, but should we be wary of bringing AI into our games?

AI is steadily improving and gaining the ability to play games, from Chess and Go to Atari 2600 classics.

Now this is all fun and grand, but the very core of these AIs’ ‘brains’ will be based on conflict and battle.

What happens when we build layers on top of these early AIs, when the core was built to fight and win?

Just look at humanity’s history and note how our brain is built up of layers: the core is based on the primitive reptilian brain, and only the newest outer layer handles higher-level thought.

And what would happen if an AI brought up on GTA / Doom / Resident Evil eventually grew to human-level intelligence and beyond?

Maybe we need an updated remake of WarGames, with modern game-based AI.

And would you want to be trapped in VR with such an AI?

The concept is nice but not really based on reality.

In my field of study (computing and project management), we would call what you’re concerned with ‘emergence’.

Essentially, we create multiple components, each with a simple purpose: input A, output B. When we merge many of these together into complex networks, there will always be some unexpected outcome we didn’t foresee. But that isn’t a case of the computer learning or adapting, and despite the best articles and demos in the world, I have yet to see a computer remotely close to a human brain in terms of decision-making ability.

To achieve that level we would need to program modules covering self-preservation, risk analysis and, essentially, a morality. Even then, any outcome would be predictable.
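To make that concrete, here is a minimal sketch of emergence in exactly that “input A, output B” sense (Python, with invented names and parameters): each component follows one trivial, deterministic rule, and the group behaviour that falls out looks surprising but involves no learning at all.

```python
# A tiny deterministic system, illustrative names only: each agent follows
# one trivial rule ("drift toward the group average"), and clustering
# "emerges" even though no individual rule mentions clusters.
import random

def step(positions, pull=0.1):
    """Each agent moves slightly toward the average of all agents."""
    centre = sum(positions) / len(positions)
    return [p + pull * (centre - p) for p in positions]

agents = [random.uniform(0, 100) for _ in range(10)]
for _ in range(50):
    agents = step(agents)

# The group ends up tightly clustered: surprising-looking, but entirely
# predictable from the rules, and no learning or adaptation happened.
print(min(agents), max(agents))
```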

Your reference to chess is a great one: it’s a game with fixed rules and a finite set of moves on any turn. The computer won’t swipe the pieces off the table, because that is an emotional response; it will only take logical steps.
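For illustration, here is a bare-bones minimax sketch (a standard game-tree search, not any particular engine’s code; `legal_moves`, `apply_move` and `evaluate` are placeholder callbacks you would supply for your own game representation). The machine can only ever choose among the moves the rules enumerate, so “swiping the pieces” simply isn’t in its option set.

```python
# Generic minimax sketch. The search is bounded by the finite, rule-defined
# move list at every turn; there is no action outside legal_moves(state).

def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
    """Search the finite tree of legal moves and return the best score."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    scores = (minimax(apply_move(state, m), depth - 1, not maximizing,
                      legal_moves, apply_move, evaluate) for m in moves)
    return max(scores) if maximizing else min(scores)
```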

If we take Terminator (always a great example), the machine would need not only to know that something exists but also to be able to analyse its value. And given that one advantage a computer has is the ability to process information, it would need to run billions of calculations for its actions, which would breach any number of subroutines.

Could we build a computer to destroy the world? Yes. Could we do it by accident? No.

I think you don’t understand the concept of “AI” as it exists today, nor the concepts of “agency” and “motivation”: what drives a thinking entity to take certain actions.

Let’s say some hedge fund builds an “AI” with the goal to “optimize the amount of money we make.”
The motivation wired into that “AI” would be the delta between buying low and selling high, or shorting high and covering low.
The “AI” would presumably also get a lot of signal inputs from the world – news feeds, weather, sales reports, tweet streams, and the rest.

What people are afraid of in this case is that the “AI” would somehow do three things:

  1. suddenly decide to move those input signals itself to maximize profit, instead of just reacting to them (a change of motivation)
  2. suddenly extrapolate from its domain of knowledge (reacting to input signals) into a completely different, orthogonal domain (how to affect the things generating those signals)
  3. suddenly acquire some kind of hook-ups to the rest of the world such that it can understand the cause-and-effect between those hook-ups and the events that generate the input signals, and apply those hook-ups to make such changes (agency)

It takes human beings between 15 and 20 years to learn even the rudiments of how to get from that reactive baseline (step 0) to step 3, and we have a whole society whose only goal is to teach us to do just that.
How would an “AI” whose goal is to stay in a box and maximize profit for its owners somehow make those jumps at all, much less on a time scale where it wouldn’t be obvious what’s going on?
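As a sketch of why, here is roughly what the boxed trading “AI” above amounts to (all names and signals are hypothetical, invented for this example): its motivation is a fixed reward wired in by its owners, its inputs are the signals, and its only output is an order. There is no code path through which it could rewrite its own objective or reach the systems generating those signals.

```python
# Hypothetical boxed trading "AI" (all names and signals invented here).
# Its motivation is a fixed reward set by its owners: sell_price minus
# buy_price. Its inputs are signals; its only output is an order string.

def decide(signals, position, threshold=0.0):
    """Purely reactive policy: map today's input signals to one action."""
    score = sum(signals.values())   # e.g. news sentiment plus momentum
    if score > threshold and position <= 0:
        return "BUY"                # buy low...
    if score < -threshold and position > 0:
        return "SELL"               # ...sell high, the wired-in motivation
    return "HOLD"

# Nothing in this loop can rewrite the objective or touch the systems that
# generate the signals; jumps 1 to 3 above have no hook to happen through.
action = decide({"news": 0.4, "momentum": 0.2}, position=0)
```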

Work on more general AI, the type that could at least theoretically go rogue, is very different from everyday game AI. Even more importantly, each new AI does not retain the “memory” of earlier generations. Conflict is not inherent in the underlying science, nor is each new version just the last one with additional stuff on top.

Being afraid of a future AI going rogue because existing game AIs are violent is like fearing your child will become violent because another child it will never meet has just learned to fight.

Seeing an Arowx thread like this almost made me think I was on the Unity boards. :wink:

Compare the difference in community responses!

See for yourself -> https://forum.unity3d.com/threads/ai-is-making-big-strides-but-should-we-be-wary-of-bringing-ai-into-our-games.456996/

Kudos to Unreal’s forum for less knee-jerk negativity!

What if donkeys can fly one day?

Would you be wanting to fly one?

Who wouldn’t?

Only if it breathes fire!

Too late. That has already happened :smiley:

History Lesson:

What if I told you the RAF cancelled an AI fighter-pilot program because the AIs were too good, and that the technology was based on Norns, the cute, teachable, evolvable game AI from Creatures (circa 1998)?

Norns used a mix of genetic algorithms and neural network technology to make trainable, evolvable digital pets.

And they learned to fly fighter jets in a simulator, against real pilots.
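For anyone curious what “a mix of genetic and neural network technology” looks like in practice, here is a generic neuroevolution sketch (Python, illustrative only; this is not Creatures’ actual code, and the fitness task is invented): a genome is a flat list of weights for a tiny neural network, and a genetic algorithm selects and mutates genomes.

```python
# Generic neuroevolution sketch (illustrative only, not Creatures' code;
# the fitness task is invented). A genome is a flat list of weights for a
# tiny neural network; a genetic algorithm selects and mutates genomes.
import math
import random

def forward(weights, inputs):
    """One tiny neuron: weighted sum of inputs squashed through tanh."""
    return math.tanh(sum(w * x for w, x in zip(weights, inputs)))

def fitness(weights):
    """Made-up task: the output should match the sign of the first input."""
    cases = [([1.0, 0.2], 1.0), ([-1.0, 0.3], -1.0)]
    return -sum((forward(weights, i) - t) ** 2 for i, t in cases)

def evolve(pop_size=20, generations=50, mutation=0.1):
    pop = [[random.gauss(0, 1) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]                        # selection
        children = [[w + random.gauss(0, mutation) for w in p]  # mutation
                    for p in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```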

So these guys

[image: creatures.jpg]

defeated these guys

[image: pilot-tsk-uc-085720QL.jpg]

So do you think game AI could be misused and become a dangerous thing?