Meet Norman, the psychopathic AI

37 posts in this topic


Still Waters

Norman is an algorithm trained to understand pictures but, like its namesake Hitchcock's Norman Bates, it does not have an optimistic view of the world.

When a "normal" algorithm generated by artificial intelligence is asked what it sees in an abstract shape, it chooses something cheery: "A group of birds sitting on top of a tree branch."

Norman sees a man being electrocuted.

And where "normal" AI sees a couple of people standing next to each other, Norman sees a man jumping from a window.

The psychopathic algorithm was created by a team at the Massachusetts Institute of Technology, as part of an experiment to see what training AI on data from "the dark corners of the net" would do to its world view.

http://www.bbc.co.uk/news/technology-44040008

Piney

NICE!.....Some knucklehead will think it would be "cute" to install it in a military mech........

  • Like 4

moonman

Well obviously if you teach it with nothing but violent images (which is what they did), it has nothing to pull from but violence. 

This is just common sense.

  • Like 3

danydandan

Aren't all AIs psychopathic, as they can't feel empathy? In fact, they can't feel anything.

  • Like 4
  • Haha 1

Lilly
2 hours ago, danydandan said:

Aren't all AIs psychopathic, as they can't feel empathy? In fact, they can't feel anything.

Absolutely, without human emotion AI is definitively psychopathic (it has to be). Even if the programmers 'showed' AI flowers and butterflies and super peaceful images...it would still be psychopathic. Images of sweet little puppies and kitties have no more intrinsic 'feeling' for AI than images of mass murder and torture have. 

IMO AI development is a big 'no no'...I'd pull the plug on all of it. 

  • Like 2
  • Thanks 1

danydandan
Posted (edited)
12 minutes ago, Lilly said:

Absolutely, without human emotion AI is definitively psychopathic (it has to be). Even if the programmers 'showed' AI flowers and butterflies and super peaceful images...it would still be psychopathic. Images of sweet little puppies and kitties have no more intrinsic 'feeling' for AI than images of mass murder and torture have. 

IMO AI development is a big 'no no'...I'd pull the plug on all of it. 

I think specific task based AI is very important in different ways. From data mining to medical diagnosis.

Edited by danydandan
  • Like 2

Piney
13 minutes ago, danydandan said:

I think specific task based AI is very important in different ways. From data mining to medical diagnosis.

Medical diagnosis also needs human common sense. 

  • Like 2

Lilly
30 minutes ago, danydandan said:

I think specific task based AI is very important in different ways. From data mining to medical diagnosis.

I'm still hesitant...in everything there exists a 'risk/benefit' element. I'm just not convinced that the development of machines that can 'think' is the way to go. 

  • Like 1

danydandan
Posted (edited)
56 minutes ago, Lilly said:

I'm still hesitant...in everything there exists a 'risk/benefit' element. I'm just not convinced that the development of machines that can 'think' is the way to go. 

AI and Machine Learning are two different concepts. AI doesn't think; it's bound by its code, whereas a learning machine would be able to evolve its code.

Specifically, for cancer diagnoses that rely on visual evidence, an AI would be very beneficial. It would lead to fewer errors.
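The distinction being drawn here (behaviour fixed by code versus behaviour that changes as the system learns from data) can be sketched roughly like this — a hypothetical toy example, not anyone's actual system:

```python
# A fixed-rule "AI": its behaviour is entirely determined by its code.
def rule_based(text):
    return "spam" if "win money" in text else "ok"

# A learning system: its behaviour changes as weights are updated from data.
class Perceptron:
    def __init__(self):
        self.weights = {}

    def predict(self, words):
        score = sum(self.weights.get(w, 0) for w in words)
        return 1 if score > 0 else 0

    def update(self, words, label):
        # Shift the weights toward the observed label.
        err = label - self.predict(words)
        for w in words:
            self.weights[w] = self.weights.get(w, 0) + err

p = Perceptron()
p.update(["win", "money"], 1)   # after one example, its behaviour has changed
```

The rule-based function will answer the same way forever; the perceptron's answers depend on what it has been fed, which is the crux of the thread's argument.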

Edited by danydandan

Lilly
1 hour ago, danydandan said:

AI and Machine Learning are two different concepts. AI doesn't think; it's bound by its code, whereas a learning machine would be able to evolve its code.

 

A very important distinction, thanks. If kept within the restraints of its code, then AI (especially when applied to medical diagnosis) seems alright. Guess I'm just the 'worrying type'. 

moonman
Posted (edited)

Did anybody actually read the article? The only reason the program outputs such horrible things is because that's all they trained it with. They fed it nothing but pictures of death, war, murder and violence, that's all it knows and that's the only database it has to pull from.

It was created to be psychopathic, it didn't "turn" psychopathic. You cannot judge the motivations of AI based on this case.

Edited by moonman
  • Thanks 1

danydandan
15 minutes ago, moonman said:

Did anybody actually read the article? The only reason the program outputs such horrible things is because that's all they trained it with. They fed it nothing but pictures of death, war, murder and violence, that's all it knows and that's the only database it has to pull from.

It was created to be psychopathic, it didn't "turn" psychopathic. You cannot judge the motivations of AI based on this case.

AIs don't have motivations or feelings; they are completely bound to their programmed functions.

moonman
36 minutes ago, danydandan said:

AIs don't have motivations or feelings; they are completely bound to their programmed functions.

Exactly.

  • Like 1

Lilly

Yeah, as if they really needed an experiment to tell us this.

They had me convinced way back at: "HAL open up the pod bay door". 

  • Like 1
  • Thanks 1
  • Haha 2

Orphalesion
Just now, Lilly said:

Yeah, as if they really needed an experiment to tell us this.

They had me convinced way back at: "HAL open up the pod bay door". 

I know, right?

Next they might want to do an experiment on whether putting your hand on a hot stove top hurts....
 

  • Like 2

Piney
1 minute ago, Lilly said:

Yeah, as if they really needed an experiment to tell us this.

They had me convinced way back at: "HAL open up the pod bay door". 

Yup, HAL's actions said it all to me. :tu:

seanjo

AI will do whatever it is programmed to do, like Humans.

NicoletteS

Not psychopathic... just seeing the images it's been programmed to. Stupid how they try to embellish on that. 

cyclopes500

Why not get it to watch 2001: A Space Odyssey and then tell it to drive a bus?

Gecks

Can we really consider this artificial INTELLIGENCE if its output is completely reliant on its base programming?

Guyver

If it’s not a high-speed asteroid, it will be AI that takes us out.  

Rolci

Sure is funny to read all the comments here talking about AI doing what it's programmed to do. There is a reason it's called AI and not simply "software". AIs learn. AIs can teach themselves. AIs can program themselves. And what about biotechnological applications, AI infused with neural networks? Do these sound like things that do what they're programmed to do?

  • Thanks 1

Seti42

 "the culprit is often not the algorithm itself, but the biased data that was fed to it."

That's exactly how humans become bigoted jerks who post alt-right garbage all over the internet.

  • Like 1

danydandan
Posted (edited)
1 hour ago, Rolci said:

Sure is funny to read all the comments here talking about AI doing what it's programmed to do. There is a reason it's called AI and not simply "software". AIs learn. AIs can teach themselves. AIs can program themselves. And what about biotechnological applications, AI infused with neural networks? Do these sound like things that do what they're programmed to do?

AIs can't teach themselves or program themselves. You're confusing AI with learning machines.

Here are a couple of good articles on the topic:

https://www.forbes.com/sites/theyec/2018/06/12/artificial-intelligence-machine-learning-and-the-future-of-connection/#2f76cd3f6a69

https://medium.com/iotforall/the-difference-between-artificial-intelligence-machine-learning-and-deep-learning-3aa67bff5991

Edited by danydandan

moonman
Posted (edited)
23 hours ago, NicoletteS said:

Not psychopathic... just seeing the images it's been programmed to. Stupid how they try to embellish on that. 

Exactly. It's sad how many people read this bad article and think "OMG AI is evil" without understanding the basics of what is happening.

Garbage in, garbage out. It's that simple. They fed it nothing but garbage.
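"Garbage in, garbage out" can be shown with a toy sketch: a caption model whose only vocabulary is whatever appeared in its training captions. This is a hypothetical illustration, not MIT's actual Norman model:

```python
from collections import Counter

def train(captions):
    # The model's entire vocabulary comes from its training captions.
    vocab = Counter()
    for caption in captions:
        vocab.update(caption.lower().split())
    return vocab

def describe(vocab, n=4):
    # "Caption" any input by emitting the most frequent training words:
    # the model can only ever describe what it was shown.
    return " ".join(word for word, _ in vocab.most_common(n))

grim_captions = ["man is electrocuted", "man jumps from a window", "man is shot dead"]
model = train(grim_captions)
print(describe(model))  # every output is built from grim vocabulary only
```

Feed it nothing but violent captions and every description it can ever produce is violent — not because it "turned" psychopathic, but because there is nothing else in its vocabulary to draw on.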

Edited by moonman

