
Robots shut down after creating own language



7 minutes ago, Timonthy said:

What reason would there be to be afraid? They're trying to make money and their programming wasn't working...

That's like manufacturing a product, finding a fatal flaw, but continuing without fixing it, knowing that you won't be able to sell it.

It was working. They just forgot to tell it they wanted to be able to understand the negotiations.


Hang on, but why would they be scared?

21 minutes ago, ChaosRose said:

A likely story. Lol. 

Why is it unlikely? Why would they be scared, rather than the programs simply not doing the work required?


7 minutes ago, Timonthy said:

Hang on, but why would they be scared?

Why is it unlikely? Why would they be scared, rather than the programs simply not doing the work required?

Again...they were doing the work required. They were completing negotiations. It says so right in the article. 

The problem was that they forgot to tell the AI they wanted to understand the negotiations. 

You may not be creeped out by the fact that AI created its own code that the humans did not understand, but that makes anything possible, doesn't it? 

Once you don't know what the AI is saying, it could be saying anything. 

Edited by ChaosRose

Kind of ironic, since Musk and Zuck have been sparring over AI, and now this. Guess Zuck got a bit of a wake-up call.

Edited by WoIverine

4 minutes ago, ChaosRose said:

Again...they were doing the work required. They were completing negotiations. It says so right in the article. 

The problem was that they forgot to tell the AI they wanted to understand the negotiations. 

You may not be creeped out by the fact that AI created its own code that the humans did not understand, but that makes anything possible, doesn't it? 

Well, I haven't read the paper, and I must have missed it in the article; do we know if they forgot/overlooked it, or if this was just an iteration of the program they wanted to test, allowing them to develop shorthand if they wanted?

I think it's pretty cool, actually. We don't understand that conversation, but had they let it run, who knows whether they would have been able to decipher it.

Yeah, anything is possible. Give them access to the internet and some highly advanced programming, teach them a bit about code breaking/hacking and other sinister stuff, and then it would be truly scary to see what happens.


2 minutes ago, Timonthy said:

Well, I haven't read the paper, and I must have missed it in the article; do we know if they forgot/overlooked it, or if this was just an iteration of the program they wanted to test, allowing them to develop shorthand if they wanted?

I think it's pretty cool, actually. We don't understand that conversation, but had they let it run, who knows whether they would have been able to decipher it.

Yeah, anything is possible. Give them access to the internet and some highly advanced programming, teach them a bit about code breaking/hacking and other sinister stuff, and then it would be truly scary to see what happens.

The article states this...

The robots had been instructed to work out how to negotiate between themselves, and improve their bartering as they went along. But they were not told to use comprehensible English, allowing them to create their own "shorthand", according to researchers.

The company chose to shut down the chats because "our interest was having bots who could talk to people", researcher Mike Lewis told FastCo. 

So I think it was an error on their part not to make it clear they wanted to be able to understand the negotiations. 
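
For what it's worth, the drift into "shorthand" is what you'd expect from the setup the article describes: the bots were rewarded only for how good their deals were, with nothing tying their wording to readable English. Here's a minimal, hypothetical Python sketch of that idea; the names (Deal, task_reward, english_penalty, anchor_weight) are invented for illustration and this is not Facebook's actual code.

```python
# Hypothetical sketch of why self-play negotiation can drift away from English.
# Only the value of the final deal is optimised; an "anchor" term that would keep
# the wording close to human English is left out (anchor_weight = 0), so
# shorthand is never penalised. All names here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Deal:
    books: int  # items this agent ended up with
    hats: int
    balls: int

def task_reward(deal: Deal, values: dict) -> int:
    """Points the agent scores for the items it won (the thing training optimises)."""
    return (deal.books * values["books"]
            + deal.hats * values["hats"]
            + deal.balls * values["balls"])

def english_penalty(dialogue: list, vocabulary: set) -> int:
    """Stand-in for a language-model term: count tokens that don't look like
    ordinary negotiation English."""
    return sum(1 for utterance in dialogue
               for token in utterance.split()
               if token not in vocabulary)

def training_score(deal, values, dialogue, vocabulary, anchor_weight=0.0):
    # With anchor_weight = 0, only the deal matters: "i i can i i i everything
    # else" scores exactly the same as plain English, as long as the deals improve.
    return task_reward(deal, values) - anchor_weight * english_penalty(dialogue, vocabulary)

if __name__ == "__main__":
    values = {"books": 1, "hats": 3, "balls": 0}
    vocab = {"i", "can", "have", "the", "you", "take", "hats", "balls", "books"}
    shorthand = ["i i can i i i everything else"]
    plain = ["i can have the hats you take the balls"]
    deal = Deal(books=0, hats=2, balls=0)
    print(training_score(deal, values, shorthand, vocab))  # 6
    print(training_score(deal, values, plain, vocab))      # 6 - same score, shorthand never punished
```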


1 hour ago, seanjo said:

No, no it really hasn't...

Access granted...

 


1 hour ago, seanjo said:

I have devised a couple of ways to shut down a malfunctioning AI computer, one is pull the fricking plug out, two is more radical and involves a sledgehammer...

 

 

A sledgehammer probably won't even destroy your hard drive. 


Don't you think smart AI would have duplicated itself all over the web before the average person realized they had better pull the plug?


42


AI killing us all off? We need to program a super failsafe-type device that only allows it to kill off the leech people. Then and only then will I think robots are useful.


I thought this was funny and thought it might be fake, and it is.

 

Edited by The Silver Thong

Ummm, reading the writing on the wall, I think it's obvious what they're talking about, although I'm not sure the rules would allow it to be said here. Alice says she doesn't need those things, they mean nothing to her, and Bob says he can give her everything else. Quite obvious, to stop beating around the bush about it.


"invents its own language" oh gee whiz. I say "hog Wash".... I know, know, know what that means. The hogs need to be washed. FB just looking for a little free advertising.


10 minutes ago, UFOwatcher said:

"invents its own language"

That's FB language. Translation: "The IT guys screwed it up!"


From the BBC - The 'creepy Facebook AI' story that captivated the media

Quote

But Facebook's system was being used for research, not public-facing applications, and it was shut down because it was doing something the team wasn't interested in studying - not because they thought they had stumbled on an existential threat to mankind.

It's important to remember, too, that chatbots in general are very difficult to develop.

http://www.bbc.co.uk/news/technology-40790258

 


6 hours ago, ChaosRose said:

Again...they were doing the work required. They were completing negotiations. It says so right in the article. 

The problem was that they forgot to tell the AI they wanted to understand the negotiations. 

You may not be creeped out by the fact that AI created its own code that the humans did not understand, but that makes anything possible, doesn't it? 

Once you don't know what the AI is saying, it could be saying anything. 

It was also an experiment and the AI did what it was asked.

Since it was an experiment, nobody freaked out like the headline suggested.

A few other, more reputable sites have chimed in.


11 minutes ago, BeastieRunner said:

It was also an experiment and the AI did what it was asked.

Since it was an experiment, nobody freaked out like the headline suggested.

A few other, more reputable sites have chimed in.

I don't find that article - and the Facebook PR response - entirely credible.

Firstly, why shut it down? Why not allow it to develop to see what happens while - in the background - they prepare a revised software program? Were they worried about the electricity bill?

Secondly... why hasn't it been restarted?

Edited by RoofGardener

10 minutes ago, BeastieRunner said:

It was also an experiment and the AI did what it was asked.

Since it was an experiment, nobody freaked out like the headline suggested.

A few other, more reputable sites have chimed in.

Ya think folks'll be like...everybody quick! Run around like your hair's on fire!!!

Lol. No. They're gonna say no worries. Even if it's slightly unnerving. 


I've just finished a great book titled "The White Plague". The story describes an AI becoming self-aware and then setting out to create a better world. In order to make the world better, it interprets its mission as looking at the world as a whole, and it can see that the human race is causing serious harm to the ecology of the planet through population growth and the huge amount of resources the human population requires. The AI creates a biotech firm and a nanotech firm and develops a "flu vaccine" to counter a perceived future flu outbreak. The vaccine is eventually administered, and within a short period 80% of the human population drops dead and the remainder are mostly sterile. On a brighter note, among the 80% that dropped dead there was a disproportionately high number of criminals, religious zealots, the terminally ill and the elderly. Quite a nice place to live after the white plague.


She just clearly did not want the balls! Maybe if Bob had bought her flowers and took her out for dinner, maybe a movie....


1 hour ago, seanjo said:

You've been watching too many movies.

LOL. I cannot disagree with you there. Way too much horror/sci-fi over a lifetime. 


2 hours ago, RoofGardener said:

I don't find that article - and the Facebook PR response - entirely credible.

(1) Firstly, why shut it down? Why not allow it to develop to see what happens while - in the background - they prepare a revised software program? Were they worried about the electricity bill?

(2) Secondly... why hasn't it been restarted?

(1) Because they're a business trying to make money and it wasn't doing what they wanted. 

(2) I don't know where to find their paper (it was published over a month ago), but that might explain whether they've stopped or were just reprogramming before going again. No articles have mentioned whether they have restarted it.


9 hours ago, ChaosRose said:

The problem was that they forgot to tell the AI they wanted to understand the negotiations. 

You're correct here; I found a couple of other articles which stated that one of the programmers later realized their error.

