
Science & Technology

Scientists pledge to implement AI safeguards

By T.K. Randall
January 13, 2015 · 41 comments

Could intelligent machines pose a risk to our future? Image Credit: sxc.hu
Several big names have signed a letter aimed at ensuring that AI research remains beneficial to mankind.
Professor Stephen Hawking recently outlined the dangers of artificial intelligence, suggesting that if a computer were ever to become self-aware, its capabilities could quickly surpass those of its creators and put the whole of humanity at risk.

Eager to avoid such a fate, the celebrated physicist joined several other prominent figures such as SpaceX CEO Elon Musk in signing a letter aimed at ensuring that AI platforms are kept under control and that the technology is only ever used to benefit humanity.
The project was spearheaded by the Future of Life Institute, which aims to identify and address possible risks to human civilization. The institute stated that there was now a "broad consensus" that artificial intelligence research was progressing steadily and that its impact on society would only continue to grow.

The dangers of AI are perhaps best illustrated by science fiction films such as The Matrix and The Terminator, which depict scenarios in which hostile, intelligent machines have effectively wiped out mankind and claimed the planet for themselves.

"Our AI systems must do what we want them to do," the group wrote in the letter.

Source: BBC News | Comments (41)




Recent comments on this story
#32 Posted by cyclopes500 10 years ago
What I'd like is a Gremlin like in the film. Not Gizmo, the green one. Letter-box size would be ideal. In packs of 4. Green rubber body, big ears, razor-sharp teeth, advanced AI brain programmable to wreck the joint belonging to the bloke or bird you've had a bust-up with while they're out, or give them a wake-up call they'll never forget. It'd be the perfect early Christmas morning surprise for anybody. Buy a case of them at the local bulk buy and you could send them as gifts to the local police, law courts, council offices, lawyers, ex-wife or husband, ex-boyfriend or girlfriend, teachers, headmaster...
#33 Posted by StarMountainKid 10 years ago
I can envision future AI robots cooking up human embryos to serve them.
#34 Posted by aquatus1 10 years ago
Why, exactly, would finding humans flawed and obsolete lead to the destruction of humanity? I find any number of people flawed and obsolete. I tend to ignore them. I have a select few that I actively dislike and consider little more than obstacles. There is even a tiny fraction of people I have known throughout my life whom I believe would most benefit the world by departing it. Yet I haven't found it worth the effort to take any action against any of them, let alone genocide. Why would a robot? What would be the benefit? I tend to think that intelligent robots would tolerate humans ...
#35 Posted by ChaosRose 10 years ago
This is one of the few hypothetical doomsday scenarios that I actually take seriously. If I were a non-human intelligence, I would certainly consider us a threat.
#36 Posted by aquatus1 10 years ago
Why?
#37 Posted by Junior Chubb 10 years ago
Speaking as devil's advocate on this, because IMO AI is BS... Why are we a threat to an AI? We can theoretically pull the plug on it. But then this is a purely hypothetical situation, so you could argue that we may not be able to pull the plug on it, but in my scenario we can.
#38 Posted by ChaosRose 10 years ago
Why? We're dangerous to ourselves, every other living thing on the planet, the planet itself, and even beyond that.
#39 Posted by aquatus1 10 years ago
We're dangerous to ourselves, every other living thing on the planet, the planet itself, and even beyond that. Why would a computer care about that?
#40 Posted by nothinglizx2 9 years ago
Then please tell the monkeys not to evolve.
#41 Posted by Rlyeh 9 years ago
In my little science fiction stories, after the Biologicals defeated the Cyberoids, a law was passed by the Galactic Council that no AI could be more intelligent than the most intelligent biological Galactic species. What is stopping a biological intelligence from being dangerous?

