Science & Technology
Scientists pledge to implement AI safeguards
By T.K. Randall · January 13, 2015
Could intelligent machines pose a risk to our future? Image Credit: sxc.hu
Several big names have signed a letter aimed at ensuring that AI research remains beneficial to mankind.
Professor Stephen Hawking recently outlined the dangers of artificial intelligence, suggesting that if a computer were ever to become self-aware, its capabilities could very quickly supersede those of its creators and the whole of humanity would be at risk.
Eager to avoid such a fate, the celebrated physicist joined several other prominent figures such as SpaceX CEO Elon Musk in signing a letter aimed at ensuring that AI platforms are kept under control and that the technology is only ever used to benefit humanity.
The project was spearheaded by the Future of Life Institute which aims to identify and address possible risks to human civilization. The institute stated that there was now a "broad consensus" that the development of artificial intelligence was growing steadily and that as time goes by it would have an increasingly significant impact on our society.
The dangers of AI are perhaps best demonstrated through science fiction movies such as The Matrix and The Terminator, which depict scenarios in which hostile, intelligent machines have effectively wiped out mankind and taken the planet for themselves.
"Our AI systems must do what we want them to do," the group wrote in the letter.
Source: BBC News
Tags: Stephen Hawking, Artificial Intelligence