Science & Technology
'Ethical' robot makes deadly decisions
By T.K. Randall · September 21, 2014
Isaac Asimov suggested that robots should be given ethical constraints. Image Credit: IEEE Spectrum
A robot programmed to save people from an imminent threat turned out to be worryingly indecisive.
Researchers at Bristol University set up an experiment in which a robot was programmed with the specific task of preventing other robots from falling into a hole.
The concept was based on exploring robot ethics as outlined by author Isaac Asimov, who suggested that robots be governed by a set of moral principles requiring them to prevent any human from coming to harm as a result of their own inaction.
When the research team ran the experiment, they found that the robot tasked with saving the others only managed to do so around half the time. The problem stemmed from the fact that when two robots needed to be rescued at once, the rescuer dithered for too long over which one to help, with the result that both fell down the hole.
"It notices one human robot, starts toward it but then almost immediately notices the other," said roboticist Alan Winfield. "It changes its mind. And the time lost dithering means the Asimov robot cannot prevent either robot from falling into the hole."
The experiment mirrors concerns raised about the decision-making of self-driving cars, which will be required to keep both their own passengers and other road users safe from harm.
Source: Yahoo! News