Could we end up in a world dominated by sentient machines? Image Credit: CC BY 2.0 Dick Thomas Johnson
General Paul Selva has spoken out about the dangers of creating fully autonomous weapon systems.
The risks of giving autonomy to so-called 'killer machines' have been highlighted time and again over the years in movies such as 'The Terminator' and 'The Matrix', but could an intelligent computer ever truly decide that humanity is a threat and attempt to wipe us off the planet?
During a recent Senate Armed Services Committee hearing, US General Paul Selva stressed the importance of maintaining human control over systems capable of killing other people "lest we unleash on humanity a set of robots that we don't know how to control."
"I don't think it's reasonable for us to put robots in charge of whether or not we take a human life," he said. "There will be a raucous debate in the department about whether or not we take humans out of the decision to take lethal action."
It's a concern that has been raised many times by SpaceX CEO Elon Musk, who has warned that intelligent machines represent a "fundamental risk to the existence of civilization."
"I have access to the very most cutting edge AI, and I think people should be really concerned about it," he said. "AI is a rare case where I think we need to be proactive in regulation instead of reactive."
"Because I think by the time we are reactive in AI regulation, it's too late."
Source: IB Times