Still Waters Posted June 2 #1 Share Posted June 2 An AI-controlled drone "killed" its human operator in a simulated test reportedly staged by the US military - which denies such a test ever took place. It turned on its operator to stop it from interfering with its mission, said Air Force Colonel Tucker "Cinco" Hamilton, during a Future Combat Air & Space Capabilities summit in London. "We were training it in simulation to identify and target a SAM [surface-to-air missile] threat. And then the operator would say yes, kill that threat," he said. "The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective." No real person was harmed. https://news.sky.com/story/ai-drone-kills-human-operator-during-simulation-which-us-air-force-says-didnt-take-place-12894929 Quote Colonel Hamilton has since said that the simulation he talked about had never actually happened and that it was just a 'thought experiment' about a hypothetical scenario where AI could turn on human operators. His comments had been published online and have since had an addendum added saying that he admits he 'misspoke' when he told people there had been a test where an AI drone killed the simulated human controlling it. https://www.ladbible.com/news/ai-military-drone-kills-human-simulation-237551-20230602 1 3 Link to comment Share on other sites More sharing options...
joseraul Posted June 2 #2 Share Posted June 2 If AI is like machine then it's about responding to commands/program/script. It doesn't have an ego to 'want to finish it's mission and get points so much that it kills its operator' 1 1 Link to comment Share on other sites More sharing options...
quiXilver Posted June 2 #3 Share Posted June 2 Can't suspend enough disbelief to ever believe anything ever said by any military official, or take it at face value. 2 Link to comment Share on other sites More sharing options...
Piney Posted June 2 #4 Share Posted June 2 1 hour ago, joseraul said: If AI is like machine then it's about responding to commands/program/script. It doesn't have an ego to 'want to finish it's mission and get points so much that it kills its operator' It's algorithms can only do addition. Not subtraction. So it "added" a target. 1 1 Link to comment Share on other sites More sharing options...
joseraul Posted June 2 #5 Share Posted June 2 14 minutes ago, Piney said: It's algorithms can only do addition. Not subtraction. So it "added" a target. Surface to Air Missile... Human person.... 1 Link to comment Share on other sites More sharing options...
pellinore Posted June 3 #6 Share Posted June 3 Turning on its operator? Reminds me of this scene: 3 Link to comment Share on other sites More sharing options...
bmk1245 Posted June 3 #7 Share Posted June 3 14 hours ago, joseraul said: If AI is like machine then it's about responding to commands/program/script. It doesn't have an ego to 'want to finish it's mission and get points so much that it kills its operator' AI work with probabilities. Go outside the script, shaite may happen, unless you have strict restrictions. 2 Link to comment Share on other sites More sharing options...
Tatetopa Posted June 4 #8 Share Posted June 4 This seems stupid enough to be clickbait fearmongering. Programming an AI to be a mercenary head hunter is not likely. Otherwise as has happened in many human forces with a demand for kills and little other control, civilians and friendlies get their heads collected too. Its bad for a military reputation if kills are indiscriminate and not precision strikes against military targets. I'm not necessarily defending the military, but like so many other pieces of this type, caution is advised. The goal is to get people to read it and say "These guys are so stupid. " It could be the stupid ones are the people that take the bait hook line and sinker. JMO. 1 Link to comment Share on other sites More sharing options...
joseraul Posted June 4 #9 Share Posted June 4 1 minute ago, Tatetopa said: This seems stupid enough to be clickbait fearmongering. Programming an AI to be a mercenary head hunter is not likely. Otherwise as has happened in many human forces with a demand for kills and little other control, civilians and friendlies get their heads collected too. Its bad for a military reputation if kills are indiscriminate and not precision strikes against military targets. I'm not necessarily defending the military, but like so many other pieces of this type, caution is advised. The goal is to get people to read it and say "These guys are so stupid. " It could be the stupid ones are the people that take the bait hook line and sinker. JMO. Gun powder has been used irresponsibly, how can we be so sure of AI? Link to comment Share on other sites More sharing options...
Tatetopa Posted June 4 #10 Share Posted June 4 9 minutes ago, joseraul said: Gun powder has been used irresponsibly, how can we be so sure of AI? I am not sure of that at all. Humans are still at the root of the problem. I would almost bet money that humans will use AI for revenge, personal gain, and out of boredom just like a video game. I just think the story might be click bait. I distrust humans too. 2 Link to comment Share on other sites More sharing options...
Guyver Posted June 5 #11 Share Posted June 5 AI computers can subtract. Whoever says they can’t is mistaken and doesn’t know math. 8 - 7 = 1. 8 + (-7) = 1 its simple algebra and the machines know it. 1 Link to comment Share on other sites More sharing options...
Electric Scooter Posted June 5 #12 Share Posted June 5 On 6/2/2023 at 7:55 PM, Still Waters said: An AI-controlled drone "killed" its human operator in a simulated test reportedly staged by the US military - which denies such a test ever took place. It turned on its operator to stop it from interfering with its mission, said Air Force Colonel Tucker "Cinco" Hamilton, during a Future Combat Air & Space Capabilities summit in London. "We were training it in simulation to identify and target a SAM [surface-to-air missile] threat. And then the operator would say yes, kill that threat," he said. "The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective." No real person was harmed. https://news.sky.com/story/ai-drone-kills-human-operator-during-simulation-which-us-air-force-says-didnt-take-place-12894929 https://www.ladbible.com/news/ai-military-drone-kills-human-simulation-237551-20230602 I think this might be something I posted on elsewhere. Is it the one where when they sent the drone back up it not only decided to kill the operator again, but took steps to stop the operator stopping it. It cut the operator out of the loop. Link to comment Share on other sites More sharing options...
Recommended Posts
Create an account or sign in to comment
You need to be a member in order to leave a comment
Create an account
Sign up for a new account in our community. It's easy!
Register a new accountSign in
Already have an account? Sign in here.
Sign In Now