Still Waters Posted June 4 #1 Share Posted June 4 (IP: Staff) · There are calls for more public education surrounding the future role of artificial intelligence, amid claims that many people's fears are based on films. Rashik Parmar, chief executive of BCS, The Chartered Institute for IT, said Hollywood blockbusters like Terminator and Ex Machina had "ingrained" public concerns about AI. His words came after a letter was released by the San Francisco-based Centre For AI Safety warning the technology could wipe out humanity and the risk should be treated with the same urgency as pandemics or nuclear war. Mr Parmar said: "There should be a healthy scepticism about big tech and how it is using AI, which is why regulation is key to winning public trust. https://news.sky.com/story/terminator-and-other-sci-fi-films-blamed-for-publics-concerns-about-ai-12895427 1 Top Link to comment Share on other sites More sharing options...
quiXilver Posted June 4 #2 Share Posted June 4 I wonder how many of them have read the source material that was the inspiration for that film and the others... Link to comment Share on other sites More sharing options...
Electric Scooter Posted June 4 #3 Share Posted June 4 33 minutes ago, Still Waters said: There are calls for more public education surrounding the future role of artificial intelligence, amid claims that many people's fears are based on films. Rashik Parmar, chief executive of BCS, The Chartered Institute for IT, said Hollywood blockbusters like Terminator and Ex Machina had "ingrained" public concerns about AI. His words came after a letter was released by the San Francisco-based Centre For AI Safety warning the technology could wipe out humanity and the risk should be treated with the same urgency as pandemics or nuclear war. Mr Parmar said: "There should be a healthy scepticism about big tech and how it is using AI, which is why regulation is key to winning public trust. https://news.sky.com/story/terminator-and-other-sci-fi-films-blamed-for-publics-concerns-about-ai-12895427 I cannot provide a source for this, it was on one of Simon Wheelers YouTube videos possibly on Today I Found Out. Anyway, the US has been testing an AI controlled stealth UAV and last week it decided to attack its command centre. The experts said the AI decided an attempt by the command centre to return it home was perceived as a threat. Now that wasn`t an issue, there were humans in the loop to stop it killing everyone. But when they sent the drone back up and it received the order to return home again then it remembered what happened the first time. So, still perceiving the command as a threat, it decided again to attack the command centre but this time altered itself to remove the humans from the loop. This is the problem when creating an AI in charge of a weapons platform. It needs to decide if it has been compromised by enemy humans and therefore if it should ignore their commands. But as humans originally setup the AI then what happens if they make a mistake? Or the enemy finds a way to trick the AI? Link to comment Share on other sites More sharing options...
Recommended Posts
Create an account or sign in to comment
You need to be a member in order to leave a comment
Create an account
Sign up for a new account in our community. It's easy!
Register a new accountSign in
Already have an account? Sign in here.
Sign In Now