iAlrakis Posted June 3 #101
After doing some more reading about this, it seems AI could kill us, but only if we allow it. Meaning we need a set of rules for it to follow, just like Asimov came up with rules for robots. I'm not sure the article/test was real, but it was a great example anyway. Say we train an AI to fly drones and make sure the enemy is killed. Now add a real pilot who does something that, in the AI's calculations, lowers the chance of killing the enemy. How will it respond? Will it kill the pilot? Better safe than sorry, right? But in general I think this is more about clickbait and getting attention for a technology you're working on than about a real threat.
Alchopwn Posted June 4 #102
Someone probably needs to mention the Roko's Basilisk thought experiment, if we haven't already.
joc Posted June 4 #103
4 hours ago, Alchopwn said: Someone probably needs to mention the Roko's Basilisk thought experiment, if we haven't already.
Thanks for mentioning that. I've never heard of it... but AI has, and so it is already part of its curriculum. I don't know, I see the danger of AI as somewhat analogous to our highway system. One could say the highways are deadly; in reality it is the stupidity of the people driving that is deadly. In other words, AI taking over our lives goes hand in hand with everything else we allow to take over our lives: phones, cars, TVs, etc. In the end it isn't AI that is dangerous. It is our own stupidity that makes AI dangerous.
Alchopwn Posted June 5 #104
10 hours ago, joc said: In the end it isn't AI that is dangerous. It is our own stupidity that makes AI dangerous.
JEEZUS!!! NOW I'M GODDAMNED TERRIFIED!!! HAVE YOU MET PEOPLE???
psyche101 Posted June 5 #105
On 6/3/2023 at 9:26 AM, Guyver said: Adding a negative is subtracting.
It's adding a negative. It's the long way around, because a computer can't simply subtract: adding is straightforward, whereas to subtract you have to indicate which operand is the minuend and which is the subtrahend.
On 6/3/2023 at 9:26 AM, Guyver said: And yes, they can invent, they have invented language that people don't understand.
No, that was a type of shorthand, not an "invention".
The 'creepy Facebook AI' story that captivated the media
Hence my reference to GOTG3. Rocket was an anomaly because he could invent new things. Nothing else the High Evolutionary created could think of new things, only what already exists.
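To make the "subtracting by adding a negative" point concrete, here is a minimal sketch, assuming 8-bit two's-complement arithmetic (the function names are made up for illustration, not from any post above): the machine negates the subtrahend by inverting its bits and adding one, then performs an ordinary addition.

```python
def twos_complement(value: int, bits: int = 8) -> int:
    """Negate a value by inverting its bits and adding one, within a fixed register width."""
    return (~value + 1) & ((1 << bits) - 1)

def subtract_by_adding(minuend: int, subtrahend: int, bits: int = 8) -> int:
    """Compute minuend - subtrahend using only addition and bit masking."""
    raw = (minuend + twos_complement(subtrahend, bits)) & ((1 << bits) - 1)
    # Re-interpret the top bit as the sign, per the two's-complement convention.
    return raw - (1 << bits) if raw >= (1 << (bits - 1)) else raw

print(subtract_by_adding(9, 4))   # 5
print(subtract_by_adding(4, 9))   # -5
```

Either way the underlying point stands: the hardware never "subtracts" as a separate operation; it adds the negated operand, which is why the minuend and subtrahend have to be identified explicitly.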
Kenemet Posted June 6 #106
Honestly, people, all it takes is a simple power outage. If a transformer blows and power gets shut down (power banks only last so long), the AI loses access to its data. While there's distributed redundancy, it only goes so far and relies on the technology... and some of the web servers out there are not very robust or even very big. And it's easily manipulated by changing its databases.
psyche101 Posted June 7 #107
3 hours ago, Kenemet said: Honestly, people, all it takes is a simple power outage.
Exactly. Or a power surge, or dropping a phase and causing a brownout. Machines are easily broken.
Hawken Posted June 7 #108
On 5/18/2023 at 2:09 AM, and-then said: When minds like those of Hawking and Musk are concerned, that's more than enough to justify being concerned. The real danger is that we cannot stop it now. National militaries will demand the research for weaponization and they'll get it.
Yes, Hawking was also concerned about advertising our whereabouts by sending probes like the Voyagers into space. A hostile species could intercept the signals, come to Earth, and do as they please.
Desertrat56 Posted June 7 #109
10 hours ago, Hawken said: A hostile species could intercept the signals, come to Earth, and do as they please.
A hostile species like us?
Hawken Posted June 7 #110
1 minute ago, Desertrat56 said: A hostile species like us?
That's true. You even see it on these threads.
Guyver Posted June 13 Author #111
So, apparently 60 Minutes did a presentation on AI yesterday. Unfortunately I wasn't able to view it, even though I had it DVR'd, because the golf match ran long (and it was great; good for the Canadians, their guy finally won after something like 64 years), so I missed it. Then I tried to re-record the replay and that didn't work, and I tried to buy it and couldn't. So if anyone did catch that presentation, I'd be interested in hearing your recap and opinion. Thank you.
joc Posted June 13 #112
On 6/6/2023 at 11:23 PM, Hawken said: A hostile species could intercept the signals, come to Earth, and do as they please.
A hostile species has already been coming to Earth for the last 450,000 years: the Anunnaki.
Trelane Posted July 2 #113
On 6/13/2023 at 8:05 AM, joc said: A hostile species has already been coming to Earth for the last 450,000 years: the Anunnaki.
DISCLOSURE!!!!!
Kenemet Posted July 5 #114
On 6/6/2023 at 11:23 PM, Hawken said: A hostile species could intercept the signals, come to Earth, and do as they please.
If they're that advanced, they'd know the signals weren't a threat.
ReadTheGreatControversyEGW Posted August 6 #115
On 5/17/2023 at 10:40 PM, Guyver said: A.I. machines are dangerous, and could kill us.
As they're programmed to be.
Trelane Posted August 8 #116
"A.I. Computers will Kill Us"
God, I hope so. If it means the end of speculative sci-fi nonsense, then so be it.