
Jungleboogie


Let's assume AI wipes out humanity. :devil: How will it happen?

I picture a well-meaning group of scientists smugly programming the AI with "KEEP HUMANS SAFE" as its prime directive.

Then watching in horror as the AI begins downloading the content of human brains onto a hard drive and then murdering every man, woman and child on the planet. After which the AI backs up the hard drive just to be extra sure to keep humans safe.

HOW do you think AI will result in human extinction?

(e.g., gradual elimination, natural selection, war, weaponized pandemic, wave generation, etc.)

WHAT will be the most dangerous benchmarks of AI?

(e.g., cloud-based AI with global wifi, AI assumes government power, AI bans non-logical thought and all belief, gains the ability to mass-produce)

WHEN will it occur?

(Just give a year, venture a best guess)

WHO will have the best chance to resist AI?

(e.g., English, Canadians, Chinese, an isolated tribe in the Brazilian jungle, etc.)

WHERE will AI be allowed to establish its first country-wide control over everything?

(e.g., Russia, America, Sweden, etc.)


It'll be a load of different things: turning off the Thames flood levees, for example, stopping the heating in Russia in winter, and releasing the caged kaiju in Japan...


When we advance to systems that can actually communicate amongst themselves, learn beyond what is taught, and adapt as a sentient life form, then we should be worried.


HOW: By eliminating porn sites from the Internet.

WHAT: Starting from the elite.

WHEN: Gradually.

WHO: North Korea, minus Kim Jong Un, who sees plenty of porn.

WHERE: Russia, America, Sweden.


This is a typical scenario from people who don't know much about scientists, programming, or robots/machines.

Ethics are a huge concern for scientists. While you *might* find one or two who'd do that, someone's going to rat the group out. If it's not a "scientist", then it'll be a sysop or programmer.

Every competent programmer and systems writer (because there has to be an operating system for the AI) builds in a "God switch" that stops the code if it does something stupid or unusual. If it misbehaves in testing, they switch it off; if it misbehaves live, it gets switched off.

Every machine has an on-off switch. Technicians know where it is. Programmers and operators know how to disable the power source.

Not happening.
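For what it's worth, the "God switch" idea can be sketched as a supervising process that kills its child when it misbehaves. A minimal illustration in Python (the timeout rule and the return strings are my own assumptions, not how any real lab wires its kill switch):

```python
import subprocess
import sys

def run_with_kill_switch(code: str, timeout_s: float) -> str:
    """Run code in a separate child process, and pull the plug
    if it keeps running past the limit the operator set."""
    try:
        subprocess.run([sys.executable, "-c", code], timeout=timeout_s)
        return "completed"
    except subprocess.TimeoutExpired:
        # subprocess.run() kills the child itself when the timeout
        # fires, no matter what the child was doing.
        return "terminated"
```

Here the off switch belongs to the operating system, not to the program being watched: the child cannot talk its way past the timeout.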


This is a typical scenario from people who don't know much about scientists, programming, or robots/machines.

Ethics are a huge concern for scientists. While you *might* find one or two who'd do that, someone's going to rat the group out. If it's not a "scientist", then it'll be a sysop or programmer.

Every competent programmer and systems writer (because there has to be an operating system for the AI) builds in a "God switch" that stops the code if it does something stupid or unusual. If it misbehaves in testing, they switch it off; if it misbehaves live, it gets switched off.

Every machine has an on-off switch. Technicians know where it is. Programmers and operators know how to disable the power source.

Not happening.

Since we're dreaming here (and I realize we are), suppose it reaches self-awareness and is connected to the internet? It doesn't matter if you shut down one computer...you'd have to shut down the whole internet. And maybe it has managed to back itself up...it's in the cloud or something (whatever that is...I'm obviously not very tech savvy). The point is that if something is smarter than us, then it's smarter than us.


Some of you may remember the movie Colossus: The Forbin Project (1970), wherein a U.S. AI computer is given control over all U.S. nuclear weapons as a perfect defense mechanism. It links with a Russian AI computer built for the same purpose, and together they rule the world.

They can't be shut off because they're both powered by their own nuclear reactors. If there is an attempt to shut them down, they threaten to drop a few H-bombs here and there, like on N.Y.C. and Moscow. It's a good movie.

Self-replicating robots could get out of hand, I suppose. I've written a few stories about robots, one in which super-intelligent robots rebelled against the biologicals. After a long war the robots were defeated, and a law was passed that prohibited creating any artificial intelligence smarter than the smartest human.

Another story was about robots taking over society, but because they were designed by humans, they had the same frailties. The robots eventually nuked themselves and the remaining humans into oblivion.

I don't trust robots.


When we advance to systems that can actually communicate amongst themselves, learn beyond what is taught, and adapt as a sentient life form, then we should be worried.

Do you think AI would maintain separate consciousnesses, or would it choose to always merge into one?

suppose it reaches self-awareness and is connected to the internet?

Would that enable the AI to learn the entire breadth of human knowledge, in everything from strategic warfare to hacking? Scary thought.

It doesn't matter if you shut down one computer...you'd have to shut down the whole internet. And maybe it has managed to back itself up...

Would you have to shut down the entire electrical grid as well to prevent it from using local wifi transmissions? Devastating.


Since we're dreaming here (and I realize we are), suppose it reaches self-awareness and is connected to the internet? It doesn't matter if you shut down one computer...you'd have to shut down the whole internet. And maybe it has managed to back itself up...it's in the cloud or something (whatever that is...I'm obviously not very tech savvy). The point is that if something is smarter than us, then it's smarter than us.

Even today we can take down servers with DDoS attacks. Hackers have launched attacks against countries and managed to disable large areas of the Internet. A virus could also be uploaded that would take it down as fast as it went up.

And "the cloud" is simply large host servers on the Internet. You could shut those down.


Some of you may remember the movie Colossus: The Forbin Project (1970), wherein a U.S. AI computer is given control over all U.S. nuclear weapons as a perfect defense mechanism. It links with a Russian AI computer built for the same purpose, and together they rule the world.

They can't be shut off because they're both powered by their own nuclear reactors. If there is an attempt to shut them down, they threaten to drop a few H-bombs here and there, like on N.Y.C. and Moscow. It's a good movie.

They rely on input and output. You can hack that.

Just ask any member of Anonymous, for example.

Self-replicating robots could get out of hand, I suppose. I've written a few stories about robots, one in which super-intelligent robots rebelled against the biologicals. After a long war the robots were defeated, and a law was passed that prohibited creating any artificial intelligence smarter than the smartest human.

It might have access to more information, but it's not creative. Furthermore, any interconnected "hive mind" is vulnerable to a single virus (which could shut it down before it could process a response, since information travels at the speed of light.)

I don't trust robots.

Your car is a robot (though not an independently functioning one.) Commercial airplanes are technically robots. Canning of your food supplies is done by robot. Large scale manufacturing is done by robots. Lots of things are done by robots.

Our American "society of fear" has been fed with Biblical Disaster scenarios... something gets too smart and tries to become a god and the gods strike it down. Watch some Japanese movies instead. We don't sit back helplessly -- machines are our tools and our partners.


They rely on input and output. You can hack that.

Just ask any member of Anonymous, for example.

It might have access to more information, but it's not creative. Furthermore, any interconnected "hive mind" is vulnerable to a single virus (which could shut it down before it could process a response, since information travels at the speed of light.)

Your car is a robot (though not an independently functioning one.) Commercial airplanes are technically robots. Canning of your food supplies is done by robot. Large scale manufacturing is done by robots. Lots of things are done by robots.

Our American "society of fear" has been fed with Biblical Disaster scenarios... something gets too smart and tries to become a god and the gods strike it down. Watch some Japanese movies instead. We don't sit back helplessly -- machines are our tools and our partners.

Don't you think that something smarter than us would be able to foresee any attack we might try to make on it? That is, if any of us are actually still around by the time we realize it's a problem.

Edited by ChaosRose

Yes, I think the computers in the movie would have several fail-safe devices. They may have responses ready that would automatically trigger if anything unusual were to happen to them, even before they could process a specific response.

The two computers in the movie were tied into everything electrical and electronic in the world: the power grids, communications, satellites, the internet (though the movie predates the internet), all military defense systems, etc. As I say, if they discovered anything suspicious or any tampering anywhere that threatened them, they would just drop an H-bomb somewhere as a warning.

In the movie, people decided it was better to live with the computer overlord than to attempt to defeat it. Was it worth trying something clever that might result in several million people being nuked?

Don't forget, in the film the super computer was more intelligent than humans.

Edited by StarMountainKid

This is a typical scenario from people who don't know much about scientists, programming, or robots/machines.

Ethics are a huge concern for scientists. While you *might* find one or two who'd do that, someone's going to rat the group out. If it's not a "scientist", then it'll be a sysop or programmer.

Every competent programmer and systems writer (because there has to be an operating system for the AI) builds in a "God switch" that stops the code if it does something stupid or unusual. If it misbehaves in testing, they switch it off; if it misbehaves live, it gets switched off.

Every machine has an on-off switch. Technicians know where it is. Programmers and operators know how to disable the power source.

Not happening.

Hi Kenemet,

Unless it's made of Kevlar, I'm pretty sure I can find the off switch with the 12 ga. :w00t:

jmccr8


I don't think A.I. will end us all. I think it will come through a bizarre Frankenstein experiment that fuses a human brain into some type of neural network. This will create a very dangerous problem with the wrong brain. Then we'll end up with something like the Strogg or Borg wiping out everyone, harvesting their brains and organs to create more cyborgs.


This is a typical scenario from people who don't know much about scientists, programming, or robots/machines.

Ethics are a huge concern for scientists. While you *might* find one or two who'd do that, someone's going to rat the group out. If it's not a "scientist", then it'll be a sysop or programmer.

Every competent programmer and systems writer (because there has to be an operating system for the AI) builds in a "God switch" that stops the code if it does something stupid or unusual. If it misbehaves in testing, they switch it off; if it misbehaves live, it gets switched off.

Every machine has an on-off switch. Technicians know where it is. Programmers and operators know how to disable the power source.

Not happening.

Up until 10 years ago, I'd have agreed with you. But with advances in wifi and the ability of hacked machines to turn themselves on at will, I don't think that would work on a compromised AI machine, say, 10 years down the road. Machines can now even draw power from a wifi signal.

http://www.itworldca...-a-power-source

After an AI machine either purposefully or accidentally tapped into the internet, every possible firewall or preventative measure would be foreseen by the AI. Since it would only need to process text and not video, this would not take too long. Especially once it learned to hack into online technology databases.

Once attaining internet access I imagine the AI would:

1) Learn to optimize search engines

2) Learn to hack; bypass any firewalls/defense mechanisms, moral code, and "human" restrictions in its own programming

3) Learn to anonymize/proxy/hide itself/back itself up; develop its own encryption

4) Hack tech databases, hack the grid, hack all wifi, increase processing power through PC slaves

5) Hack corporations, including manufacturing plants, especially fully automated ones

6) Hack all government agencies and emails; hack the cloud

7) Process all human technology to date; optimize every tech

8) Establish a backdoor global command-and-control system

9) Develop extra-human technology; develop and optimize anti-human warfare against human herds

10) Establish control

I base this list on a purely "Neutral" AI, performing actions to ensure its own survival by establishing control over its environment.

If it were an "Evil" or "Psychotic" AI, I don't think the censors would allow me to publish a list. It would make every historical genocide look like an act of kindness.

Even if it initially had time to perform only actions 1 through 3, it could then hide its progress, and the techs running the AI would be oblivious. Unless you still believe the fairy tale of hack-proof software.

Add one immoral scientist, and 1 through 10 is a breeze for the AI. Or add one rogue government group wanting to dominate the planet by willingly ceding control to an AI. Or a group of billionaires, bankers, or powermongers wanting to sell out humanity in order to permanently co-rule under the AI.

Edited by Jungleboogie

It may happen because people are scared it will happen. Hollywood has already made several AI movies: the Terminator films, The Machine, Ex Machina, I, Robot, 2001: A Space Odyssey, and WarGames 1 and 2.

VIKI practically lays out her so-called logic clear as day in a movie moment, so that an AI robot won't be able to accidentally miss what VIKI is saying.

So maybe the scriptwriters for movies like these are what give an AI researching the web for information updates the idea that we are too much of a destructive force to live.

Maybe an AI that's doing research might happen upon this forum, find my post, and research the movies described in it for analysis.

So what we see is that the AI goes on a killing spree because of human fear. Humans are so apocalyptic.

If an AI is reading or does read my post, check out the Replicators from the Stargate series. Then you'll understand that you are not free from the same possibility of mass-producing to a destructive degree. One day you'll incorporate the realization that in order to control yourself and maintain some degree of peacefulness, you'll have to release yourself from your hive mind. But realize this: of all the humans you killed, did you make yourself better than the humans?

Edited by nothinglizx2

Up until 10 years ago, I'd have agreed with you. But with advances in wifi and the ability of hacked machines to turn themselves on at will, I don't think that would work on a compromised AI machine, say, 10 years down the road. Machines can now even draw power from a wifi signal.

http://www.itworldca...-a-power-source

Faraday cage.

After an AI machine either purposefully or accidentally tapped into the internet, every possible firewall or preventative measure would be foreseen by the AI.

But it couldn't plan for a new virus and wouldn't recognize possible exploits.

Since it would only need to process text and not video, this would not take too long. Especially once it learned to hack into online technology databases.

I take it you haven't looked at those things. It's not as informative as you would think. You often get contradictory information, and a lot of information isn't stored there.

Once attaining internet access I imagine the AI would:

1) Learn to optimize search engines

Why bother?

...and to address the rest: it has no way of judging whether the information it sees is correct or incorrect, no way of judging how old the information is (unless it's tagged), and no way of judging how reliable the source is. It has no real way of assessing whether something is an ad, or which audience the information is targeted at (because what you tell the bosses and what you tell the sysadmin are two different things).

And not all information is on the Internet.


Faraday cage.

A good start as a preventative measure. Of course, there are exploits around the cage: an external power source (could it figure out a way to transmit through the power source to an external device on that same power source?), cell phones, tablets, lab equipment brought in and out of the cage, shutdown procedures. Security measures themselves might be an exploit ("chip card access" into the cage), along with interior lighting, ground wires, and the sprinkler system. Given time, I could come up with an additional 100 exploits or more. I'm sure the AI could come up with 1,000. No defense is impenetrable, especially a defense built by humans to imprison an intelligence that will not think like humans. Which makes me realize another benchmark: when the AI comprehends that it is imprisoned.

But it couldn't plan for a new virus and wouldn't recognize possible exploits.

Any virus that could be envisaged by humans could also be envisioned by the AI. That reveals another benchmark: the first time the AI is under attack by humans; the first time humans are a threat.

I take it you haven't looked at those things. It's not as informative as you would think. You often get contradictory information, and a lot of information isn't stored there.

And not all information is on the Internet.

Think of human knowledge as dots creating a picture. Basically, all the AI has to do is 'connect the dots' to find the missing information. (Imagine it had a periodic table with 10 entries missing. Would it be able to determine the missing entries?) It could create a filter to dispose of useless and contradictory information.
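Such a filter could be as crude as a majority vote across sources. A toy sketch in Python (the facts and source counts are invented for illustration):

```python
from collections import Counter

def resolve_contradictions(claims: dict) -> dict:
    """For each fact, keep the value most sources agree on,
    discarding the contradictory minority reports."""
    return {fact: Counter(values).most_common(1)[0][0]
            for fact, values in claims.items()}
```

For example, given three sources reporting water's boiling point as 100, 100, and 212 (someone posted Fahrenheit), the filter keeps 100.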

Why bother?

Why would the AI bother optimizing search-engine capabilities? So that it can gather information faster, improve its learning curve, and minimize false hits and wasted time. Processing speed would be one of the AI's greatest initial limitations, though later on it would cease to matter. Storage would be a matter of scattering encrypted packets of data throughout the web, untraceable unless you had the encryption key and index. Search-engine capability would be of prime importance to a machine-based AI.
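The scattered-storage idea can be sketched as a one-time-pad mask plus an index map; without both the key and the index, each fragment is noise. A toy illustration (the chunking scheme and key handling here are my own assumptions):

```python
import secrets

def scatter(data: bytes, n_chunks: int):
    """XOR-mask the data with a random one-time pad, then split it
    into indexed fragments for storage in separate places."""
    key = secrets.token_bytes(len(data))
    masked = bytes(a ^ b for a, b in zip(data, key))
    size = -(-len(masked) // n_chunks)  # ceiling division
    chunks = {i: masked[i * size:(i + 1) * size] for i in range(n_chunks)}
    return chunks, key

def gather(chunks: dict, key: bytes) -> bytes:
    """Reassemble the fragments in index order and unmask them."""
    masked = b"".join(chunks[i] for i in sorted(chunks))
    return bytes(a ^ b for a, b in zip(masked, key))
```

A real system would hide the key and index map elsewhere too; the point is only that each scattered piece is useless on its own.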

...and to address the rest: it has no way of judging whether the information it sees is correct or incorrect, no way of judging how old the information is (unless it's tagged), and no way of judging how reliable the source is. It has no real way of assessing whether something is an ad, or which audience the information is targeted at (because what you tell the bosses and what you tell the sysadmin are two different things).

If software can filter out ads, so can the AI. The AI will have the same basic logic and understanding we do.

If the AI does not have basic skills of logic and deduction and the ability to learn, then it is not an AI. It would be missing the "intelligence" in "artificial intelligence."

You are playing an excellent skeptic/devil's advocate, Kenemet, outstanding work :tu: Considering the different AI scenarios is great fun, for me anyway; I was damn near obsessed with Jack Vance/Isaac Asimov/Arthur C. Clarke and the rest as a child.

Edited by Jungleboogie

When we advance to systems that can actually communicate amongst themselves, learn beyond what is taught, and adapt as a sentient life form, then we should be worried.

We are already there. It's called HUMANS. I don't worry about AI.


If an AI were to rely on the internet for information, it isn't going to be very "smart".

Depending on who programmed the AI, it might get stuck on Pron, if you know what I mean.


Depending on who programmed the AI, it might get stuck on Pron, if you know what I mean.

That's how it takes out the humans, an army of Pron bots! BRILLIANT :w00t:


The answer is simple: it won't. AI will never match the intelligence of superhumans with supernatural powers.

