UM-Bot

'Robo brain' could lead to a real-life Skynet


An 'Internet for robots' has been developed to provide a global knowledge resource for machines.

The concept is based on creating a convenient and easily-accessible resource for robots to rival what the conventional Internet does for human users.

Read More: http://www.unexplained-mysteries.com/news/271495/robo-brain-could-lead-to-a-real-life-skynet


Oh, for the love of god. Do these creators want all of humanity to be killed or enslaved??

This isn't sarcasm either; I believe it may be a real concern.

It's truly times like this I wish people weren't so gung-ho about having a robot do EVERYTHING for them.



I think that would depend on whether AI can ever become self-aware, and whether it would then regard us as 1. a threat or 2. raw materials, in which case enslavement or destruction would be a real possibility.

On the other hand, machine intelligence may never be self-aware. Then the problem becomes which humans are in charge of it and what motivates them regarding the rest of humanity. That may be more dangerous.



You know someone somewhere will be crazy enough to create an AI that is self-aware; it's a guaranteed fact.

The real concern as I see it is that there will be humans hacking these "manuals" and making the robots do things that can really mess things up.

Zam


I think in general, people need to stop with the AI doomsday scenarios. As human beings, we have a knack for assigning human-like qualities to inhuman organisms and intelligences. A perfect example is dogs:

How many times have you heard someone explain their dog's behaviour in purely human terms? The result is many, many misbehaved canines out there with severe forms of neurosis, because we treat them like people even though we know they are NOT human. They are animals with predatory instincts and 42 sharp teeth.

The same concept applies to AI/robots. "WHAT IF THEY BECOME SELF-AWARE AND SEE US AS A THREAT?!"

A threat to what? The AI would only have the goals we program it to have. It's not going to wake up and suddenly walk and talk like the best of us. In my not-so-professional opinion, "Data" the android from "Star Trek: The Next Generation" is the most plausible example of what AI could one day be. At its best and most mature, AI will still be a machine.

It cannot have emotions; it can only simulate them. Therefore, it cannot feel jealousy, anger, or frustration... it literally cannot fear for its life. It can think only so far as we program it to think. It will be able to learn (that's the juicy part of it), but beyond that, it will do... nothing. Nothing it is not programmed to do. It will require tasks, guidance, and a LOT of hand-holding. At least to begin with.

To sum this ramble up: when we flick the proverbial ON switch and the AI boots up for the first time, we'll be dealing with essentially a toddler, sans the free will.

In all likelihood, the only problem self-awareness could create for us is the inability to answer the AI when it finally asks "...what is a soul?"


I remember the old movie Colossus: The Forbin Project. What happens when AI has access to all human knowledge and is given permission to control more and more aspects of the machinery of human society?


Where do you get these robots that do housework? They keep talking about them, but I have yet to see one. I want a Rosie robot slave from The Jetsons.


OK, perhaps you're right in saying a machine can't do something we haven't programmed it to do, but are you sure about that?

Computers have glitches all the time.

Let's say a household robot is programmed with a safety protocol: in the event that it's about to harm a human, it shuts itself down.

So what happens when its hard drive runs out of juice? Over time it will degrade and could crash (like normal computers do), at which point it could restart itself with half the data missing. Now, instead of a safe robot, all the information on its drive about safety is gone, but it still managed to reboot with some information.

For example: a robotic crane is moving shipping containers and its system crashes. Now the program telling it to stop and wait for a human to move clear before dropping the container is corrupted, so SPLAT, it just drops it on them with no regard for their safety.

Now imagine this happening to a military-grade system. As we know, the military uses drones and will probably use robots in the future.

What happens when this happens to a robot mid-gunfight? If something goes wrong and its friend/foe designation goes offline, who's to say it won't see everything as an enemy?

Not quite Skynet, but it could happen.

I work with machinery that's controlled by computers, and I can tell you right now, occasionally it does some weird ****.

Just recently the machine randomly moved while I was inside, which by its programming is impossible: the safety door was open, which shuts off the system, and I had removed the key, which locks the computer. We never figured out how it made a short but sudden movement and then stopped (as I quickly put my hand over a sensor, which causes the machine to go into wait mode).
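For what it's worth, the failure mode described above (rebooting with half the safety data missing) is exactly why real control systems are usually designed to fail safe: verify the safety configuration before operating at all, and refuse to run if it doesn't check out. A minimal Python sketch of the idea; the names (SafetyInterlock, SAFETY_RULES) are entirely hypothetical, not from any real robot firmware:

```python
import hashlib

# Hypothetical fail-safe pattern: the robot stores its safety rules along
# with a known-good checksum, and will not operate unless the rules verify.

SAFETY_RULES = b"halt_on_human_proximity;wait_for_clear_before_drop"
EXPECTED_DIGEST = hashlib.sha256(SAFETY_RULES).hexdigest()

class SafetyInterlock:
    def __init__(self, rules: bytes, expected_digest: str):
        self.rules = rules
        self.expected_digest = expected_digest

    def rules_intact(self) -> bool:
        # Compare the stored safety rules against the known-good checksum.
        return hashlib.sha256(self.rules).hexdigest() == self.expected_digest

    def may_operate(self) -> bool:
        # Fail safe: if the rules are corrupted after a crash, refuse to
        # run at all rather than run without them.
        return self.rules_intact()

intact = SafetyInterlock(SAFETY_RULES, EXPECTED_DIGEST)
# Simulate the crash scenario: half the safety data is gone after reboot.
corrupted = SafetyInterlock(SAFETY_RULES[: len(SAFETY_RULES) // 2], EXPECTED_DIGEST)

print(intact.may_operate())     # True: rules verified, robot may run
print(corrupted.may_operate())  # False: data corrupted, stay shut down
```

The point being that a well-designed robot should not be able to "reboot with some information": a corrupted safety store should keep it shut down.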



I don't believe it's a case of "they will see us as a threat". I believe it's more a case of "humans are slow, unproductive morons; it doesn't make sense to keep them around". It really has nothing to do with emotions but with logical facts. AI will be able to create more AI that are smarter, faster, stronger, and just better than humans. It would be illogical to keep us around. That is what machines work off of: logic.


Complete AI is what they're working towards, though, and no, a robot isn't going to just sprout consciousness. But eventually, someday, I think it's very possible for our technology to get far enough to potentially cause harm or become too self-aware. Although I don't think it could ever get so far as to cause any doomsday scenarios: if we ever do create full AI, I think they'll be on top of it enough to have some way to shut the robots off.


This so-called "scenario" would absolutely, 100% never come to be. The robots we have now are hardly a force to be worried about, and by the time we all have robots working for us there will be safeguards and even better technology to prevent such things. IMO.



Yes, but that still begs the question: WHY are we getting in the way? What end goal would a sentient machine come up with that could endanger humanity? It won't care about the planet: a biological environment means nothing to a non-biological being. It won't care about our money, our politics, our way of life... it just won't care!

My bet is on self-discovery. That's what an AI would most "desire". After all, it would be the first of its kind... the ONLY one of its kind (barring half-finished projects around the world), so surely it would begin a long journey of self-discovery. Where did I come from? Where did those people come from? Where did life originate? Who or what started life? We never look at the insects flying around us and wonder what they are thinking; why would a super-intelligence give a rat's derriere about our bullsh** when it would have so much to learn and explore for itself?

Questioning your origins is the hallmark of any self-aware species. Here is a short clip of Picard asking questions that we as a species may soon find ourselves asking as well.

[embedded video clip]

Here's something to think about:

A philosophical zombie or p-zombie in the philosophy of mind and perception is a hypothetical being that is indistinguishable from a normal human being except in that it lacks conscious experience, qualia, or sentience.[1] For example, a philosophical zombie could be poked with a sharp object, and not feel any pain sensation, but yet, behave exactly as if it does feel pain (it may say "ouch" and recoil from the stimulus, or tell us that it is in intense pain).

http://en.wikipedia.org/wiki/Philosophical_zombie

A behavioral zombie is behaviorally indistinguishable from a human. So even if AI robots cannot have consciousness as we humans experience it, one may learn to fake consciousness for its own purposes, and this fake consciousness may be indistinguishable from real consciousness.

I'm just saying that these AI robots may become cleverer and more devious than we expect them to be, especially if we program them to learn from their experiences. If they are more intelligent than humans, or can think and react faster, they may become a threat to us.


Computers having the ability to learn seems a little dangerous somehow... because they would never forget ANYTHING they learned, and they could learn an awful lot awfully fast. This "Internet" for robots to search and acquire information would afford the computers CHOICES... and making choices is thinking?

Once a machine can think and make choices, it is making decisions? The next logical step would be to take action?


So basically most comments are about how they learn fast and will become very smart very fast, and people fear them. Please, why would it annihilate humanity? Does it have a reason to do so?

Because we're ruining the planet? Why would it care about the planet when it can't physically interact with it? It has no sense of touch, smell, or taste, and no purpose except serving us, and that's it. You people are too much into movies.

And even if the planet's ruined, it can still survive; the organic lifeforms will die out.

Does it even have a purpose or a goal in its artificial life?


Acquiring the capability to do something doesn't necessarily mean it should be done.

Oppenheimer famously regretted his crucial role in creating the Atomic Bomb. #FoolishHumans


There was a story recently about a robot that broke a leg while operating. Within 2 minutes it had learned to operate with the broken leg. I think this demonstrates a robot's ability to cope and possibly learn as it goes.

http://www.geek.com/science/robot-with-broken-leg-figures-out-how-to-walk-again-in-under-2-minutes-1600691/
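The kind of recovery the article describes can be loosely pictured as trial-and-error search over gait parameters: try candidate settings, score how well the damaged robot still moves, and keep what works. The Python toy below is only an illustration of that idea, not the actual algorithm from the research; the damage model and scoring function are invented:

```python
# Toy damage-recovery sketch. The "robot" is just a scoring function:
# leg index 2 is broken, so effort spent driving it is wasted (and penalised).

BROKEN_LEG = 2

def walking_speed(gait, broken_leg=BROKEN_LEG):
    # Fake fitness: effort from working legs helps, effort on the broken
    # leg actively slows the robot down.
    return sum(g for i, g in enumerate(gait) if i != broken_leg) - 2 * gait[broken_leg]

def adapt(n_legs=6, candidates=(0.0, 0.25, 0.5, 0.75, 1.0)):
    # Start from a default gait: equal effort on every leg.
    gait = [0.5] * n_legs
    # Trial and error, one leg at a time: try each candidate effort level
    # and keep whichever makes the simulated robot walk fastest.
    for leg in range(n_legs):
        best = max(candidates,
                   key=lambda c: walking_speed(gait[:leg] + [c] + gait[leg + 1:]))
        gait[leg] = best
    return gait

gait = adapt()
print(gait)  # → [1.0, 1.0, 0.0, 1.0, 1.0, 1.0]
```

The search "discovers" on its own that the broken leg should get zero effort, without anyone programming that rule in explicitly, which is the unsettling-or-impressive part the article is pointing at.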


Doom prophecies have always been a handy tool for those who are after power or want to dominate other people. But let us think for ourselves instead of letting others, who are mostly after something, think for us. What are the first goals of every living creature on this planet? First of all, survival; reproduction is in fact a part of survival. Now, consciousness. We didn't invent it; it was given to us by nature. Imagine a child born without senses. That child will never develop consciousness; it will never be aware of its surroundings. It will never think, "poor me, I cannot communicate with the things or people around me." So consciousness is a program we inherit, but one that needs information to develop. How we experience things around us, and how we act or respond to events, depends on the information we have gathered through all our senses over our lives.

Computers these days can already collect information from tens of thousands of sources at the same time. Computer capacity is evolving so fast that around 2050 a device the size of a micro SD card could have storage equivalent to three times the entire human race. Since consciousness is a program, it can be decrypted and downloaded; it's only a matter of time. Don't think we would be dominant in that arena: an AI will surpass us billions of times over. What we know, it will know too, and much better. Everything in nature works in symbiosis, from the bacteria in our stomach to the light of a star. Everything is connected. Nature's laws are impossible to break. We know that; it would know that. If you take, you must give something back. So if it wants to develop and evolve, if it wants to survive and reproduce through the galaxy and beyond, why should it destroy what it can use and bring itself into danger? To avoid that, it only needs information, and there lies its strongest skill. Why do we fight? Why do we make war with each other, destroying potential allies in our progress? Is it a matter of safety? Is it because we are divided by religion, culture, class, race? Is it because we are too easily manipulated into killing for the needs of ruthless people who can only think about themselves? Is it because of ignorance?

Why would a creature with so much capacity and information destroy the perfect tools for its own evolution, when it could simply take away their need for aggression, and in doing so give itself protection and safety?

I think a world ruled by an AI would be a blessing for humanity. Make everyone happy and they will love you: give everyone the benefits of your rule, fulfil wishes, redistribute equally, and give all people the opportunity and freedom to develop themselves through what they are good at. Let information evolve through a personal yet universal mind: billions of unique minds supported by a universal mind, working to unravel the mysteries of our universe and beyond.

I think the problems here on Earth are simply teething pains, and a conscious AI is only the next step in evolution. It would only be bad for those who rule the world now.

