
Three Laws of Robotics

8 posts in this topic

Posted (edited)

Hello readers,

I have read many books on robots, and I came across the Three Laws of Robotics, especially in Isaac Asimov's stories. The three laws are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
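The three laws above form a strict priority hierarchy: each law yields to the ones before it. Purely as an illustration, that ordering can be sketched as a lexicographic comparison in Python. Everything here (the action fields, the function names, the candidate actions) is invented for this example; it is not from Asimov or any real robotics system:

```python
# Hypothetical sketch: the Three Laws as a lexicographic priority over
# candidate actions. Earlier tuple elements dominate later ones, so the
# First Law outranks the Second, which outranks the Third.

def law_priority(action):
    """Return a tuple; higher tuples are preferred by max()."""
    return (
        not action["harms_human"],   # First Law dominates everything
        action["obeys_order"],       # Second Law, subordinate to the First
        action["preserves_self"],    # Third Law, subordinate to both
    )

def choose(actions):
    """Pick the candidate action the laws prefer."""
    return max(actions, key=law_priority)

candidates = [
    {"name": "obey_and_self_destruct",
     "harms_human": False, "obeys_order": True,  "preserves_self": False},
    {"name": "refuse_and_survive",
     "harms_human": False, "obeys_order": False, "preserves_self": True},
]

# The Second Law outranks the Third, so the robot obeys even at its own cost:
print(choose(candidates)["name"])  # obey_and_self_destruct
```

In this toy model, "altering the laws" in the sense the question asks about would just mean reordering or removing tuple elements, which is exactly why the question of who controls that ordering matters.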

My question is: if scientists start building robots based on the three laws in large numbers for commercial as well as domestic use, can the laws be altered to change the function and the thought process of the robot? Could such alterations reach a level of human destruction that would normally be prohibited by these laws?

Please share your valuable views.

Edited by space11498


Isaac Asimov created the 3 laws of robotics in his book I, Robot (if memory serves).

As far as applying them when the technology has reached that level: maybe they might come into play on a domestic level, but IMO the first truly advanced humanoid robots will most likely be a military design, and the 3 laws would make those machines pretty much useless if they weren't able to kill human beings, which, being military, would seem to defeat the purpose of their very design.

So applying the 3 laws to Marvin the metal butler: maybe. To killbot XR7: not at all.


There will always be hackers, and the robots themselves, given a high enough level of intelligence, may figure out a way around the laws. I don't trust robots.

Could such alterations reach a level of human destruction that would normally be prohibited by these laws?

I think you're under the mistaken impression that these are actually "laws" in the legal sense. In fact, they are science fiction, i.e. make-believe.


Thank you all for your responses.

I think you're under the mistaken impression that these are actually "laws" in the legal sense. In fact, they are science fiction, i.e. make-believe.

If humans will work with robots in the future, then these three laws will become a high priority. So these laws will be legal, won't they?

If I'm wrong, please correct me.

Isaac Asimov created the 3 laws of robotics in his book I, Robot (if memory serves).

As far as applying them when the technology has reached that level: maybe they might come into play on a domestic level, but IMO the first truly advanced humanoid robots will most likely be a military design, and the 3 laws would make those machines pretty much useless if they weren't able to kill human beings, which, being military, would seem to defeat the purpose of their very design.

So applying the 3 laws to Marvin the metal butler: maybe. To killbot XR7: not at all.

I agree with you. But if military robots are not built with the three laws, won't these robots harm friendly officials if they are given misleading commands by the enemy?

So the laws must be used in these robots too, after certain necessary alterations.

If I'm wrong, please correct me.


Posted (edited)

I think you're under the mistaken impression that these are actually "laws" in the legal sense. In fact, they are science fiction, i.e. make-believe.

This.

Seriously, space, don't mistake fantasy for reality.

Yes, a lot of developments in technology have preludes in science fiction. But that doesn't make science fiction law.

And FYI, any program can be hacked and altered, at least for the time being. When quantum computing becomes more of an everyday reality this might change, but will it ever be completely impossible? No, it never will.

Edited by Render



If I recall correctly, Asimov's Laws of Robotics were integral, both physically and electronically, to the "positronic brain" of his future robots, and without the three laws the robots couldn't function at all. In other words, the laws weren't written on an electronic list somewhere in the robot's memory bank; rather, the entire AI of the robot reflected the three laws. This led to a type of vandalism, for lack of a better word, with young people cornering a robot and attempting to logic it into a breakdown by confronting it with paradoxes created by the three laws. When the robot could not reconcile the paradox, its brain shut down.

Asimov's three laws did evolve and change as his writing moved forward and other writers began using his concept. He later added a "Zeroth Law" (so numbered because it takes priority over the other three): "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
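In programming terms, the Zeroth Law can be thought of as one more element prepended to a lexicographic priority, so that harm to humanity outranks even the First Law's protection of an individual human. A minimal, purely illustrative Python sketch; all field and function names here are invented for the example:

```python
# Hypothetical sketch: the Zeroth Law as the highest-priority element in a
# lexicographic ordering over candidate actions. Earlier tuple elements
# dominate later ones.

def extended_priority(action):
    """Return a tuple; higher tuples are preferred by max()."""
    return (
        not action["harms_humanity"],  # Zeroth Law: humanity as a whole
        not action["harms_human"],     # First Law: the individual human
        action["obeys_order"],         # Second Law
        action["preserves_self"],      # Third Law
    )

# Under the Zeroth Law, harming one human can be preferred over an
# alternative that harms humanity as a whole:
a = {"harms_humanity": True,  "harms_human": False,
     "obeys_order": True, "preserves_self": True}
b = {"harms_humanity": False, "harms_human": True,
     "obeys_order": True, "preserves_self": True}

print(max([a, b], key=extended_priority) is b)  # True
```

This is exactly the tension Asimov exploited in his later novels: once a law sits above the First Law, a robot can justify harming an individual.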

There was also a fourth law, though I can't remember whether it was Asimov's or not (multiple laws have been created by other writers, but I tend to stick to Asimov's): "A robot may not restrain or hold captive a human unless the human's life is in imminent danger." This was, presumably, to keep robots from deciding that the best way to protect humans was to place them under house arrest.


Posted (edited)

If humans will work with robots in the future, then these three laws will become a high priority. So these laws will be legal, won't they?

If I'm wrong, please correct me.

Humans already work with robots, and robots do occasionally harm humans. Robots can only perform what they are programmed to do.

These "laws" you speak of are fictional ideas with no basis in reality. Do you understand the difference between fiction and non-fiction, between reality and fantasy, between real and unreal?

So no, they aren't legal, and they would only become so if some real-life authority passed them as actual laws.

They do form the basis of an interesting discussion. What you're really wondering about is machines that can think for themselves, i.e. have independent thought. It remains a difficult task for a machine to pass the Turing test. I suggest this for further reading: https://en.wikipedia.org/wiki/Turing_test

Edited by ninjadude

