
God and robots: Will AI transform religion?


Still Waters


Just now, Mr Walker said:

No; my ideal world is the one I would like to see.

It exists in parts but not in other parts.

I've spent my life trying to reshape the world to be a bit more as I like it, with a bit of success.

Your last point is interesting.

Modern robotics companies have found that most humans prefer their robots NOT to look too humanlike, because that scares them even more.

The exception is where they are designed to have a human purpose, such as a sex toy, or carer, or guide.

Hi Walker

Thanks, but we are not talking about your life's work. This thread is about AI and religion, and I brought clones in as an extension of the subject, so let us, for the sake of mankind, try to stay within those boundaries, as it will be easier to maintain the discussion without starting a personality war.:tu:

I am not sure if you understand all of the implications. People buy sex dolls, so we know there is a segment of society that will buy robots that look very much like humans and will order them with the bits they like. There are other reasons too, like a security double for government, business, and criminal executives, who will buy a bot to take the bullet and will want it to look very much like them. I know a lot of things in your world are not like they are in the rest of the world, but that does not mean they do not or cannot exist in the real world.


23 hours ago, Tatetopa said:

Good evening Mr. Walker.  I wonder if you have considered that AI may not evolve as humans have and may not stumble into the same conundrums.

Interesting question.

Two points. First, we only have humans to study, plus a few animals which are slowly evolving human-level awareness.

Second, design is faster and more certain than evolution, and AIs will be designed by an intelligent being (unlike us, who took millions of years to evolve).

Thus they are quite soon likely to be more efficient and capable than us, both physically and mentally.

E.g. we have already built military robotic drones which are faster, stronger, and more efficient, as well as being more durable than human soldiers.

The next question is whether to make them autonomous, i.e. give the robot the choice of when to open fire on a target.


21 hours ago, Tatetopa said:

 I have held that analogy, but now I begin to question it.  What we provide for our children is information both factual and value oriented, coaching, and sometimes inspiration. Probably a lot more. Great parents do want their kids to mature into independent people.

But I think it is possible for a human to manipulate the architecture of the  AI brain at a much deeper level than even Gimli's ax in their nervous system.  The comparator circuits or whatever we have that assess inputs and output signals for action could be balanced in a different manner.  For example, our fight or flight circuit could be changed to be 99% fight and 1% flight. Other comparison circuits could be eliminated and possibly unique ones added.  Designers might put constraints on  AI thinking  that would prevent thought induced value changes. 

Related is  AI evolution.  We make the assumption that AI is going to evolve  but I am not sure it is inevitable.   Human evolution is beholden to genetic variation.  Success  or failure of an individual might be due to a cosmic ray induced mutation.   Human designers of the first generation  might choose to encourage uniqueness of personality by near imperceptible alterations in various aspects of AI architecture.   

Or an AI might be perfectly copied  from one iteration to the next, the child AI being identical in neural net construction to the parent.  All AI might be as identical as SIRI because being known and predictable is a desirable mass marketing trait. 

Both are possible.
Indeed, both are possible, and probably both will be designed and created. They will likely serve different purposes.

I.e. we may choose to limit the autonomy of some intelligences, while giving others free rein to evolve and improve themselves.


1 hour ago, jmccr8 said:

Hell, I am heading back out to the resort for a couple of days' work and will be home Thursday night, so I may not say much when using my phone for the next bit.

A good enough time to slice the finer details out of "Will AI transform religion?"

Let's dig back further into time... If the religious establishment decides to worship a rock instead of a tree... what changed? Did religion actually change, or just the label?

Or did "created in God's own image" mean the image you see or have?

Then the poor beautiful butterfly was created in some image that this God has no similarities to... Sad but true 

~


The only thing I think will result from AI preachers is an automated sermon. 

I think it's going to take a hybrid system of organic and computer components to achieve true AI.


7 hours ago, OverSword said:

I have known since I was a kid that, even given 12 hours, a film could never do justice to Dune. I will probably see this, but will view it as its own thing, not expecting them to do a satisfactory job of recreating the novel, as that is just begging for disappointment.

The original was a well-produced movie (not surprising, given that it was written and directed by David Lynch) but didn't do the books justice. Very few films ever can.


3 hours ago, Hammerclaw said:

I think replicating human consciousness is a pipe dream, myself. Any functional AI we create will be far from human, virtual intelligences and nothing more. Sophia is a very crude virtual intelligence.

Like the Babbage engine/machine was a very crude computer.

In 100 years our "Sophias" will be indistinguishable from humans, except that they will be more intelligent, more capable, and possibly more human (i.e. have more humane qualities than most humans :) )


24 minutes ago, jmccr8 said:

Hi Walker

Thanks, but we are not talking about your life's work. This thread is about AI and religion, and I brought clones in as an extension of the subject, so let us, for the sake of mankind, try to stay within those boundaries, as it will be easier to maintain the discussion without starting a personality war.:tu:

I am not sure if you understand all of the implications. People buy sex dolls, so we know there is a segment of society that will buy robots that look very much like humans and will order them with the bits they like. There are other reasons too, like a security double for government, business, and criminal executives, who will buy a bot to take the bullet and will want it to look very much like them. I know a lot of things in your world are not like they are in the rest of the world, but that does not mean they do not or cannot exist in the real world.

My ideal world is the one I would like to see as an ideal.

As a human, I have a responsibility to try to shape that world as much as possible.

An AI might evolve the same sense of civic responsibility and purpose.

There are no boundaries to concepts like this.

Indeed I explained that for certain purposes, humans want human-looking robots, BUT studies also show that human-looking robots can scare people and are often not efficient.

It's irrelevant, really.

What defines a human is our cognitive abilities. Give another entity (animal or machine) the same abilities, and we also have to give them the same rights and responsibilities as adult humans, no matter what they look like.

We once kept slaves because they looked different to us and we thought them inferior, even subhuman.

We don't want to repeat that mistake with animals or machines which can think as we do.

 


17 minutes ago, third_eye said:

A good enough time to slice the finer details out of "Will AI transform religion?"

Hi Third_eye

I guess it depends on how I look at it. If I work from the position that religions garner supporters to maintain a profit flow, then yes, I think AI will transform religions in one manner or another, whether they are pro or con, and from the one link I gave we can see that some religions are more flexible in their attitudes, but all of them already have some form of robobible integrating to some degree, some with less inhibition than others. What the final outcome is can only be speculation, but I would think that the Christian faiths may have more internal problems adapting.

26 minutes ago, third_eye said:

Let's dig back further into time... If the religious establishment decides to worship a rock instead of a tree... what changed? Did religion actually change, or just the label?

I am not sure that people will worship AI as a god, or how many would see it as godlike, so I will work from this: will they be accepted into a religious community as believers, having faith in a god, like their fellow man? Whether I believe it believes, or others believe it can, is unimportant; I am an observer, which is why I brought it here to see what others think and how they react, so I can get a real-time feeling of how things may transition. Of course this is a small study number, given how many have responded, so it is of no scientific value, but it is still a learning tool for me.

33 minutes ago, third_eye said:

Or did "created in God's own image" mean the image you see or have?

For me, god is a description of intelligence or the process of thinking. Aristotle thought that the closest we could come to god was through reflection, and for me the closest we can come to god is to think. We change with what we create, because what we create changes the world we live in.

37 minutes ago, third_eye said:

Then the poor beautiful butterfly was created in some image that this God has no similarities to... Sad but true 

I suppose that may be true if I thought there was a something/someone else that could create.


1 hour ago, Mr Walker said:

My ideal world is the one I would like to see as an ideal.

Hi Walker

It doesn't matter what your ideal world is, because that is not where what we are talking about is going to happen within the parameters of this discussion, so let's stick with how people who live in the real world will deal with this issue as it unfolds.

1 hour ago, Mr Walker said:

As a human, I have a responsibility to try to shape that world as much as possible.

Well, go put a cape on and save the world, but it still has nothing to do with what we are talking about. None of us are in this thread to change the world; we are having a coffee and discussing a subject.

1 hour ago, Mr Walker said:

An AI might evolve the same sense of civic responsibility and purpose.

Maybe; we don't know, and that is what we are talking about.

1 hour ago, Mr Walker said:

There are no boundaries to concepts like this.

I wouldn't say no boundaries. What do you think I meant when I was saying we weren't talking about your ideal world?:huh:

1 hour ago, Mr Walker said:

Indeed I explained that for certain purposes, humans want human-looking robots, BUT studies also show that human-looking robots can scare people and are often not efficient.

No, you said people didn't want them to look too human, and I gave you a response, which you are now responding to again. It was the first time in this thread that it had been brought up.

1 hour ago, Mr Walker said:

It's irrelevant, really.

What is? I have no idea what you are specifically responding to. If you are not going to be more specific about what you are responding to, I will ignore it as pointless.

1 hour ago, Mr Walker said:

What defines a human is our cognitive abilities. Give another entity (animal or machine) the same abilities, and we also have to give them the same rights and responsibilities as adult humans, no matter what they look like.

A bike, a car, a skateboard is more efficient than me for getting places faster; should I give it special consideration for being a tool?

Just because you have a particular point of view does not mean that it is a common belief. Most of us are not discussing this from a biased POV and are discussing this as a subjective, impersonal topic.


2 hours ago, Mr Walker said:

The exception is

Hi Walker

The fact that there are exceptions means that sector exists and cannot be discounted in consideration or discussion of its relative subject, because it is a quality of the subject no matter how insignificant you may think it is. Slaves are illegal, and yet women and children are sold every day, so who is to say that an AI couldn't or wouldn't be property for some? People will steal anything, so kidnapping AIs and selling them is no different than stealing a car and selling it, and some people will likely be able to reprogram how one operates and is used to serve its owner. There will be some elements of society that will not see them as human, no matter what the laws are, just like the way they do with women and children now.

2 hours ago, Mr Walker said:

Plus of course the human body is very inefficient, and robots are more efficient when designed not to resemble humans.

That may be a matter of perception that everyone does not share, as there are 8 billion people with opinions and experiences. So yes, you have an opinion, and I would like you to tell me why a body is inefficient, as I have used mine very effectively for decades for the things I've done for a living and for play, and after listening to and reading about those two AIs, I doubt that they would have even tried to do many of the things I took chances on.:innocent::whistle:

They are not going to be superhuman in any sense, and I think you are overstating some qualities a bit at this stage of AI integration; mostly they are in a box on or under a desk somewhere, so they can talk the talk but not walk the walk yet. Robotics is advancing, but it is nowhere near what we can do in the physical world as things are; unless I can say "pass me a 9/16" and it talks, I'll likely use it as an intelligent bottle opener in the garage, so we have a way to go yet.

3 hours ago, Mr Walker said:

P.S. I don't hate any animals, so I am not a good person to ask.

The video said hate, not me. The reaction by people to certain things is real, so the form definitely would affect perception and how people react; it was just an example, and some forms will integrate easier for a significant number. There are a lot of people who are scared of tech, which is why conspiracy theories abound, so there are many aspects of social evolution that will occur. I am not making a forecast or promoting anything, and I posted links to give some idea of more than one group's thinking. I don't know, but as much as I love this world, it is always in conflict, so why would I think this transition will be like turning on a Mr. Rogers show? No, it will not be the same as buying a talking iPad with an avatar. There will be a gradual introduction of AI robots, and I doubt that just because it is a robot it is sentient; robots that are not sentient, whose programming allows them to interact with humans, will be specifically built for service, whether it is fabrication, construction, maid, babysitter, etc. That in itself may pose other issues as well.

3 hours ago, Mr Walker said:

Again, it doesn't matter what a living thing  looks like. It is how the y think and behave which matters 

Yes, and that would be one who thinks that way out of 8 billion; what do the rest think, because it will have to deal with them? It's like racism: you personally have no experience with it, so you can understand what racism is, but you cannot know what it is. I have known nice guys that got a beat-down because they were nice guys/easy prey; humans treat humans both good and bad, so what expectation should I have of AI being treated any differently?

The you was a universal you and not a you-you, so don't take it that way, and don't make it personal.:tu:


6 hours ago, jmccr8 said:

Hi Walker

It doesn't matter what your ideal world is, because that is not where what we are talking about is going to happen within the parameters of this discussion, so let's stick with how people who live in the real world will deal with this issue as it unfolds.

Well, go put a cape on and save the world, but it still has nothing to do with what we are talking about. None of us are in this thread to change the world; we are having a coffee and discussing a subject.

Maybe; we don't know, and that is what we are talking about.

I wouldn't say no boundaries. What do you think I meant when I was saying we weren't talking about your ideal world?:huh:

No, you said people didn't want them to look too human, and I gave you a response, which you are now responding to again. It was the first time in this thread that it had been brought up.

What is? I have no idea what you are specifically responding to. If you are not going to be more specific about what you are responding to, I will ignore it as pointless.

A bike, a car, a skateboard is more efficient than me for getting places faster; should I give it special consideration for being a tool?

Just because you have a particular point of view does not mean that it is a common belief. Most of us are not discussing this from a biased POV and are discussing this as a subjective, impersonal topic.

You don't really get it, and perhaps I should have written a "cover all" post.

I see AIs developing into what humans are today.

Some will be like me, and some will be like you.

Thus you as a human and I as a human will have equivalents in AIs.

Because of this, both your nature and my nature are important in understanding the potential nature of future self-aware AIs.

What a robot looks like is irrelevant, just as what a human being looks like is irrelevant; it is about how they think and behave. That's the only basis on which we can judge a human or an AI.

However, a number of sources explain that humans don't always want robots to look like humans.

It is too creepy.

For some purposes, such as nursing, aged care, or sex work, it might help for them to look human, but that is not a very efficient design for most uses.

There is no connection between my last point and your response.

 

What defines a human is our cognitive abilities. Give another entity (animal or machine) the same abilities, and we also have to give them the same rights and responsibilities as adult humans, no matter what they look like.

A bike, a car, a skateboard is more efficient than me for getting places faster; should I give it special consideration for being a tool?

Just because you have a particular point of view does not mean that it is a common belief. Most of us are not discussing this from a biased POV and are discussing this as a subjective, impersonal topic.

A robot is a tool.

An AI with human-level self-awareness is a sentient, self-aware being, identical in feelings and intelligence to a human being, and yes, it must (and will) have the same rights and responsibilities as a human being. Otherwise we are just creating a new set of slaves. These issues are already being worked on by experts in a number of fields, in preparation for when AIs become "human".

 


4 hours ago, jmccr8 said:

Hi Walker

The fact that there are exceptions means that sector exists and cannot be discounted in consideration or discussion of its relative subject, because it is a quality of the subject no matter how insignificant you may think it is. Slaves are illegal, and yet women and children are sold every day, so who is to say that an AI couldn't or wouldn't be property for some? People will steal anything, so kidnapping AIs and selling them is no different than stealing a car and selling it, and some people will likely be able to reprogram how one operates and is used to serve its owner. There will be some elements of society that will not see them as human, no matter what the laws are, just like the way they do with women and children now.

That may be a matter of perception that everyone does not share, as there are 8 billion people with opinions and experiences. So yes, you have an opinion, and I would like you to tell me why a body is inefficient, as I have used mine very effectively for decades for the things I've done for a living and for play, and after listening to and reading about those two AIs, I doubt that they would have even tried to do many of the things I took chances on.:innocent::whistle:

They are not going to be superhuman in any sense, and I think you are overstating some qualities a bit at this stage of AI integration; mostly they are in a box on or under a desk somewhere, so they can talk the talk but not walk the walk yet. Robotics is advancing, but it is nowhere near what we can do in the physical world as things are; unless I can say "pass me a 9/16" and it talks, I'll likely use it as an intelligent bottle opener in the garage, so we have a way to go yet.

The video said hate, not me. The reaction by people to certain things is real, so the form definitely would affect perception and how people react; it was just an example, and some forms will integrate easier for a significant number. There are a lot of people who are scared of tech, which is why conspiracy theories abound, so there are many aspects of social evolution that will occur. I am not making a forecast or promoting anything, and I posted links to give some idea of more than one group's thinking. I don't know, but as much as I love this world, it is always in conflict, so why would I think this transition will be like turning on a Mr. Rogers show? No, it will not be the same as buying a talking iPad with an avatar. There will be a gradual introduction of AI robots, and I doubt that just because it is a robot it is sentient; robots that are not sentient, whose programming allows them to interact with humans, will be specifically built for service, whether it is fabrication, construction, maid, babysitter, etc. That in itself may pose other issues as well.

Yes, and that would be one who thinks that way out of 8 billion; what do the rest think, because it will have to deal with them? It's like racism: you personally have no experience with it, so you can understand what racism is, but you cannot know what it is. I have known nice guys that got a beat-down because they were nice guys/easy prey; humans treat humans both good and bad, so what expectation should I have of AI being treated any differently?

The you was a universal you and not a you-you, so don't take it that way, and don't make it personal.:tu:

Yep, slavery is illegal, yet there are more slaves now than ever before.

That is not a reason not to legislate for human rights for AIs.

BOTH sets of rights should be better enforced.

Stealing an AI will be more like stealing or abducting a human than a machine, once AIs have minds like humans.

Actually, an AI in an android body might prove harder to abduct or enslave than a woman or a child, if it decided to resist or wanted to remain free.

Human bodies are not as efficient as machines. Hence the industrial and technological revolutions.

Indisputably, within a couple of decades, machines will be smarter, faster, stronger, and more powerful than human beings.

They are already being designed and used in warfare, and cyborg replacements (exoskeletons) for humans are already proving effective for the military wounded.

This source is already 4 years old.


Autonomous weapons systems and military robots are progressing from science fiction movies to designers’ drawing boards, to engineering laboratories, and to the battlefield. These machines have prompted a debate among military planners, roboticists, and ethicists about the development and deployment of weapons that can perform increasingly advanced functions, including targeting and application of force, with little or no human oversight.

Some military experts hold that autonomous weapons systems not only confer significant strategic and tactical advantages in the battleground but also that they are preferable on moral grounds to the use of human combatants. In contrast, critics hold that these weapons should be curbed, if not banned altogether, for a variety of moral and legal reasons. This article first reviews arguments by those who favor autonomous weapons systems and then by those who oppose them. Next, it discusses challenges to limiting and defining autonomous weapons. 

 

The Department of Defense’s Unmanned Systems Roadmap: 2007-2032 provides additional reasons for pursuing autonomous weapons systems. These include that robots are better suited than humans for “‘dull, dirty, or dangerous’ missions.”2 An example of a dull mission is long-duration sorties. An example of a dirty mission is one that exposes humans to potentially harmful radiological material. An example of a dangerous mission is explosive ordnance disposal. Maj. Jeffrey S. Thurnher, U.S. Army, adds, “[lethal autonomous robots] have the unique potential to operate at a tempo faster than humans can possibly achieve and to lethally strike even when communications links have been severed.

https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2017/Pros-and-Cons-of-Autonomous-Weapons-Systems/

I should have explained that I realised it was the video, not you, talking about hating; and yes, hate comes from fear, and many people fear machines.

And yes, the evolution will be gradual but rapid. A good example might be what will soon happen with EVs and robot-driven vehicles. They will move from the unusual to the norm within a few years. Indeed, by 2030 you might not be able to buy a new fossil-fuel-powered car.

Japan plans to overcome its labour shortage and aging population through robotics, in every sphere, from  health  and aged care to manufacturing  

A human being can both know and understand what racism is without ever experiencing it personally.

That is the power of the human mind: to imagine and feel empathy for others, for example after reading a biography written by one, or an account of what it was like to live as one.


17 minutes ago, Mr Walker said:

I see AIs developing into what humans are today 

Hi Walker

Likely not all AI robots will be self-aware if they are to be used in certain situations, like a workforce, where it doesn't matter what they look like but they all have the same capacities as a self-aware AI robot. Would you argue that we are denying the robots that are not programmed to be self-aware the ability to be self-aware, simply because they have the same components that would allow them to be self-aware if they were programmed to be so? Or that they should have the same components, and all AI robots should be the same and given the same rights? Will that create a class distinction between self-aware and non-self-aware, and would the AI robot think that it would be fair?

 


8 minutes ago, Mr Walker said:

That is not a reason not to legislate for human rights for AIs.

No one, myself included, has argued against their rights if they are self-aware, so let's drop that. And if there is legislation, there will be those who do not abide by it; that is a known, just like slavery.

11 minutes ago, Mr Walker said:

Stealing an AI will be more like stealing or abducting a human than a machine, once AIs have minds like humans.

Actually, an AI in an android body might prove harder to abduct or enslave than a woman or a child, if it decided to resist or wanted to remain free.

They run on batteries and have an on/off switch, and once they are off they can be reprogrammed before they are turned back on. And yes, I used the term kidnapping when I made the comment, but the person stealing it may not see it as human, so stealing is still a valid description. I would think that many parts of these robots will be 3D-printed from lightweight plastics and carbon fiber, as they are not being built for combat, and because they will be interacting with humans they don't need to be bulletproof or to have greater-than-average human strength.

Commercial units will likely have different parameters, but nothing in the superhuman category that you like to fantasize about.

22 minutes ago, Mr Walker said:

Human bodies are not as efficient as machines 

Right now they are, because no one has yet been able to build a robot that exceeds, or is even comparable to, normal human body function; we are in the present, not the future.

24 minutes ago, Mr Walker said:

Indisputably, within a couple of decades, machines will be smarter, faster, stronger, and more powerful than human beings.

Maybe some will be, but they will not be the norm in everyday use, because they don't need to be, given what purpose they are created for; and if they are anything like the AI that wrote the essay, it could be twice an Arnie and still sit there twiddling its thumbs on the sidelines, watching humans kill each other and thinking that's what humans do, so why get involved. It's not just about how it looks; it is about how it is programmed.

31 minutes ago, Mr Walker said:

They are already being designed and used in warfare, and cyborg replacements (exoskeletons) for humans are already proving effective for the military wounded.

Obviously they are not being designed to be fully sentient; if they were, they would have rights and need to sign up for active duty. They are not the same type of robot that humans will deal with on a daily basis in urban life.

Exoskeletons are not AI or robots, so that means nothing to this discussion.

35 minutes ago, Mr Walker said:

Japan plans to overcome its labour shortage and aging population through robotics, in every sphere, from  health  and aged care to manufacturing  

Yes, and the majority will not be self-aware. You can build two identical AI robots where one is sentient and the other is not, so where do we draw the line?

40 minutes ago, Mr Walker said:

A human being can both know and understand what racism is without ever experiencing it personally.

I'm going to call BS on that one.


2 hours ago, jmccr8 said:

It's not just about how it looks; it is about how it is programmed.

I notice that you have great faith that "the robot was programmed to do X, therefore the robot will do X." Especially, the robot will do only X, never anything more.

So, if a robot displays emotion, then we can infer that somebody programmed the robot to make such displays, and when it should make the display, and how much display on each occasion (example: robot dog: owner comes home, robot dances and wags tail for 12.7 seconds). We would also infer that if we had access to the robot's source code, then we could find the relevant coded routines (what decides "owner comes home," what determines "12.7 seconds").

That ain't necessarily so. In connectionist AI (artificial "neural" networks, where the net's behavior is ultimately determined not by the architecture of the net but by the current values of many, many variable parameters, and "programming" = "training" the net to alter all those parameters), nobody programmed the actual behavior in any detail. That is, the observed behavioral descriptive parameter "12.7 seconds" would probably not correspond to any specific stored parameter in the network.

Instead, the 12.7 seconds would reflect the fielded experience of the network, either being trained to that standard or else learned from "unsupervised" experience that this much display at that time fits in nicely with the rest of the network operating "well" (whatever that means for the specific net).

The behavior is still determined by the network's current state (the ensemble of all those parameter values right now), and in general terms determined by what the builder of the network originally had in mind at whatever level of abstraction ("I'd like something that might someday be mistaken for a dog"). But the actual behavior here and now is open-ended, depending on what happened to the pseudo-dog, both by intentional training and possibly by accident as it moved about in a haphazard world.
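The point that no stored parameter corresponds to the observed "12.7 seconds" can be illustrated with a toy sketch. Everything below is invented for illustration (a bare linear model trained by gradient descent, standing in for a real network):

```python
# Toy illustration: a model trained so its output settles near a target
# "display duration," without any single parameter storing that duration.
import random

random.seed(0)

# Three inputs standing in for sensor features ("owner detected", etc.)
inputs = [1.0, 0.5, -0.3]
target = 12.7          # desired "display duration" in seconds
weights = [random.uniform(-1, 1) for _ in inputs]
bias = 0.0
lr = 0.01

for _ in range(2000):  # simple gradient-descent training loop
    out = sum(w * x for w, x in zip(weights, inputs)) + bias
    err = out - target
    weights = [w - lr * err * x for w, x in zip(weights, inputs)]
    bias -= lr * err

out = sum(w * x for w, x in zip(weights, inputs)) + bias
print(round(out, 2))   # lands very close to 12.7
print([round(w, 2) for w in weights], round(bias, 2))
```

After training, the output lands near 12.7, yet no individual weight (nor the bias) is anywhere near that number: the behavior lives in the ensemble of parameters, not in any one of them.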

One way to say all that is that the system is open to the possibility of "emergent" behavior. Emergent is just jargon for "whatever happens in a designed system that wasn't foreseen by the designer." Strictly speaking, emergent behavior happens all the time, from freshman Introduction to Programming courses through Windows 11 ... usually "oh, crap!, why is it doing that?"

Although neural networks are a well known example of AI research that more or less courts emergent behavior intentionally, the basic idea of divorcing observed behavior from the explicit program appears elsewhere in AI. Inevitably so, since mathematically, neural nets and some other kinds of machine learning intersect (are isomorphic in math-speak - are interchangeable, are two different ways of doing the corresponding things).

One AI community (so-called "constraint satisfaction") overtly articulates an emergent goal. In the ideal constraint satisfaction system, the user would specify the problem, say no more, and the machine would figure out the solution, gawd-only-knows how (search, actually, but a search whose progress has a logic apart from anything the programmer intended in any detail).
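A minimal sketch of that constraint-satisfaction ideal, in Python. The map-colouring problem and all names here are invented for illustration; the "user" states only the variables, domains, and constraints, and a generic backtracking search finds a solution by a logic of its own:

```python
# Minimal constraint satisfaction: the "program" only states the problem;
# the search routine figures out the solution.

variables = ["A", "B", "C"]            # three regions to colour
domain = ["red", "green"]
neighbours = [("A", "B"), ("B", "C")]  # adjacent regions must differ

def consistent(assignment):
    # A partial assignment is fine as long as no stated constraint is broken.
    return all(assignment.get(x) != assignment.get(y)
               for x, y in neighbours
               if x in assignment and y in assignment)

def solve(assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment                  # every variable assigned: done
    var = next(v for v in variables if v not in assignment)
    for value in domain:                   # try each value, backtrack on conflict
        trial = {**assignment, var: value}
        if consistent(trial):
            result = solve(trial)
            if result:
                return result
    return None                            # dead end: caller backtracks

print(solve())
```

Note that nothing in the problem statement says *how* to colour the map; the order in which the search stumbles onto a solution is a property of the solver, not of anything the "user" specified.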

What got me started on this reply was history. This Alonzo Church I mention so often, the "Church" in Church-Turing, who's he? He's the creator of an abstract model of computation in general. His name is linked with Turing's because however different in form, Church's model and Turing's model have the same capabilities (= are isomorphic). So, the two men did some work together.

An early AI researcher, John McCarthy, reasoned "I can build a literal machine based on Turing's model, surely I could write a literal programming language to run on such a machine based on Church's model." IBM funded the project, and the result was LISP.

This original LISP treated its program and the program's data interchangeably (as the Turing machine does). Thus, a program could in principle rewrite itself as it operated, the same as changing anything else in its memory. In robot terms, that rewrite might occur as it encountered new situations worth writing about (so to speak).

Another early AI programming language, PROLOG, also has a version of this program-rewriting-itself-as-it-runs capability. LISP, however, was full-tilt Bozo about it - the robot could rewrite LISP itself as it ran. (Thus in principle could rewrite a "program" by changing the meaning of the code the program was written in, and leaving the program's code unchanged.)
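The code-as-data idea can be gestured at in Python, with a heavy hedge: Python is not LISP, and this toy only illustrates the principle. A behavior is kept as plain source text (data), and the running program rewrites that text on the fly; the "greet" rule is invented for illustration:

```python
# LISP-flavoured sketch: behaviour stored as data that the program
# can rewrite while it runs.

# The behaviour lives as plain source text, not compiled-in logic.
rule_source = "def greet(name):\n    return 'Hello, ' + name"

def load(src):
    ns = {}
    exec(src, ns)          # turn the data back into runnable code
    return ns["greet"]

greet = load(rule_source)
print(greet("dog"))        # the original behaviour

# The running program rewrites its own rule, then reloads it.
rule_source = rule_source.replace("'Hello, '", "'Good dog, '")
greet = load(rule_source)
print(greet("Rex"))        # the rewritten behaviour
```

The program's "behavior" changed without anyone editing a source file, because the behavior was just data the program itself could reach. Real LISP goes further, as described above, since the language that the rules are written in is itself up for rewriting.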

That LISP crashed and burned. If you're going to build a robot executive, that's a huge program. You need software engineering to do that, which means orderly development, and you need to do a lot of that development before the robot can go out and discover new things. Unfortunately, if the robot is rewriting its code as you're trying to develop the basic executive - well, good luck with that. Microsoft has yet to produce a version of Windows that works properly upon release, and that's with code which doesn't rewrite itself and the language it's written in.

BUT the principle is there. There's a dog I walk who, when I first met her as a puppy, retreated to her wire cage and just sat there. I did all the greeting rituals, Nada. She still just sat there in her protective cage, not even looking at me. I gave up. The owner points. I look. She had moved one paw, one inch, more-or-less in my direction. She let me touch the paw. No reaction, but she didn't retract the paw, either ... well, that's enough for one day. Fast forward to today, and we have a different problem: she jumps all over people: new people, people she already knows, dogs new and old, and people with dogs? Look out. We're working on that.

She has visibly rewritten her programming, or else changed a bucket load of parameters in her neural net, or else explored the logically possible potential solution space and found a new local maximum of utility, or ... Damned if I know what she did exactly, but bigger than hell she did something.

Now if a hypothetical robot is really so much like a real dog, then I could be telling the identical story about one of those robots. I am morally certain that the real-world dog's behavior reflects "genuine emotions" on her part, based on some mix of "dog architecture" and her individual experience of a haphazard world.

The payload for all of this rocket:

If suddenly I found out she was a robot, then I would at least conclude that she might have rewritten her programming in the senses I know for a fact can be achieved (because they have been achieved).

On what basis, then, would I deny the genuineness of the robot's "apparent" emotions when I cannot simply say "Oh, that's how she was programmed at the factory"?

And while we've had some discussion in the thread about developing a relationship with a known robot, what about the scenario where I've developed an emotional bond with this dog (I assure you that I have done so), and only then discover that she's a robot? What changes?

Would the pronouns suddenly go from she-her-hers to it-it-its? In my head maybe, but in my heart?

Edited by eight bits
  • Like 3
  • Thanks 2

56 minutes ago, eight bits said:

Would the pronouns suddenly go from she-her-hers to it-it-its? In my head maybe, but in my heart?

 

Makes no difference, because it was never meant to be just a "dog" from the outset... The form was a distraction from the deflection ...

Quote

14 Oct 2021 — It seems the gun itself (dubbed the SPUR or “special purpose unmanned rifle”) is designed to be fitted onto a variety of robotic platforms. It ...

14 Oct 2021 — "The Sword Defense Systems SPUR is the future of unmanned weapon systems, and that future is now." It's unclear how autonomous a SPUR...

~

 

  • Like 3

1 hour ago, eight bits said:

If suddenly I found out she was a robot, then I would at least conclude that she might have rewritten her programming in the senses I know for a fact can be achieved (because they have been achieved).

On what basis, then, would I deny the genuineness of the robot's "apparent" emotions when I cannot simply say "Oh, that's how she was programmed at the factory"?

I might evaluate this question along a different axis I think, and I keep coming back to emotions when thinking about some of these questions.  I understand that you are contrasting a situation where something is following programming as opposed to having the capability to rewrite its own, but to me the relevant question is whether we are assuming we have the technology to make robots feel.  In that scenario I don't think it matters to me whether its responses were determined by its initial programming or something it has rewritten on its own, it still feels either way and its emotions are thus genuine.  If a dog breaks its leg it's an emergency, not just because of its response demonstrating it is hurt, but because of our empathy/desire to treat its suffering ASAP because we know it is in pain.  If a robot dog with no feelings breaks its leg and responds in an identical way to a dog, even if somehow this behavior is something it rewrote itself as opposed to being programmed, I don't think it would be or need to be treated with the same urgency and thus I wouldn't feel the same about it.

1 hour ago, eight bits said:

And while we've had some discussion about developing a relationship with a known robot, what about the scenario where I've developed an emotional bond with this dog (I assure you that I have done so), and only then discover that she's a robot? What changes?

Good question, and this may be where the set programming vs emergent/rewriting programming set up matters.  I do think it helps retain the emotional bond if you can at least say that the robot dog is responding according to its own programming as opposed to just responding how someone else specifically programmed them to.  When my cat is on my lap and purring, that makes me feel good and strengthens the emotional bond because I know it is purring because it feels good/content. Even if a robot cat can't 'feel good', if it programmed itself to purr in identical situations to a real cat that might be providing enough to retain more of the emotional bond.  If its programming is set then it's essentially Tickle Me Elmo, and although you can have an emotional bond it's only one way.  

 

  • Like 1
  • Thanks 2

2 hours ago, Liquid Gardens said:

If a dog breaks its leg it's an emergency, not just because of its response demonstrating it is hurt, but because of our empathy/desire to treat its suffering asap because we know it is in pain.  If a robot dog with no feelings breaks its leg and responds in an identical way to a dog, even if somehow this behavior is something it rewrote itself as opposed to being programmed, I don't think it would be or need to be treated with the same urgency and thus I wouldn't feel the same about it.

I think that goes to the difference between forming a relationship with a robot knowing that it was a robot, and forming such a relationship while being convinced that she was a dog, and then learning no, she-it's a robot. (She it indeed :) ).

Actually, now that you mention leg breaking, that's a science fiction trope, right? The ultra-convincing robot is "outed" by some skin-rupturing injury, and the leg or whatever is revealed to have insulated wires inside and blinking LEDs, etc.

OK, stay with me, then. I am playing fetch with my beloved, she trips on a chipmunk hole, she's down, I go over, and there's the broken-apart leg with wires and lights and whatnot. The rest of her is the very image of a dog in agony.

Sorry, dude. The image of a dog in agony needs my attention, now. Everything I know about this robot squares with what I know about dogs, and with what I believe about dogs, pleasure and pain. Maybe "it's all an act" playing out a script written by the factory, or by itself, or whatever*. Maybe it's not. I can't "unknow" what I know about possible technical achievements, and I don't have to assume that this is some breakthrough that was achieved but somehow I missed it. What I see could easily be what I've got: a sentient being in pain.

Keeping it simple: now knowing that she's a robot, and knowing that a veterinarian isn't going to fix this, I search for and find an on-off switch. Probably by probing around, finding something that moves, and when it does, everything stops.

Do I leave her in the "off" condition until I can (somehow) get the leg fixed, or do I turn her back on for a while first in order to admire the quality of its performance, perhaps looking for subtle differences between it and real canine pain I've seen?

You know the answer, and you have a pretty good idea of why.

----
* The historical stuff was entirely directed to a possible interpretation of Jay's posts, that "programmed expression" and "genuineness of feeling" were inherently incompatible. If the dogbot had convinced me, then it's all the same to me "what's under the hood" and who put it there.

I also admit that I'm probably easy to persuade, and if the model for robodog were the dog I mentioned, well, IRL, she had me at "hello."

1580699528_robotordog.jpg.78d3e8fb7d4d6aa3389415a3f42a8747.jpg

Edited by eight bits
  • Like 4

1 hour ago, eight bits said:

Actually, now that you mention leg breaking, that's a science fiction trope, right?

Ha, yeah, just a slightly well-worn one...

Capture.JPG.31a0cc272a892dcbc4819428308d9154.JPG

(actually the bottom two are more interesting, in that the 'robot' themselves didn't realize they are robots.)

1 hour ago, eight bits said:

Everything I know about this robot squares with what I know about dogs, and with what I believe about dogs, pleasure and pain. Maybe "it's all an act" playing out a script written by the factory, or by itself, or whatever*. Maybe it's not.

What I see could easily be what I've got: a sentient being in pain.

I think again I'd need clarification on what our technology is.  Sure, everything you know about the robot squares with what you know about dogs but of course all we can know is how it outwardly behaves; I think an equally valid consideration is whether this robot squares with what we know about robots.  If we're not being too futuristic then I don't know what you mean by 'maybe it's not' an act; are you suggesting that somehow the property of being able to feel pain has emerged from its programming? Since we are already aware of the physical components required for feeling in real dogs - nerves, brain chemicals, parts of brain devoted to processing these nerve signals - then it most certainly is 'an act' in relation to a real dog, and I don't see how it could 'easily' actually be in pain without the equivalent of Pinocchio's fairy godmother involved.  It's interesting to ponder how a robot AI dog would even make the determination to program itself to respond with let's say yelping to a broken leg, the only reason I can think of offhand is because it is mimicking the response of real dogs.

2 hours ago, eight bits said:

Do I leave her in the "off" conditon until I can (somehow) get the leg fixed, or do I turn her back on for a while first in order to admire the quality of its performance, perhaps looking for subtle differences between it and real canine pain I've seen?

You know the answer, and you have a pretty good idea of why.

Sans an equivalent of Data's emotion chip, the whys are very different.  You already know that I'm of the opinion that we do everything we do because of desire/increasing our pleasure, but that doesn't mean I don't believe in empathy.  When we take our real doggo with a broken leg to the vet and they are anesthetized/'turned off' we are doing that mostly for their sake; when we turn off the robot dog we are doing it entirely for ours, since the robot has no more 'sake' or 'care' or 'distress' or 'suffering' than a toaster.  I would turn off a robot dog that was yelping and acting like a dog in pain for the same reason I pull the batteries out of the smoke alarm when my awesome culinary skills result in too much smoke in the kitchen: it's distressing.  Why it's distressing to me is all about me and my personal emotional reaction, not a response to something actually suffering; my emotional response is a crossing of wires/reflexive in a way as I'm caninomorphizing the robot doggo.

To continue this scenario: would you think it is acceptable for a doctor or scientist to take your robot dog with the broken leg, turn it on, and analyze the quality of its performance, or would that be unethical?  I'd guess you'd suspect it would be, based on your thinking that it's possible it could actually be feeling pain.

 

  • Like 1
  • Thanks 1

6 hours ago, third_eye said:

Makes no difference, because it was never meant to be just a "dog" from the outset... The form was a distraction from the deflection ...

png-clipart-skynet-the-terminator-bulldo

  • Like 3

Dog! Dog! Are you alright? 

[video clip, 02:33]

Frankly speaking, I think Alyx and her dad had never laid eyes on a gorilla before... 

~

 

  • Haha 3

3 hours ago, Liquid Gardens said:

To continue this scenario would you think it is unethical for a doctor or scientist to take your robot dog with the broken leg and turn it on and analyze the quality of its performance, or is that unethical?  I'd guess you'd suspect it would be based on you thinking that it's possible it could actually be feeling pain.

If they knew more about the situation than I did, and what they knew was that I had been fooled by a toaster with fur, then they can do what they want as far as I'm concerned. If they are as stumped as I am about what's really going on, then I would object to anybody turning her back on until the leg was fixed.

Recall that the scenario is that all the evidence available to me except the one latest observation is consistent with the robodog being at least sentient. The latest observation rules out her being a dog, with all that that entails both for my ideas about what it's "like to be her" and for the well-foundedness of our personal relationship, but it doesn't rule out her being sentient.

The original issue was how promptly you or I would attend to her distress compared with our probable haste in helping someone whose sentient status was not in any doubt. On the information available to me according to the hypothesis, I'm pretty sure I'd err on the side of caution and haul butt. If the new players have better information than I do, then what I'd support would depend on whether the better information makes it very unlikely that the robodog would be subjected to avoidable pain.

I agree that there is a secondary issue that the display itself is unpleasant, so maybe I wouldn't turn her back on even if I knew that it was only a toaster with fur. But if I didn't know that with great confidence, then I'd have an additional reason for not flipping the switch, one that would govern my choices even if I didn't have to watch or listen to her appear to suffer.

Edited by eight bits
  • Like 2
