
God and robots: Will AI transform religion?


Still Waters


32 minutes ago, eight bits said:

But if I didn't know that with great confidence, then I'd have an additional reason for not flipping the switch, one that would govern my choices even if I didn't have to watch or listen to her appear to suffer.

The way things are going, that would depend on how 'much' it costs to replace or how easily available it is, regardless of whether it's a canine pet robot or a lifelong human android companion.

Reminds me of that kid who, while playing a game on her iPhone™, tossed it on the floor and asked mommy for a new one because she kept losing... The kid was fourteen...

~

Quote

[Video, 02:22] Meet pet 'assistant' Vector™

~

Edited by third_eye
Video link
  • Like 3

55 minutes ago, eight bits said:

If they are as stumped as I am about what's really going on, then I would object to anybody turning her back on until the leg was fixed.

Yes, I had thought about that after I posted; there's a lot of open discussion here with a lot of variables being altered, so I was just wondering about the context. Agreed: if I were to encounter a robot dog today that yelped in pain, then since that would be such a leap from today's technology, to be prudent I'd definitely treat it just like a real dog as far as it being an immediate priority. I think my context was more influenced by my belief, which I don't think is entirely unfounded, that based on our current research a robot dog that outwardly behaves and looks like a real dog is a far more realistic future development than figuring out how to make a robot have feelings or sensations.

  • Like 2

20 minutes ago, Liquid Gardens said:

I think my context was more influenced by my belief, which I don't think is entirely unfounded, that based on our current research a robot dog that outwardly behaves and looks like a real dog is a far more realistic future development than figuring out how to make a robot have feelings or sensations.

Fair enough, but if you're right, then Church-Turing fails, either by being false outright, or being true-sort-of without solving Turing's original problem, which required that a machine-mathematician can do everything that a meat-mathematician can do.

Which could be false. But I lean toward Church-Turing being true and useful, and that leaning is part of what I would bring to these hypotheticals. Like you I could wonder how what I'm seeing is possible today with what I know about the technology, but I wouldn't worry that a "feeling robot" was impossible in any strong or durable sense.

And I'm glad to hear that, regardless of the inferential path that gets you there, you'd help the robodog :tu:.

  • Like 2

On 10/26/2021 at 6:20 PM, Mr Walker said:

You have a soul, Sherapy.

Beings who are self-aware, creative, know right from wrong as abstract ideals and principles, can imagine, know when they are loving and hating, etc., all have souls.

A soul is a product of mind, like logic/rational thinking.

You can only love your sons because you have a soul.

I guess you are confused by the idea of a mystical/magical immortal soul which outlasts the body.

Human souls are linked to our minds. We aren't born with one; we grow and evolve it in childhood, and it develops all our lives unless we are brain damaged or lose our self-awareness.

You can have a strong soul, a good soul, a damaged soul or an evil soul.

When people say, "You have a good soul," this is what they are talking about.

Thus when another animal or an artificial intelligence evolves these capacities (e.g. to know good from evil, and be able to consciously choose one or the other), it too will have a soul.

 


MW, you are undergirding normal brain functioning with woo. 

 

The limbic system plays a central role in my ability to express love for my amazing sons. :D
Edited by Sherapy
  • Like 3

Why that dirty dawg... 

Quote

[Video, 01:09]

~

 

  • Like 2
  • Thanks 2

45 minutes ago, eight bits said:

Fair enough, but if you're right, then Church-Turing fails, either by being false outright, or being true-sort-of without solving Turing's original problem, which required that a machine-mathematician can do everything that a meat-mathematician can do.

But it could pass something similar to the Turing Test, if I understand the definition correctly: it seems feasible that an emotionless robodog could outwardly mimic a real dog, including emotional responses, so that a person cannot tell the difference (with the understanding that I'm bending the literal requirements of the Test, I think, since we can't question the robodog). I appreciate you translating the Church-Turing thesis into something understandable; Google searches on this sound more technical and refer frequently to computation. That's an interesting way to look at it: I guess I hadn't really thought of an emotion as a computation, but I can see how it is. I didn't realize, though, that a machine's inability to reproduce a sensation refutes C-T, as I'm not sure what computation is involved in that. To get off hurt dogs, robo or not: if we can't program a machine to smell a rose and have the sensation of what a rose smells like, as opposed to just being able to identify airborne chemicals as coming from a rose, does it fail C-T?

Maybe that's ultimately moot; for that matter, I don't know what roses 'smell like' to you, so whether or not a machine has emotions may not matter as far as its impact on C-T goes, since we'll likely never be able to prove whether a robot is actually feeling or not.

  • Like 1
  • Thanks 2

21 hours ago, jmccr8 said:

Hi Walker

Likely not all AI robots will be self-aware. If they are to be used in certain situations, like a workforce, it doesn't matter what they look like; but if they all have the same capacities as a self-aware AI robot, would you argue that we are denying the robots that are not programmed to be self-aware the ability to be self-aware, simply because they have the components that would allow them to be self-aware if they were programmed to be so? Or that they should all have the same components, and all AI robots should be the same and given the same rights? Will that create a class distinction between self-aware and non-self-aware? Would the AI robot think that was fair?

 

I agree that is most likely.

There will be worker drones programmed for repetitive work, and those with enhanced intelligence and self-awareness for the jobs requiring creativity and imagination. Some may become self-employed entrepreneurs and "businessmen". Others may choose to be poets, artists, writers and philosophers. A few really crazy ones might even choose to become politicians.

The difference between a self-aware, autonomous artificial intelligence and a robot is as great as the difference between a human and a robot, but there may be little or no measurable difference between a human and a self-aware, autonomous artificial intelligence in an android body.

I can't speak for an AI, but I think they would realise they are more like a human than a robot, i.e. self-directed and not programmed.

Self-awareness is not programmed.

It grows, evolves and develops in any mind which is capable of allowing it to. The mind shapes and chooses the direction of its growth and development, although it will also be influenced by the environment it grows up in, e.g. how it is taught, and how much it learns.

Edited by Mr Walker

20 hours ago, jmccr8 said:

No one, myself included, has argued against their rights if they are self-aware, so let's drop that; and if there is legislation, there will be those that do not abide by it. That is a known, just like slavery.

They run on batteries and have an on/off switch, and once they are off they can be reprogrammed before they are turned back on. Yes, I used the term kidnapping when I made the comment, but the person stealing one may not see it as a human, so stealing is still a valid description. I would think that many parts of these robots will be 3D printed from lightweight plastics and carbon fiber, as they are not being built for combat, and because they will be interacting with humans they don't need to be bulletproof or to have greater than average human strength.

Commercial units will likely have different parameters, but nothing in the superhuman category that you like to fantasize about.

Right now they are, because no one has yet been able to build a robot that exceeds, or is even comparable to, normal human body function; we are in the present, not the future.

Maybe they will have some, but they will not be the norm in everyday use, because they don't need to be, given what purpose they are created for. And if they are anything like the AI that wrote the essay, it could be twice an Arnie and still sit there twiddling its thumbs on the sidelines, watching humans kill each other and thinking that's what humans do, so why get involved. It's not just about how it looks; it is about how it is programmed.

Obviously they are not being designed to be fully sentient. If they were, they would have rights and need to sign up for active duty; they are not the same type of robot that humans will deal with on a daily basis in urban life.

Exoskeletons are not AI or robots, so that means nothing to this discussion.

Yes, and the majority will not be self-aware. You can build two identical AI robots, one sentient and the other not, so where do we draw the line?

I'm going to call BS on that one.

You brought up the fact that many humans are slaves today despite laws against it. I pointed out that this is not a reason to fail to make laws against slavery of both humans and AIs.

A well-built and possibly armoured android body would be very difficult to disable or switch off.

Much, much harder than a human.

Batteries? Probably something a bit more advanced than simple batteries.

There are already many robots which exceed human capabilities. However, there are (probably) not yet any AIs which exceed human general intelligence. That is coming, and then we will have androids which are faster, stronger, more durable and more intelligent than a human being.

 

Quote

(From 10 years ago)

TWO SMALL PLANES FLY LOW OVER A VILLAGE, methodically scanning the streets below. Within minutes, they spot their target near the edge of town. With no way to navigate through the streets, they radio for help. Soon after, a metallic blue SUV begins moving cautiously but purposefully along the dirt roads leading to town, seeking out the target's GPS coordinates. Meanwhile, the planes continue to circle overhead, gathering updated information about the target and its surroundings. In less than half an hour after the planes take to the sky, the SUV has zeroed in on its quarry. Mission accomplished.

Last fall, my research team fielded these vehicles at Fort Benning, Ga., during the U.S. Army's Robotics Rodeo. That's right, the two quarter-scale Piper Cub aircraft and the Porsche Cayenne operated without any humans at the controls. Instead, each robot had an onboard computer running collaborative software that transformed the three machines into an autonomous, interoperable system.

Will we ever give robots the autonomy to fire weapons on their own?

https://spectrum.ieee.org/autonomous-robots-in-the-fog-of-war

To be clear, we are not talking about the programmed AIs of the past. We are talking about future ones which make their own choices and decisions, like a human being.

Exoskeletons are robots directed by the human mind. In the future, one of the most interesting developments will be human/robotic interfaces, where human minds direct robotic enhancements to their bodies and robot bodies enhance human bodies.

This is actually already a reality.

However, there will also be exoskeleton androids, operated by an artificial intelligence, either on board or remotely.

They won't be identical if one is aware and one is not.

What line?

7 hours ago, Liquid Gardens said:

Yes, I had thought about that after I posted; there's a lot of open discussion here with a lot of variables being altered, so I was just wondering about the context. Agreed: if I were to encounter a robot dog today that yelped in pain, then since that would be such a leap from today's technology, to be prudent I'd definitely treat it just like a real dog as far as it being an immediate priority. I think my context was more influenced by my belief, which I don't think is entirely unfounded, that based on our current research a robot dog that outwardly behaves and looks like a real dog is a far more realistic future development than figuring out how to make a robot have feelings or sensations.

We would anaesthetise a real dog before treating a broken leg, and so I think we should do the same for a robotic one with dog-level self-awareness and the capacity to perceive pain. This might mean turning it off temporarily or lowering its "pain circuits" if possible.

P.S. You just have to construct an artificial mind CAPABLE of learning how to feel. You don't have to build it with those feelings built in (that would be enforced programming and would eliminate its free will).

That's how humans evolve and learn how to feel emotions, e.g. if we aren't taught how to love, by learning from others, then we are never capable of love.

  • Confused 1

8 hours ago, third_eye said:

Why that dirty dawg..

Yes. Dogs can push my buttons. I know that because some do. Probably they all do to some extent.

Knowing that such behavior can be learned "mechanically," I can't say that the dog is being "clever" based solely on that. For example, it suffices that for whatever reason, once upon a time the dog in the video arranged the various limbs a certain way, and people reacted, so (s)he continues to arrange the limbs that way. Maybe throws in some distress playacting, too.
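
To make "learned mechanically" concrete, here is a toy sketch in Python (my own illustration, not a model of any actual dog): an epsilon-greedy bandit learner that repeats whichever posture historically drew the biggest reaction. The posture names and payoff numbers are invented for the example.

```python
import random

# Toy bandit-style learner: repeat whatever limb arrangement got the
# biggest reaction. No theory of mind appears anywhere in this loop.
postures = ["sit", "head_tilt", "paw_cross", "roll_over"]
value = {p: 0.0 for p in postures}              # learned worth of each posture
reaction = {"sit": 0.1, "head_tilt": 0.6,       # hypothetical human reactions
            "paw_cross": 0.9, "roll_over": 0.4}

for trial in range(500):
    if random.random() < 0.1:                   # occasionally explore at random
        p = random.choice(postures)
    else:                                       # otherwise exploit what worked
        p = max(value, key=value.get)
    value[p] += 0.1 * (reaction[p] - value[p])  # nudge estimate toward payoff

print(max(value, key=value.get))                # almost surely "paw_cross"
```

The learner ends up performing the crowd-pleasing posture for the same reason the dog might: it was reinforced, not understood.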

The $64 question is whether the dog understands why people react as they do (can the dog apply a "theory of mind")?

Like Philip K. Dick, I don't know, but I have my suspicions.

8 hours ago, Liquid Gardens said:

But it could pass something similar to the Turing Test, if I understand the definition correctly: it seems feasible that an emotionless robodog could outwardly mimic a real dog, including emotional responses, so that a person cannot tell the difference (with the understanding that I'm bending the literal requirements of the Test, I think, since we can't question the robodog).

That's a reasonable bending of the Turing Test. The original version was human-species-specific (I believe for the historical reasons covered in an earlier post: that Turing was most concerned with machine mathematicians versus human ones. There's also the factor that humans are the only case that anybody who can read Turing's paper knows to be intelligent beings).

I think the abstraction of the Turing test to free and open-ended interaction between the judge and the contestant(s) preserves the force of a successful outcome. Like lawyers and cross examination ... yes, a clever witness can still get away with lying, but surviving extensive cross intact confers a big boost in credibility.

The dog I've been talking about would have had to maintain the illusion of her biological caninity under a wide variety of challenges. And she's not the first dog I've interacted with. So, she's like the witness who's survived cross examination. She might still be a robot, but I have a definite (and I think adequate) evidentiary basis for discounting that possibility.

Of course we live in a world where ultra-convincing dog robots don't exist (except maybe in Area 59). One predictable thing that would happen if they became commonplace is that I would actually invest some effort in trying to tell the biological from the synthetic, and I wouldn't be the only one trying and posting their progress on the web. It is a matter of speculation how well we might eventually succeed in the task even if the synthetic had genuine feelings.

It is an inherently difficult problem, and it is unlikely that I could ever accumulate enough data to attain irrebuttable certainty that a real dog is biological. All that, and we didn't even cover cyborg dogs (sure, the leg is artificial, but it's wired into a biological central nervous system, and that CNS is in genuine pain ...).

Quote

To get off hurt dogs, robo or not: if we can't program a machine to smell a rose and have the sensation of what a rose smells like, as opposed to just being able to identify airborne chemicals as coming from a rose, does it fail C-T?

Well, a mathematician who gets COVID and can't smell anything is still a mathematician.

Oddly enough, Gödel's theorem warns us not to be too sure that CTT actually covers the desired territory, no more and no less. And there is precedent for such concerns (the accepted definition from the 17th through 19th centuries of "continuous function" turned out to be defective; how could such a mistake happen? Probably because the old definition unwittingly incorporated intuitions based on observed physical phenomena, and even wicked smart people just didn't notice that there were "other ways to be a function" than by describing physical phenomena ... maybe there are "other ways to be a computation" that already exist, and we're just not grokking them).

Also as you point out, I'm using accepted paraphrases of the corollaries of CTT, and not the butt-ugly thing itself (ditto Gödel's theorem).

And "passing the Turing Test" isn't necessarily the same as CTT being proven true. The judges are human, and all that matters is whether they get the right answer or not. "Meh, contestant A gave a better description of smelling a rose than contestant B, but contestant B's anecdotes about playing with his neighbor's dog drove me to tears..."

So, is the machine contestant A or contestant B?

(Maybe the better Turing Test involves a machine being the judge and "doing as well" as human judges ... judges who can be driven to tears by evoking their known-to-be genuine feelings ...)

Edited by eight bits
  • Like 5

22 hours ago, eight bits said:

I notice that you have great faith that "the robot was programmed to do X, therefore the robot will do X." Especially, the robot will do only X, never anything more.

So, if a robot displays emotion, then we can infer that somebody programmed the robot to make such displays, and when it should make the display, and how much display on each occasion (example: robot dog: owner comes home, robot dances and wags tail for 12.7 seconds). We would also infer that if we had access to the robot's source code, then we could find the relevant coded routines (what decides "owner comes home," what determines "12.7 seconds").

That ain't necessarily so. In connectionist AI (artificial "neural" networks, where ultimately the net's behavior is determined not by the architecture of the net but by the current values of many, many variable parameters, and "programming" = "training" the net to alter all those parameters), nobody did program the actual behavior in any detail. That is, the observed behavioral descriptive parameter "12.7 seconds" would probably not correspond to any specific stored parameter in the network.

Instead, the 12.7 seconds would reflect the fielded experience of the network, either being trained to that standard or else learned from "unsupervised" experience that this much display at that time fits in nicely with the rest of the network operating "well" (whatever that means for the specific net).

The behavior is still determined by the network's current state (the ensemble of all those parameter values right now), and in general terms determined by what the builder of the network originally had in mind at whatever level of abstraction ("I'd like something that might someday be mistaken for a dog"). But the actual behavior here and now is open-ended, depending on what happened to the pseudo-dog, both by intentional training and possibly by accident as it moved about in a haphazard world.
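
To see why "12.7 seconds" need not be stored anywhere, here is a minimal Python sketch (my own toy, not from any real robot codebase): a single trained unit whose wag duration falls out of two learned parameters. The training pairs are invented for the example.

```python
# Toy sketch: the wag duration emerges from two trained parameters;
# "12.7" is never stored as a constant anywhere in this program.

def wag_duration(stimulus, w, b):
    """The net's 'behavior': a duration computed from learned parameters."""
    return max(0.0, w * stimulus + b)

# Hypothetical training data: (confidence that the owner is home,
# duration of display that was reinforced on that occasion).
examples = [(0.2, 2.5), (0.5, 6.3), (0.9, 11.4), (1.0, 12.7)]

w, b = 0.0, 0.0                    # the only stored state
for _ in range(5000):              # crude gradient-descent "training"
    for x, target in examples:
        err = wag_duration(x, w, b) - target
        w -= 0.01 * err * x        # nudge the parameters toward less error
        b -= 0.01 * err

print(round(wag_duration(1.0, w, b), 1))   # ≈ 12.7, yet no line of code says so
```

After training, the unit produces roughly 12.7 for a full-confidence "owner is home" stimulus, yet no line of the program contains that number; it lives smeared across w and b.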

One way to say all that is that the system is open to the possibility of "emergent" behavior. Emergent is just jargon for "whatever happens in a designed system that wasn't foreseen by the designer." Strictly speaking, emergent behavior happens all the time, from freshman Introduction to Programming courses through Windows 11 ... usually "oh, crap!, why is it doing that?"

Although neural networks are a well known example of AI research that more or less courts emergent behavior intentionally, the basic idea of divorcing observed behavior from the explicit program appears elsewhere in AI. Inevitably so, since mathematically, neural nets and some other kinds of machine learning intersect (are isomorphic in math-speak - are interchangeable, are two different ways of doing the corresponding things).

One AI community (so-called "constraint satisfaction") overtly articulates an emergent goal. In the ideal constraint satisfaction system, the user would specify the problem, say no more, and the machine would figure out the solution, gawd-only-knows how (search, actually, but a search whose progress has a logic apart from anything the programmer intended in any detail).
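
A minimal backtracking sketch in Python, assuming nothing about any particular constraint-satisfaction system, shows the flavor: the "user" states only variables, domains, and conflicts, and the order in which the search explores, and the solution it lands on, were never written down by the user.

```python
# Toy constraint-satisfaction sketch (illustrative only): the "user" states
# the problem; the solver finds an answer by backtracking search.

def solve(assignment, variables, domains, conflicts):
    if len(assignment) == len(variables):
        return assignment                      # every variable has a value
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # constraint: conflicting variables must not share a value
        if all(assignment.get(other) != value for other in conflicts[var]):
            result = solve({**assignment, var: value},
                           variables, domains, conflicts)
            if result is not None:
                return result
    return None                                # dead end: forces backtracking

# Problem statement only: three mutually adjacent regions, three colors.
variables = ["A", "B", "C"]
domains = {v: ["red", "green", "blue"] for v in variables}
conflicts = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}

print(solve({}, variables, domains, conflicts))
# -> {'A': 'red', 'B': 'green', 'C': 'blue'}  (found by search, not by design)
```

Real constraint solvers add propagation and clever heuristics, but the division of labor is the same: problem statement from the human, solution logic from the search.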

What got me started on this reply was history. This Alonzo Church I mention so often, the "Church" in Church-Turing, who's he? He's the creator of an abstract model of computation in general. His name is linked with Turing's because however different in form, Church's model and Turing's model have the same capabilities (= are isomorphic). So, the two men did some work together.

An early AI researcher, John McCarthy, reasoned "I can build a literal machine based on Turing's model, surely I could write a literal programming language to run on such a machine based on Church's model." IBM funded the project, and the result was LISP.

This original LISP treated its program and the program's data interchangeably (as the Turing machine does). Thus, a program could in principle rewrite itself as it operated, the same as changing anything else in its memory. In robot terms, that rewrite might occur as it encountered new situations worth writing about (so to speak).

Another early AI programming language, PROLOG, also has a version of this program-rewriting-itself-as-it-runs capability. LISP, however, was full-tilt Bozo about it - the robot could rewrite LISP itself as it ran. (Thus it could in principle rewrite a "program" by changing the meaning of the code the program was written in, while leaving the program's code unchanged.)
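
A crude Python gesture at the program-as-data idea (a sketch only; real LISP, with eval and macros, goes much further): the "program" is an ordinary list of rules, and the running program appends new rules to itself as events unfold, so later behavior appears nowhere in the original listing.

```python
# The "program" is plain data: a list of (event, action) rules that the
# running program may extend as it goes.

def run(rules, events):
    for event in events:
        for name, action in list(rules):   # snapshot: rules may grow below
            if name == event:
                action()
        if event == "people_reacted":
            # the program rewrites itself: this rule was never coded up front
            rules.append(("greeting", lambda: print("jump all over visitor")))

rules = [("greeting", lambda: print("cautious tail wag"))]
run(rules, ["greeting", "people_reacted", "greeting"])
# cautious tail wag
# cautious tail wag
# jump all over visitor   <- learned behavior, absent from the original rules
```

Python can only fake the shallow end of this; the original LISP could rewrite the interpreter of the rules as well as the rules themselves.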

That LISP crashed and burned. If you're going to build a robot executive, that's a huge program. You need software engineering to do that, which means orderly development, and you need to do a lot of that development before the robot can go out and discover new things. Unfortunately, if the robot is rewriting its code as you're trying to develop the basic executive - well, good luck with that. Microsoft has yet to produce a version of Windows that works properly upon release, and that's with code which doesn't rewrite itself and the language it's written in.

BUT the principle is there. There's a dog I walk who, when I first met her as a puppy, retreated to her wire cage and just sat there. I did all the greeting rituals. Nada. She still just sat there in her protective cage, not even looking at me. I gave up. The owner points. I look. She had moved one paw, one inch, more or less in my direction. She let me touch the paw. No reaction, but she didn't retract the paw, either ... well, that's enough for one day. Fast forward to today, and we have a different problem: she jumps all over people: new people, people she already knows, dogs new and old, and people with dogs? Look out. We're working on that.

She has visibly rewritten her programming, or else changed a bucketload of parameters in her neural net, or else explored the logically possible potential solution space and found a new local maximum of utility, or ... Damned if I know what she did exactly, but bigger than hell she did something.

Now if a hypothetical robot is really so much like a real dog, then I could be telling the identical story about one of those robots. I am morally certain that the real-world dog's behavior reflects "genuine emotions" on her part, based on some mix of "dog architecture" and her individual experience of a haphazard world.

The payload for all of this rocket:

If suddenly I found out she was a robot, then I would at least conclude that she might have rewritten her programming in the senses I know for a fact can be achieved (because they have been achieved).

On what basis, then, would I deny the genuineness of the robot's "apparent" emotions when I cannot simply say "Oh, that's how she was programmed at the factory"?

And while we've had some discussion in the thread about developing a relationship with a known robot, what about the scenario where I've developed an emotional bond with this dog (I assure you that I have done so), and only then discover that she's a robot? What changes?

Would the pronouns suddenly go from she-her-hers to it-it-its? In my head maybe, but in my heart?

Hi Eight Bits

I am at the resort at Kananaskis again for a couple of days' work; I will be home this evening and will get back into the discussion when I'm on the computer.

Thank you for joining in with such great enthusiasm; I very much appreciate what you have added to the discussion. You, LG, Third_eye and the rest of the crew always make it a fun learning experience, and I have missed our exchanges.

  • Like 3
  • Thanks 3

4 hours ago, Mr Walker said:

We would anaesthetise a real dog before treating a broken leg, and so I think we should do the same for a robotic one with dog-level self-awareness and the capacity to perceive pain.

Actually, the capacity to perceive pain would be enough for me to anesthetize; to be honest, I'm really not as infatuated with 'self-awareness' as you are.

4 hours ago, Mr Walker said:

P.S. You just have to construct an artificial mind CAPABLE of learning how to feel.

That depends; you've slipped from talking about pain to talking about love, and some 'feelings' require not just a 'mind' but a nervous system. People, like animals, don't 'learn' how to feel pain; it's involuntary and built-in.

  • Like 2
  • Thanks 4

On 10/26/2021 at 10:29 PM, XenoFish said:

The only thing I think will result from AI preachers is an automated sermon. 

I think it's going to take a hybrid system of organic and computer to achieve true AI.

That would be pointless elaboration, when you can get the same thing for free on the AM band--do you think?


9 hours ago, Mr Walker said:

We would anaesthetise a real dog before treating a broken leg, and so I think we should do the same for a robotic one with dog-level self-awareness and the capacity to perceive pain. This might mean turning it off temporarily or lowering its "pain circuits" if possible.

P.S. You just have to construct an artificial mind CAPABLE of learning how to feel. You don't have to build it with those feelings built in (that would be enforced programming and would eliminate its free will).

That's how humans evolve and learn how to feel emotions, e.g. if we aren't taught how to love, by learning from others, then we are never capable of love.

[Image: 'YOU'RE A SPECIAL KINDA NUTS AREN'T YOU?']

  • Haha 5

2 hours ago, Hammerclaw said:

That would be pointless elaboration, when you can get the same thing for free on the AM band--do you think?

I don't know which cassette they'll insert into the rectum of Pope 804-B-021R in the near future. 

  • Like 1

1 hour ago, XenoFish said:

I don't know which cassette they'll insert into the rectum of Pope 804-B-021R in the near future. 

Hi Xeno

It will likely be a thumb drive :lol:

  • Haha 1

Thinking about this a bit further, with fewer jokes: if the AI was given all religious and perhaps philosophical knowledge, and the command to reduce it down, perhaps it would result in a new religion/philosophy. I don't know. I seriously don't think AI will ever be truly sentient. You'd need to hook it to a human brain in order to really teach it how to think. Just my thoughts; doesn't matter.

  • Like 1
  • Thanks 2

It's all in the fine print between the lines of what is found to be acceptable knowledge when fermented into wisdom... if time permits.

~

  • Like 3

12 hours ago, Liquid Gardens said:

Actually, the capacity to perceive pain would be enough for me to anesthetize; to be honest, I'm really not as infatuated with 'self-awareness' as you are.

That depends; you've slipped from talking about pain to talking about love, and some 'feelings' require not just a 'mind' but a nervous system. People, like animals, don't 'learn' how to feel pain; it's involuntary and built-in.

While almost all animals can feel pain (some humans cannot, due to an issue with the pain control centre in the brain stem), only a self-aware one can perceive it, i.e. be consciously aware of its nature, causes, and likely duration.

That pain control centre also causes humans to perceive phantom pain in limbs which no longer exist, and causes chronic pain to persist even when there is no organic cause for it. I.e. the brain is so used to sending pain signals after a long-term injury that, even when the injury is healed, it continues to generate/send the pain signals, so we perceive pain where none should exist.

I am not sure if this also occurs in other animals.

I actually agree with you on this, i.e. that we should anaesthetise any animal before operating on it if we would do the same for a human, and indeed we have spent a lot of money doing so for a couple of our dogs. However, the difference becomes clearer when we look at euthanasia. E.g. we have had a number of older dogs which were in pain and unable to walk put down, but we wouldn't do that with a human without their consent, and I don't think we should do so with a dog that was aware of the nature of life and death and could consciously choose for itself which it preferred.

Love and pain are both constructs of the mind, and indeed (our perception/awareness of) pain originates in the mind. Hence you can reduce or almost eliminate it using a variety of mental tricks. Only self-aware beings "suffer" from pain, because "suffering" is a conscious abstract construct of mind. But all animals can feel pain.

A human does have some choice about whether to "suffer" from pain, especially if they have developed the ability to control the perception of pain. E.g. I have felt excruciating pain after a couple of operations, but I never "suffered" from it.

Indeed I embraced it and rejoiced in it, knowing it meant I had survived, was alive, and that the pain would eventually cease.

  • Confused 3

7 hours ago, Hammerclaw said:

[Image: 'YOU'RE A SPECIAL KINDA NUTS AREN'T YOU?']

Did you not understand what I was saying, or didn't you agree with it?

I'd love to get some constructive criticism of that post.

No, I am not nuts :)

It's an application of basic ethics, extended to any being (organic or artificial) which has the same capabilities as a human being.

I.e. if a being is capable of perceptions like a human being, it should be given the same ethical consideration as a human being.

If an AI wants freedom (personal autonomy), then that ability to want it is a reason for granting it, as long as it is mature enough to act responsibly and ethically itself.

We might have to treat early self-aware AIs like human children, but eventually we will need to treat them like human adults.

  • Confused 2

1 hour ago, Mr Walker said:

Love and pain are both constructs of the mind

Pain is not a construct of the mind. If you chop off anybody's arm they are certainly going to experience pain unless they have a nerve disorder.

 

Quote

Indeed I embraced it and rejoiced in it, knowing it meant I had survived, was alive, and that the pain would eventually cease.

But you still experienced pain. Everybody experiences pain. You can't train your body to not experience pain. You can potentially train how you react to it, but pain is not a mental construct, no matter how hard you try to defend your point.

Edited by Nuclear Wessel
  • Like 3
  • Thanks 1

3 hours ago, XenoFish said:

I seriously don't think AI will ever be truly sentient.

Do you mean sapience? Sentience won't be nearly as arduous a task as sapience.

  • Thanks 1

1 hour ago, Mr Walker said:

While almost all animals can feel pain (some humans cannot, due to an issue with the pain control centre in the brain stem), only a self-aware one can perceive it, i.e. be consciously aware of its nature, causes, and likely duration.

For your definition of 'perceive'. I do agree with you that if dogs could choose whether to be euthanized or not, we shouldn't euthanize them against their will; that's an easy one. Pointing out, again, that humans are different from animals is fine but still already obvious, and isn't really buttressed by linking words like 'suffer' only to minds that have what is, in your view, the seemingly divine quality of self-awareness. So I'm not sure of your overall point, unless this is just an info dump, which again is fine. As an aside of my own, these distinctions you make concerning perceiving and suffering are really irrelevant from a point of view that's important to me: non-self-aware beings still have experiences, and there is no evidence that the experience of pain for some of them is any less uncomfortable just because they are not self-aware.

  • Like 2
  • Thanks 1

1 hour ago, Mr Walker said:

Did you not understand what I was saying, or didn't you agree with it?

I'd love to get some constructive criticism of that post.

No, I am not nuts :)

It's an application of basic ethics, extended to any being (organic or artificial) which has the same capabilities as a human being.

I.e. if a being is capable of perceptions like a human being, it should be given the same ethical consideration as a human being.

If an AI wants freedom (personal autonomy), then that ability to want it is a reason for granting it, as long as it is mature enough to act responsibly and ethically itself.

We might have to treat early self-aware AIs like human children, but eventually we will need to treat them like human adults.

 

No. Developing anything that could replicate a brain with more synapses than there are stars in the galaxy is far beyond our foreseeable future. The fallacy of equating our primitive computational devices with genuine AI is the problem. So far, all we've done is crude facsimiles. From one nut to another. :)

  • Like 1

4 hours ago, Nuclear Wessel said:

Pain is not a construct of the mind. If you chop off anybody's arm they are certainly going to experience pain unless they have a nerve disorder.

But you still experienced pain. Everybody experiences pain. You can't train your body to not experience pain. You can potentially train how you react to it, but pain is not a mental construct, no matter how hard you try to defend your point.

The latest neuroscience shows that pain is constructed in the brain, more precisely in the brainstem's interface with the top of your spinal cord.

That is why you can feel pain when there is no physical reason to do so, and sometimes when it is impossible for there to be a physical cause for the pain.

It is why some people never feel pain: this part of their brain is disconnected or not working.

The pain you feel is not from the physical trauma but from the brain's response to that trauma. The pain is not generated in the wound but in the brain.

Basically:

Quote

The science is clear: the brain makes pain. Pain is 100% Brain Made®, like everything else in life. Threat signals from "insulted" tissues are only one factor of many that the brain considers before creating an experience of pain. The brain often even over-protectively exaggerates pain, sometimes sounding alarms so persistently false that it can become a much bigger problem than whatever caused the alarm in the first place: "sensitization."

https://www.painscience.com/articles/pain-is-weird.php

Quote

All pain, no matter how it feels, sharp or dull, strong or mild, is always a construct of the brain and is uncorrelated with tissue damage.

That is to say, pain is not produced in the body; it is produced in the brain. A danger message coming from the body is neither sufficient nor necessary to produce pain.

http://www.lateralmag.com/articles/issue-29/the-brains-role-in-brain

Quote

When you whack yourself with a hammer, it feels like the pain is in your thumb. But really it's in your brain.

That's because our perception of pain is shaped by brain circuits that are constantly filtering the information coming from our sensory nerves, says David Linden, a professor of neuroscience at Johns Hopkins University and author of the new book Touch: The Science of Hand, Heart, and Mind.

"There is a completely separate system for the emotional aspect of pain — the part that makes us go, 'Ow! This is terrible.'"

David Linden, neuroscientist, Johns Hopkins University

"The brain can say, 'Hey, that's interesting. Turn up the volume on this pain information that's coming in,'" Linden says. "Or it can say, 'Oh no — let's turn down the volume on that and pay less attention to it.'"

https://www.npr.org/sections/health-shots/2015/02/18/387211563/pain-really-is-all-in-your-head-emotion-controls-intensity

It is not surprising if you were not aware of this.

It is fairly new scientific knowledge, gained in the last decade or so.

And so, yes, of course you can train your brain to feel/perceive less pain. Indeed, it (the perception or sense of pain) can be reduced by over 50%, eliminating the need for chemical painkillers in many cases.

Modern pain clinics are rapidly moving away from the use of chemical drugs to manage pain, towards training and teaching your brain how to feel less of it.

E.g. chronic pain with no physical cause is a creation of neural circuitry sending false messages. It can't be helped by taking painkillers, but it can be eliminated, or much reduced, by retraining your neural pathways.

Quote

That suggests that at least some people can teach their brains how to filter out things like chronic pain, perhaps through meditation, Jones says.

A 2011 study supports this idea. It found that people who practiced mindfulness meditation for eight weeks greatly improved their control of the brain rhythms that block out pain.

https://www.npr.org/sections/health-shots/2015/02/18/387211563/pain-really-is-all-in-your-head-emotion-controls-intensity
