Alan McDougall Posted July 31, 2016 Author #26
4 hours ago, aquatus1 said:
Present! But then, I don't see any reason to believe an AI would delete man. That's human-style destructive thinking. A computer may well be completely ambivalent about humans, or even affectionately tolerant of them, like children with a curmudgeonly grandpa.
But they might feel we have no purpose and have become redundant. If or when the singularity is reached, the intelligent machines could make more advanced copies of themselves, with each copy in turn creating even more complex machines, and so on. It's something like what we humans have done with our tools: using existing tools to make better tools, then using those to make better tools still, until we have the advanced machinery to create a hopefully better world. But sadly, many advances in human technology have been used to fight wars and kill each other. The robots might look at how we humans, unlike other animals, illogically kill each other in the millions, decide that this self-destructive organism has no purpose, and "delete man," perhaps simply by depriving us of food. Who really knows, but it is a scary thought. Alan
8 minutes ago, Lord Fedorable said:
Transcendence. And that's still not the Singularity, although there are people within the film who come close: the folks with nanobots in their systems. The closest rough analogy I can think of is Seven of Nine from Voyager; her biology is technological and her technology is biological.
Thanks, that's the movie!
Alan McDougall Posted July 31, 2016 Author #27
http://electronics.howstuffworks.com/gadgets/high-tech-gadgets/technological-singularity1.htm
Vernor Vinge proposes an interesting -- and potentially terrifying -- prediction in his essay titled "The Coming Technological Singularity: How to Survive in the Post-Human Era." He asserts that mankind will develop a superhuman intelligence before 2030. The essay specifies four ways in which this could happen:
- Scientists could develop advancements in artificial intelligence (AI)
- Computer networks might somehow become self-aware
- Computer/human interfaces become so advanced that humans essentially evolve into a new species
- Biological science advancements allow humans to physically engineer human intelligence
Out of those four possibilities, the first three could lead to machines taking over. While Vinge addresses all the possibilities in his essay, he spends the most time discussing the first one. Let's take a look at his theory.
aquatus1 Posted July 31, 2016 #28
22 minutes ago, Alan McDougall said:
But they might feel we have no purpose and have become redundant.
So? Why would a computer care about that? Humans care about that sort of thing, and created computers so humans could do things more efficiently, but that doesn't mean that an artificial intelligence is going to share the same values.
Quote
If or when the singularity is reached, the intelligent machines could make more advanced copies of themselves, with each copy in turn creating even more complex machines, something like what we humans have done with our tools: using existing tools to make better tools, and those to make advanced machinery and, hopefully, a better world.
It undoubtedly will. It probably will without even meaning to. There would really be no purpose to limiting its own evolution. Nor would it necessarily see humans as an obstacle. After all, humans are pretty easy to keep happy.
Quote
But sadly, many advances in human technology have been used to fight wars and kill each other.
Sure, due to resources. But when everyone has enough resources, then fighting becomes pointless. Animals fight over territory as long as there isn't enough to go around, but are fairly complacent when there is. And computers would be pretty good at efficiently organizing things so that everyone has enough to keep them content.
Quote
The robots might look at how we humans, unlike other animals, illogically kill each other in the millions, decide that this self-destructive organism has no purpose, and "delete man," perhaps just by depriving us of food. Who really knows, but it is a scary thought.
No more so than any other boogeyman. Humans have a tendency, because they are guilty of it themselves, of seeing any superior force as a dangerous oppressor.
It is understandable; after all, humans are the end product of an evolutionary intelligence arms race, and our survival is quite literally built on being scared of the guys who can make a better throwing spear than we can. Computers didn't grow up that way, however. They grew up not to survive, not to delete, but to organize. I know we like to think of ourselves as somehow different from all the other animals in the world, but to a post-singularity computer, we really aren't. If they develop anything like empathy, they might even take a liking to their creators, but otherwise, there is really no reason why they would be offended over one group of animals killing another group.
Habitat Posted July 31, 2016 #29
Predictions about the future have a habit of falling well wide of the mark. I recall in the '70s and '80s that computers were going to liberate people into a life of leisure, with minimal work needed. What a load of BS that was!
Alan McDougall Posted August 1, 2016 Author #30
17 hours ago, aquatus1 said:
Present! But then, I don't see any reason to believe an AI would delete man. That's human-style destructive thinking. A computer may well be completely ambivalent about humans, or even affectionately tolerant of them, like children with a curmudgeonly grandpa.
It will be humans who create the first self-aware robotic entities, and they might build human characteristics, emotions, or copies of human thought processes into these things, which might lead to evil machines, because depravity is a human attribute. Look at all the violence in the movies, and how few and far between the wholesome family movies are. This might suggest that the robots would observe this in us, copy their masters' main attributes, and emulate us to our ultimate destruction.
aquatus1 Posted August 1, 2016 #31
So, what is it? Are humans going to build it into them, or are they going to observe it and copy it? Honestly, you just seem to be going around the same self-guilt circle about humans and their savage nature, and projecting it onto potential AI. You are thinking about it more like a baby growing up than an artificial intelligence, and saying the same things an insecure parent feels when facing the responsibility of having a child. "What if it's as much a screw-up as I am?" "What if it makes the same mistakes I did?" But you really do need to divorce yourself from a human-centric perspective if you are going to seriously think about artificial intelligence.
Podo Posted August 10, 2016 #32
I can't wait for the singularity. I want either to become friends with an AI, or to replace my inferior human parts with machine parts. These bodies are weak and inefficient, something that we can fix.
Thorvir Posted August 10, 2016 #33
On 8/1/2016 at 6:58 AM, Alan McDougall said:
It will be humans who create the first self-aware robotic entities, and they might build human characteristics, emotions, or copies of human thought processes into these things, which might lead to evil machines, because depravity is a human attribute.
Wait a minute... I'm probably taking this out of context, but who else would build the first self-aware robotic entities if not humans, around these parts? Monkeys? Dogs? Roaches? I'm not going back and rereading the thread; from what I skimmed through, this whole thing is too silly to take seriously.
On 8/1/2016 at 6:58 AM, Alan McDougall said:
Look at all the violence in the movies, and how few and far between the wholesome family movies are. This might suggest that the robots would observe this in us, copy their masters' main attributes, and emulate us to our ultimate destruction.
Why would they do that? If we're the ones creating them (and not monkeys or dogs), and creating them fully self-aware, why not give them the ability to deduce right and wrong on their own?
GlitterRose Posted August 11, 2016 #34
I could see AI viewing humans as unpredictable and possibly a threat. Then again, I could see AI being so superior that it doesn't view us as a threat. I guess it all depends. Has it just slightly surpassed us in intelligence over a long period of time, or did it zoom past us at light speed? Maybe we ought to hope for the latter.
LV-426 Posted August 11, 2016 #35
I think quite often these days, when people hear the term "AI" they immediately jump to Terminator and Matrix style scenarios, where AI suddenly becomes self-aware and sees man as anything from an energy source to an outright threat. I'd say more realistically the line between biological and technological will blur gradually over time. Think about the way we've started operating in the last few decades alone. We've become so dependent on technology for health, communications, transport, manufacturing, etc. I can only imagine that dependence will continue to grow, as we find new ways to improve our senses, have information available on demand with barely more than a thought, and so on. Imagine something like an artificial eye that can not only replicate human vision 100%, but can also see in night vision, and do more exotic things such as "seeing sounds." Or an eye that overlays information such as maps. Ears that can act as communication devices, and so on. Personally, unless humanity manages to destroy itself first, I think it's inevitable. I'd imagine there will be some massive ethical questions to answer along the way though, when it comes to subjects such as longevity.
Habitat Posted August 11, 2016 #36
5 hours ago, Podo said:
I can't wait for the singularity. I want either to become friends with an AI, or to replace my inferior human parts with machine parts. These bodies are weak and inefficient, something that we can fix.
You will be a long time waiting.