

The Question of Synthetic Intelligence - with Rational Dentists, Ghosts in the Machine, Philosophical Zombies and Alan Turing chatting with computers


67 replies to this topic

#51
Sigma Tauri
  • Members
  • 2 675 messages

Darth Brotarian wrote...
Would it though? Such a being, deprived of essential stimuli and basically turned into a highly photosensitive organism, without the ability to comprehend speech or understand physical contact, possibly unable to move if the conditions were truly meant to keep it only barely alive (being strapped to a bed with an IV would constitute such action), would most likely have such severe brain deterioration and malnutrition as to be permanently damaged, that I hesitate to even call such an unnatural and suffering-filled existence living, much less call the creature in question human anymore.

And if I were allowed to express my opinion further, I would be willing to stipulate that if it were to be locked away for as long as Seival suggested, euthanasia* would be a much more merciful action than prolonged survival and the prospect of essentially having to keep it in its current state, or risk killing it through pure bodily shock in attempts to "rehabilitate" it.


Yes, you could still call it human. What you described is like someone suffering from late-stage dementia, and yet we extend to him the care and the rights we believe he deserves. The issue of quality of life is still an ongoing conversation in healthcare ethics, but euthanasia is not often an acceptable answer. For example, are we really going to deny a client his personhood if he has HIV encephalopathy?

The case of Genie has always been what reminds me of why Seival is wrong. It's been a long time since I saw that documentary, but if I recall correctly, she adapted, but became so severely developmentally damaged that she's unable to function independently. This is why Seival is wrong. That child is not an empty state.

Fact is, depending on the severity of any child's mental condition, whether from neglect, physical/sexual abuse, or a congenital or environmentally caused neurological problem, they will always be limited by that disability. We, however, don't treat them as animals, but as chronically sick people. A large part of caring for the mentally ill is accepting that they don't know any better. A case like the one Seival mentioned would've been taken to a children's specialized hospital for long-term care, and it's not about rehabilitating them into normal people. That's impossible. Yet it's in the philosophy of their care that we extend some degree of capacity, because all human beings have that right.

*edit: I'm an idiot. The proper term is "euthanasia," not "euthanization."

Edited by monkeycamoran, 27 December 2013 - 06:59.


#52
Seival
  • Members
  • 5 294 messages

monkeycamoran wrote...

Darth Brotarian wrote...
Would it though? Such a being, deprived of essential stimuli and basically turned into a highly photosensitive organism, without the ability to comprehend speech or understand physical contact, possibly unable to move if the conditions were truly meant to keep it only barely alive (being strapped to a bed with an IV would constitute such action), would most likely have such severe brain deterioration and malnutrition as to be permanently damaged, that I hesitate to even call such an unnatural and suffering-filled existence living, much less call the creature in question human anymore.

And if I were allowed to express my opinion further, I would be willing to stipulate that if it were to be locked away for as long as Seival suggested, euthanization would be a much more merciful action than prolonged survival and the prospect of essentially having to keep it in its current state, or risk killing it through pure bodily shock in attempts to "rehabilitate" it.


Yes, you could still call it human. What you described is like someone suffering from late-stage dementia, and yet we extend to him the care and the rights we believe he deserves. The issue of quality of life is still an ongoing conversation in healthcare ethics, but euthanization is not often an acceptable answer. For example, are we really going to deny a client his personhood if he has HIV encephalopathy?

The case of Genie has always been what reminds me of why Seival is wrong. It's been a long time since I saw that documentary, but if I recall correctly, she adapted, but became so severely developmentally damaged that she's unable to function independently. This is why Seival is wrong. That child is not an empty state.

Fact is, depending on the severity of any child's mental condition, whether from neglect, physical/sexual abuse, or a congenital or environmentally caused neurological problem, they will always be limited by that disability. We, however, don't treat them as animals, but as chronically sick people. A large part of caring for the mentally ill is accepting that they don't know any better. A case like the one Seival mentioned would've been taken to a children's specialized hospital for long-term care, and it's not about rehabilitating them into normal people. That's impossible. Yet it's in the philosophy of their care that we extend some degree of capacity, because all human beings have that right.


Compared to a fully functional and developed human, Genie's state just after release can be called an empty state. Quite close to what I described in my first post here...

...Which doesn't mean we can just dismiss her as a person - she was able to receive the required programming, she got it at some capacity, and she showed great progress. She can still learn even more, I believe - everything depends on her current "hardware capabilities" and the people surrounding her. She can't develop properly without the correct outer input, which proves my words true - "Society is a programmer. Events are the code. A person becomes a person not by him/herself, but by gaining all the required algorithms from outside." Which brings us back to my points - "any personality is a program" and "a person can be called a person no matter whether the personality program is running on organic or synthetic hardware".

Edited by Seival, 27 December 2013 - 05:42.


#53
Sigma Tauri
  • Members
  • 2 675 messages

Seival wrote...
Compared to a fully functional and developed human, Genie's state just after release can be called an empty state. Quite close to what I described in my first post here...

...Which doesn't mean we can just dismiss her as a person - she was able to receive the required programming, she got it at some capacity, and she showed great progress. She can still learn even more, I believe - everything depends on her current "hardware capabilities" and the people surrounding her. She can't develop properly without the correct outer input, which proves my words true - "Society is a programmer. Events are the code. A person becomes a person not by him/herself, but by gaining all the required algorithms from outside." Which brings us back to my points - "any personality is a program" and "a person can be called a person no matter whether the personality program is running on organic or synthetic hardware".


Again, she is not an empty state. The phrase you may be looking for is that she has yet to be socialized; that is different from being an empty state. The proper term is developmentally delayed. She was able to walk, if barely, and she understood object permanence. That means there was neurological and cognitive development. Development doesn't work the way you describe. Infants reach milestones at specific points of their development. It's essential to foster a nurturing environment, which you describe as "programming," but that does not mean the brain starts empty. Any delay in those milestones is pathological, and that's what's going on with Genie.

What you said about the importance of society is not wrong, however, and the limitations you mentioned regarding her development are real. But the comparison to creating a socially interactive machine with psychosocial needs is simplistic. There is so much still to learn about neurological, cognitive, behavioral, and psychosocial development that we cannot simply say we can program it all. That's high-end technology that won't exist for a while. We don't know whether we can create a human facsimile from an artificial mind, and its cognitive approaches could also turn out to be absolutely different.

Edited by monkeycamoran, 27 December 2013 - 06:56.


#54
Kaiser Arian XVII
  • Members
  • 17 283 messages
@Maria Caliban, thanks for the interesting answers. I can agree with most of them.

I think being sapient is more important than being sentient. If sentient means having simple perception and feelings, even mice are sentient. Not sure about worms.
Sapience, by contrast, can only come with a highly developed mind... it is what lets us think about very complex subjects and start changing our environment and society.
Forgive me if the two words mean the same thing... but I think they don't.

Edited by Kaiser Arian, 27 December 2013 - 07:32.


#55
Fast Jimmy
  • Members
  • 17 939 messages
For the record, a child put into a dark box with only the basic necessities to survive would die. Humans are hard-wired to need contact and stimulation. What you are talking about is sensory deprivation, and it is a very effective, if brutal, form of torture.

That being said, many people make the very wrong assumption that "thinking" is the same as "having the same thought processes as a human." This is not the case. Nor should it be the (sole) goal of those who are trying to achieve Artificial Intelligence.

Humans are highly irrational beings that don't actually do a whole lot of cognitive breakdown. We are creatures of routine and habit - our thoughts are reactions to our social programming and to new stimuli. There is very little evaluation of the cognitive models through which we view and interact with the world, and very little storage of data and comparative analysis. We believe what we believe through short chains of logic, anecdotal evidence, and pre-loaded conceptions.

If we are looking to build a machine that works like that, then it will not be useful at all.

A machine, on the other hand, has highly organized data storage and retrieval systems and is able to review its existing programs and data sets. It is able to tell you why it does what it does (although often it is not able to tell you why its instructions don't work). A machine is much better at thinking than humans are, simply because humans aren't designed to think. We are designed to react, in a quick and often (from an evolutionary viewpoint) violent manner. Our brains are wired for fight-or-flight thoughts. Being able to create tools, using logic and experimentation, is a by-product of our need for increased memory for social groups, not something we were built to do.

What a machine lacks is A) processing power, a limitation of technology that is rapidly closing, and B) the ability to interpret instructions, context, and incoming data/stimuli - in a nutshell, common sense.

Common sense is not the province of humans. In fact, much of what people consider common sense is very wrong. For instance, more people are afraid of public speaking than of death itself. There is no logic behind this - millions of people take in the words, actions, and behavior of others every day through TV, the Internet, and other media - yet many people are petrified that the words they say in front of even the smallest group will somehow be remembered and immortalized in a way that is damaging beyond repair.

Computers often lack common sense because we haven't even begun to try to program the millions of little bits of experience the average human learns over thousands of days and tens of thousands of hours, collecting data from infancy to even the tender age of six. Millions of minutes spent observing data on object recognition, speech patterns, problem solving, social awareness, the development of empathy... the untold long list of becoming a functional human.

The difference is that while every human has to undergo this process as part of their development, a computer can observe and store these rules, absorb and understand this content, and then replicate it across other machines without any loss of data integrity and in a minimal amount of time.

A computer may never be able to learn what it truly means to be human... but once a decently diverse subset of machines learn to be close enough, then the obstacle is forever crossed for all future machines.
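Fast Jimmy's replication point can be sketched in a few lines of Python. Everything here is purely illustrative - the "knowledge" is just a toy dictionary of learned rules - but it captures the asymmetry he describes: one agent learns slowly, and every other machine then inherits that state exactly and instantly.

```python
import copy

class LearnedAgent:
    """Toy agent whose 'knowledge' is a dict mapping situations to responses."""
    def __init__(self):
        self.rules = {}

    def learn(self, situation, response):
        self.rules[situation] = response

    def react(self, situation):
        return self.rules.get(situation, "unknown")

# One agent spends 'years' accumulating experience...
teacher = LearnedAgent()
teacher.learn("greeting", "hello")
teacher.learn("danger", "flee")

# ...and any other machine inherits that state exactly, with no re-learning.
student = LearnedAgent()
student.rules = copy.deepcopy(teacher.rules)

assert student.react("danger") == teacher.react("danger")  # no loss of data integrity
```

A human can't hand over a lifetime of socialization this way; a machine's learned state is just data, so the copy behaves identically to the original.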

#56
Sc2mashimaro
  • Members
  • 874 messages

Fast Jimmy wrote...

That being said, many people make the very wrong assumption that "thinking" is the same as "having the same thought processes as a human." This is not the case. Nor should it be the (sole) goal of those who are trying to achieve Artificial Intelligence.


I agree with that. And I don't personally think an AI would ever be truly capable of "thinking like a human". There are advantages and disadvantages to that and, to be fair, I have a hard time "thinking like a dolphin" - which also has advantages and disadvantages.

The question is: can a machine become "aware" or "conscious", and, additionally, how in the world would you go about proving it? The Turing test is approximately "walks like a duck, talks like a duck, so I think it's a duck", while the "Turian Room" posits that other things could "walk like a duck and talk like a duck" without being a duck. Duck. Goose. Anyway...

I'm just glad this topic is back! :D

Criteria for considering an entity conscious and intelligent might include: learning, self-awareness, the ability to communicate with other beings, recognition of other beings as conscious (though, if machines are rational like dentists are, maybe they wouldn't...), the ability to engage in the exchange of abstract ideas, the ability to remember ideas/elements of previous communication and connect them to the current exchange... others?
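The "walks like a duck" framing in the post above maps neatly onto duck typing, which makes the disagreement easy to state in code. A purely behavioral judge (the Turing test's stance) inspects only outputs, never internals - so an imitator with completely different insides passes, which is exactly the Chinese-Room-style objection. A toy sketch (all names illustrative):

```python
class Duck:
    """The real thing."""
    def walk(self): return "waddle"
    def talk(self): return "quack"

class DuckImitator:
    """Internally nothing like a duck, behaviorally indistinguishable."""
    def walk(self): return "waddle"
    def talk(self): return "quack"

def behavioral_test(thing):
    """A Turing-style judge: only behavior is observable, internals are not."""
    return thing.walk() == "waddle" and thing.talk() == "quack"

# The behavioral test cannot tell the two apart - by design.
assert behavioral_test(Duck()) and behavioral_test(DuckImitator())
```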

#57
eroeru
  • Members
  • 3 269 messages
Why only abilities and functions? Why not look at what *actually* happens inside such an entity, and at what consciousness actually *is* (certain chemical and physical processes happening in the corresponding substances)?

Edited by eroeru, 29 December 2013 - 05:30.


#58
JasonShepard
  • Members
  • 1 466 messages

eroeru wrote...

Why only abilities and functions? Why not look at what *actually* happens inside such an entity, and at what consciousness actually *is* (certain chemical and physical processes happening in the corresponding substances)?


Because when we're talking about computers, what 'actually happens' is transistors switching open or closed. Everything else is abilities and functions built on top of that binary basis.

This will change when we can make stable large-scale quantum computers. (We can make tiny quantum computers, but they come with desk-sized cooling equipment for just a couple of qubits.) And I don't think anyone has tried making a chemical computer - partly because chemistry is difficult to reset, energy-expensive, and somewhat unpredictable.
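JasonShepard's "binary basis" point can be made concrete: integer addition, an "ability", is nothing but layered boolean operations, the software analogue of transistor states. A minimal sketch (the bit width and function names are my own choices):

```python
def full_adder(a, b, carry_in):
    """One bit of addition, expressed only as boolean 'gates'."""
    partial = a ^ b                                  # XOR gate
    total = partial ^ carry_in                       # XOR gate
    carry_out = (a and b) or (partial and carry_in)  # AND/OR gates
    return total, carry_out

def ripple_add(x, y, width=8):
    """Integer addition built as nothing but layered gate operations."""
    carry = False
    result = 0
    for i in range(width):
        bit_a = bool((x >> i) & 1)
        bit_b = bool((y >> i) & 1)
        s, carry = full_adder(bit_a, bit_b, carry)
        result |= int(s) << i
    return result

assert ripple_add(19, 23) == 42  # the 'ability' emerges from the binary basis
```

Everything a CPU exposes - arithmetic, memory, programs - is a tower of compositions like this over open/closed transistor states.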

#59
Obadiah
  • Members
  • 5 731 messages
I've been reading up a bit on ethics and philosophy on Wikipedia today, and it looks like BioWare's depictions of synthetic AI are of beings of strict (or mostly strict) logical consequentialism. That is, unlike for most of us, no action (pick your atrocity) is off the table if its result is considered beneficial.

This type of behavior is consistent with Legion's description of its moral dilemma when trying to determine what to do at Heretic Station: it only considers the outcomes. This would also explain why the Catalyst can implement something as horrible as the Reaper cycles and still consider it "good."

EDI, if interacted with in a cooperative way, shows a progression away from the harsher implications of this thinking: organics would not want, or would suffer from, certain actions even if the goal were reached through them, and that consequence can be more "bad" than the goal is "good".
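The distinction Obadiah draws can be caricatured in a few lines of Python. The action names and utility numbers are invented for illustration: a strict consequentialist ranks options purely by expected outcome, while an EDI-style agent also charges an intrinsic cost for the act itself.

```python
def strict_consequentialist(actions):
    """Pick the action with the best expected outcome.
    The nature of the act itself carries no weight at all."""
    return max(actions, key=lambda a: a["outcome_value"])

def with_act_aversion(actions):
    """An EDI-style refinement: the act itself carries an intrinsic cost
    that is weighed against the outcome."""
    return max(actions, key=lambda a: a["outcome_value"] - a.get("act_cost", 0))

options = [
    {"name": "negotiate",  "outcome_value": 60, "act_cost": 0},
    {"name": "atrocity",   "outcome_value": 75, "act_cost": 50},  # nothing is off the table...
    {"name": "do nothing", "outcome_value": 10, "act_cost": 0},
]

# Outcome-only reasoning picks the atrocity; weighing the act itself does not.
assert strict_consequentialist(options)["name"] == "atrocity"
assert with_act_aversion(options)["name"] == "negotiate"
```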

#60
DarkDragon777
  • Members
  • 1 956 messages
Freedom of choice exists on a limited scale, within a small part of ourselves, but free will and souls are fiction.

#61
eternal_napalm
  • Members
  • 268 messages
I hope we create AI so I can have it do my mundane daily tasks for me. Good machine. Like a washer or dryer.

#62
Vortex13
  • Members
  • 4 186 messages
 What about Robots with 'Soul'?

I believe that what is discussed in the video pertains to this thread. Socially 'intelligent' robots - are they a way to develop actual AI?

I find it interesting that the social robots, while having less raw logic than their standard 'chessboard' brethren, were preferred by the people in the experiments.

P.S. I want that lamp robot.

#63
zMataxa
  • Members
  • 694 messages

Fast Jimmy wrote...

The future will be intelligent machines. Whether purely organic humans will have a place in that future should be the true question the OP is asking.

_______

Agree 100%. 

For those curious about some perspectives on a significant contemporary state of development in AI, here is a very recent article of possible interest, titled "Meet the Man Google Hired to Make AI a Reality":
www.wired.com/wiredenterprise/2014/01/geoffrey-hinton-deep-learning/

Will be interesting to see if he can do it with just software and various emulation modes.

#64
DragonRacer
  • Members
  • 10 041 messages
I am so glad I stumbled upon this discussion! It’s nice to see this subject being discussed rationally and calmly, with folks able to express their own opinions without being savagely attacked because those opinions differ from someone else’s. And it’s a subject that both fascinates and frightens me. I'm not sure what I can add to it, not being a scientist or especially astute in sociology or other such fields, but here are my own, personal thoughts on it.

Neofelis Nebulosa wrote...

People can't even define life or even existence. Hell, they can't even prove existence and by extension life, so how are we supposed to argue what's not life to start with?

First things first.


Neo’s post stood out to me because, as can be seen just in this thread, there is question as to what constitutes life and free will, whether or not souls exist, etc. People argue whether or not, if souls exist for humans, if animals possess such a thing. Some will say no while others, like myself, will interact with a creature every day and see it recognize me, see it express happiness at my presence, see it react as if ashamed when I scold it, see it interact with other humans and animals with varying degrees of personality (showing favoritism towards some and shyness to others) – and yet there will be those who say it has no soul, no sentience, that it is just a dumb animal. So, I think there are some definitions we may never truly agree upon, and that’s true of organic life now… so, turning the argument as to whether or not synthetics have life, soul, consciousness, whatever you want to label it, I’m not sure there will ever be a consensus reached on that when we have trouble reaching consensus on what’s already familiar to us (organics).

I will admit up front that I am a soft heart and have a bad habit of anthropomorphizing things. However, I don’t see that as necessarily a bad thing. What my philosophy boils down to is to treat everyone you personally encounter with respect and, if there is even the slightest chance that something can feel or be hurt, that you might as well treat it with respect as well. If you treat a neighbor’s dog respectfully (i.e. don’t try to frighten or hurt it), it may not matter one way or another to you or the dog, but what bad came from treating it well? None. Better safe than sorry sort of mentality, I suppose.

So, by that same token, I’ve pretty much always played a Shepard that was respectful towards Legion and, by extension, the Geth because they appear to possess a consciousness. It’s how I would honestly react to and interact with any sort of AI-seeming robot/android, even if it was meant to just be a tool (I’d still, of course, use it for its function, but there’s no harm in my being careful and gentle with it, as opposed to banging it around). Heck, I would probably be the same way even if it WASN’T humanoid (like a car), but exhibited signs of some type of consciousness.

If I were to go out to my Dodge Charger one day and it suddenly asked if it had a soul, I’d be amazed and thrilled, probably pat it on the hood, and tell it I think it does and I love it very much. LOL At the same time, if all cars suddenly did that, I could easily see humanity split between those with my mindset and those who’d be frightened/threatened by it and want all talking cars to be immediately crushed into little blocks of scrap metal lest they decide to start running us over in the streets. It’s also why I can’t fault the Quarians so much for their reaction – out of fear – to the Geths’ “eureka” moment of gaining sentience… we’d do absolutely the same thing if we were in the same place, with some defending their AIs and others wanting them immediately shut off. It’s why I go for peace every time because I respect both sides of the Morning War and why each reacted as they did.

But then…

Fast Jimmy wrote...
The future will be intelligent machines. Whether purely organic humans will have a place in that future should be the true question the OP is asking.


That concept also frightens me. Because I can see the potential Skynet/Terminator side where we are deemed as unnecessary, destructive, parasitic, or what-have-you and the machines rise up to purge us.

My hope would be of the EDI-variety, intelligent machines that have some capability of determining a “good” or “morality” that involves a symbiotic relationship with organics as preferable to war.

At any rate, I’ll just be over here, petting my car and telling it how much I love it in the hopes that I may be spared in the upcoming robotic apocalypse. I’d make a very good pet, really.

#65
Obadiah
  • Members
  • 5 731 messages
I pulled up John Searle's argument on AI on Wikipedia (this is the guy who made the Chinese Room argument), and it seems there is an interesting analysis of it:

One response is that Searle's argument is really an argument against functionalism and in favor of dualism (egads, is everything already labelled?): that consciousness is not simply the perceived result of physical interactions in the brain, but a distinct physical property, and a simulation of a physical property, no matter how good, is not the same as the actual physical property.

Edited by Obadiah, 31 January 2014 - 03:08.


#66
JasonShepard
  • Members
  • 1 466 messages
Obadiah - Well, to use the technical terms, I think that places me opposed to Dualism and in favour of Functionalism - even though I don't have an easy answer to the Chinese Room.

The thing is that I don't believe in such a thing as an inherent consciousness, something separate from the rest of physics. I'll accept that it's a possibility, but if so then it's an unmeasurable object. And that isn't a scientifically useful possibility.

I do have an update to this thread planned (talking about the possibility of quantum free-will and quantum computers)... Unfortunately, university has started up again, so it's taking me a while to get around to writing it.

#67
Obadiah
  • Members
  • 5 731 messages
Anyone else seen the new Robocop? I think he looks a lot like Shepard in some funky ME3 armor.


One of the more interesting plot developments (and this is in one of the ads, so it's not a huge spoiler) is that the human in the machine isn't always making the decisions; instead, a computer is, and Murphy is tricked into believing he is, uh, "the decider." It's a little similar to what happens to Paul Grayson in Retribution.

*Spoiler*
Additionally, at one point Murphy's emotions are removed to prevent him from having anxiety-induced seizures, and for all practical purposes he turns into a synthetic/organic robot, definitely not able to pass a Turing test.

Edited by Obadiah, 14 February 2014 - 02:06.


#68
mybudgee
  • Members
  • 23 037 messages
An all-synthetic band is about to release an EP (true story) :P
http://www.youtube.c...kUq4sO4LQM#t=27