Gibb_Shepard wrote...
No. She is a series of programs that tell her how to "feel" in certain circumstances.
Same thing with humans. Why do you think politicians can stand and argue opposite opinions, repeating themselves until you wonder whether they are truly alive or just stupid robots?
They are programmed to do and say what they are saying. People go to work and perform what they have been programmed to do; people take care of their children and follow their programming. The programming can be altered and adapted.
However, most of what people do is part of programmed actions: repetitions of behaviours either preprogrammed before they were born or programmed by experience.
This also makes people fairly predictable. Most people would agree that unpredictable people are slightly scary and unnerving. Most people prefer well-programmed, well-adjusted, and "educated" people who perform as expected according to their programming.
People with less programming to fall back on in an unfamiliar situation are more likely to behave in unexpected ways, though there are usually a few social rules that still make their behaviour predictable, unless they have a serious mental disability.
People are programmed how to feel in certain circumstances. That would be the definition of a well-adjusted and educated human being.
I know a person who manages to get even the simplest things wrong, in ways you couldn't possibly imagine getting them wrong. When there are several normal ways to get something wrong, she finds ways that no one ever considered possible. This is due to a mental disability.
This would also explain how the Catalyst can get things so wrong while still trying to do things right: it lacks the necessary understanding of its creators and the people it is designed to help. Therefore its solution to the problem is not adapted to what organics would consider a reasonable solution.
This is probably the main danger of future AI experiments: setting up moral rules might be a good idea, but without proper understanding from a social point of view the AI will be unable to act in accordance with the expectations of the society that created it.
For it to work correctly, it would need the inborn experience of a human, plus the social norms that are taught to a person while growing up. This is what forms a human intellect, and if it isn't part of the forming of an AI, then it won't act according to social norms; its interpretation of rules, regulations, and ethics won't be the same.
Its solutions and actions won't mirror what would be expected from a human.
The young woman I'm talking about had an inborn inability to understand certain preprogrammed behaviours, which also made it hard for her to interact with her surroundings and other people, impeding normal learning.
Asimov set up a simple set of rules, but they are interpreted through human understanding, social norms, expectations, and morals. They wouldn't be enough to ensure the mental wellbeing of a true AI unless it understands the context. For someone who lacks the genetic inheritance and basic programming passed on genetically, and who hasn't spent several years as a child learning what it is to be human, those rules won't mean the same thing.
Feelings are more a social and biological understanding passed on through the generations than something mysterious, magical, or divine.
To be alive doesn't require much; to be sentient requires a little more. But it's possible to be both alive and sentient and not share our interpretation of what is sentient, what is a well-adjusted individual, or our social norms.
Edited by shodiswe, 07 September 2012 - 09:15.