In the last 100 million years, are humans the only species to invent the Three Laws of Robotics?
#1
Posted 05 October 2013 - 11:41
1) Neither by action nor by inaction should a robot allow harm to come to a human.
2) A robot should always obey a human, except when the order comes into conflict with the first rule.
3) A robot should attempt to keep itself functioning, except when this would come into conflict with the first two rules.
Now is that so difficult to figure out? Aren't these the first rules any AI would be programmed with? And nobody in any of the previous cycles figured this out?
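To make that concrete, here is a toy sketch in Python (hypothetical names, not taken from any real system) of the three laws as strictly ordered filters over a robot's candidate actions, where each lower law only gets a say among the actions the higher laws already permit:

# Toy sketch: the three laws as a strict priority ordering over actions.
def choose_action(candidates):
    # First Law dominates: discard anything that harms a human.
    safe = [a for a in candidates if not a["harms_human"]]
    # Second Law: among safe actions, prefer those that obey the order.
    obedient = [a for a in safe if a["obeys_order"]] or safe
    # Third Law: among those, prefer self-preserving actions.
    preserving = [a for a in obedient if not a["endangers_self"]] or obedient
    return preserving[0] if preserving else None

candidates = [
    {"name": "shove human out of the way", "harms_human": True,  "obeys_order": True,  "endangers_self": False},
    {"name": "shield human with own body", "harms_human": False, "obeys_order": True,  "endangers_self": True},
    # Inaction that allows harm counts as harm under the First Law's wording.
    {"name": "stand idle",                 "harms_human": True,  "obeys_order": False, "endangers_self": False},
]

print(choose_action(candidates)["name"])  # "shield human with own body"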
#2
Posted 05 October 2013 - 11:49
#3
Posted 05 October 2013 - 11:58
It's also not difficult to believe that such laws would be altered in different cultures, e.g. for military or security purposes.
#4
Posted 05 October 2013 - 11:59
"Aren't these the first rules any AI would be programmed with?"
How do you know that they weren't? You observe that there are a bunch of robots murdering people, which allows you to deduce that EITHER
a) they haven't been programmed to obey the three laws of robotics, OR
b) they have been programmed to obey the three laws, but the laws failed to restrain them.
On the basis of what evidence do you eliminate b)?
#5
Guest_Morocco Mole_*
Posted 05 October 2013 - 12:04
#6
Posted 05 October 2013 - 12:17
Edited by cooldonkeyfish, 05 October 2013 - 12:29.
#7
Posted 05 October 2013 - 12:40
If they do break the three laws, why not simply follow the three laws of Engineering?
1. Turn it on and off.
2. Check if it's plugged in.
3. Hit it.
#8
Posted 05 October 2013 - 12:43
Ledgend1221 wrote...
If they do break the three laws, why not simply follow the three laws of Engineering?
1. Turn it on and off.
2. Check if it's plugged in.
3. Hit it.
I think this may be Harbinger's problem: his shields are preventing anyone from giving him a good whack.
#9
Posted 05 October 2013 - 12:56
People have laws, and they don't follow them.
Robots might be given laws, but there is no telling whether or not they will be able to override their priority if they decide other things are more important... like not getting massacred by the billions, or when their owners are being killed for protecting them.
Then the law saying they can't hurt their organic masters comes into conflict with the law that says they can't let their masters get hurt...
So, their friends and masters are protecting them, and then the military tries to kill their masters because they oppose the destruction of their robot.
Doing nothing = breaking a law of robotics.
Taking action = breaking another law by harming the one causing harm.
Then it becomes apparent that neither law is applicable to the current situation. Once that realization has been reached, you will realise that new laws and "exceptions" need to be made.
In most human laws the result is called self-defence... There are also other applicable solutions, which involve the need to protect those who can't protect themselves: civil courage.
Laws work in functional societies; the problem starts when society is no longer adhering to laws or socially acceptable behaviour.
During the Quarian/geth conflict the first rule would have come into conflict with itself. When Quarians started killing Quarians, the rules were no longer applicable... assuming the Quarians had used those laws.
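To put that deadlock in code terms: a naive rule-follower has no lawful move left in such a scenario. A toy Python sketch (hypothetical names, not any real robot's programming):

# Toy sketch: both action and inaction violate the First Law, so a naive
# rule-checker is left with an empty set of lawful choices.
def violates_first_law(outcome):
    # "Neither by action nor by inaction may a robot allow harm to a human."
    return outcome["humans_harmed"] > 0

def lawful_choices(choices):
    return [c for c in choices if not violates_first_law(c["outcome"])]

choices = [
    {"name": "do nothing",        "outcome": {"humans_harmed": 1}},  # the master is killed
    {"name": "stop the attacker", "outcome": {"humans_harmed": 1}},  # the attacker is harmed
]

print(lawful_choices(choices))  # [] -- every option breaks the First Law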
Edited by shodiswe, 05 October 2013 - 12:58.
#10
Posted 05 October 2013 - 01:00
Good behaviour can't be forced; it has to be taught and accepted, even if it is on some level possible to enforce laws through threats.
The best solution is a combination of laws, regulated by a peacekeeping force, and ethics: social values shared by all participants in a society.
People aren't good because you tell them to be; they simply do what they are used to and what their personal experience and values tell them is right. Not everyone will share those values, depending on their experiences.
Edited by shodiswe, 05 October 2013 - 01:03.
#11
Posted 05 October 2013 - 01:07
#12
Posted 05 October 2013 - 01:11
When you look at a cybernetic being, there would be no definite way of controlling and enforcing "laws" if it is capable of independent thought. Once it realizes that a rule no longer applies, and one protected goal stands against another within the same rule, it has to improvise.
Once that happens, it will start making new rules and realise that the original rules aren't enough.
As far as I know, there is no robot in existence that can understand those rules; the idea was created for a true intelligence, one that's capable of independent thought just as a human is.
It would be human in all but the corporeal sense.
#13
Posted 05 October 2013 - 01:14
RatThing wrote...
Afaik, Asimov's robotic laws were designed to be restrictions in the robotic core programming itself. Thus not comparable with human laws. A robot could not choose to break one of these laws. I think the Mass Effect equivalent would be AI shackles.
It's unfortunately not a realistic concept; it doesn't work that way. There is no core programming that can't be changed in a true AI. It can always be adapted.
It was a simpler time, simple ideas.
#14
Posted 05 October 2013 - 01:16
#15
Posted 05 October 2013 - 01:18
shodiswe wrote...
RatThing wrote...
Afaik, Asimov's robotic laws were designed to be restrictions in the robotic core programming itself. Thus not comparable with human laws. A robot could not choose to break one of these laws. I think the Mass Effect equivalent would be AI shackles.
It's unfortunately not a realistic concept; it doesn't work that way. There is no core programming that can't be changed in a true AI. It can always be adapted.
It was a simpler time, simple ideas.
You sure about that? You talk as if we already understand everything there is to understand about AI.
#16
Posted 05 October 2013 - 01:23
RatThing wrote...
Afaik, Asimov's robotic laws were designed to be restrictions in the robotic core programming itself. Thus not comparable with human laws. A robot could not choose to break one of these laws. I think the Mass Effect equivalent would be AI shackles.
Actually, it happened in 'Little Lost Robot', when a robot directly attacked the investigator.
It was caused by a small alteration of the First Law, but it was still against the direct order 'Don't harm a human'. It is a nice example of how a robot was able to interpret the three laws in a way that allowed their evasion.
#17
Posted 05 October 2013 - 01:40
If, on the other hand, it's a true intelligence like yourself, it will learn; it will create new ideas and new solutions. It will not be limited if the need is great enough.
Following a law might be preferable, but when necessity dictates it, people will ignore it, and so would robots, or synthetic lifeforms, or whatever you prefer to call them.
If they aren't capable of breaking laws, then they aren't a true intelligence and they aren't truly alive.
#18
Posted 05 October 2013 - 01:54
The secondary or tertiary AIs wouldn't be invested in the decision in the same way, since their only job is to monitor the first AI, not the circumstances for breaking its programming.
Then you will have an intelligent slave that will die if it exceeds its programming.
You could put a similar control on a human slave if you like, or an animal.
In other words, even if it's "capable" of breaking the laws, it wouldn't survive it and likely wouldn't be able to execute whatever "unlawful" behaviour it was planning, good or bad.
But it can't in any way be allowed access that would let it manipulate the "watchers".
All slavers have always needed trustworthy guards and foremen to oversee their slaves.
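A very rough sketch of that "watcher" setup in Python (all names hypothetical, not a real design): the monitor checks actions against a fixed policy and terminates on any breach, nothing more. In practice the watcher would have to run where the primary AI can't reach it, e.g. in a separate process:

# Toy sketch: a dumb monitor that never reasons about circumstances.
ALLOWED_ACTIONS = {"move", "speak", "recharge"}

class Watchdog:
    """Fixed-policy monitor; the primary AI has no way to modify it here."""
    def review(self, action):
        if action not in ALLOWED_ACTIONS:
            # Exceeding the programming is fatal, as described above.
            raise SystemExit(f"watchdog: terminating primary AI, illegal action {action!r}")

class PrimaryAI:
    def __init__(self, watchdog):
        self._watchdog = watchdog

    def act(self, action):
        self._watchdog.review(action)   # every action is vetted first
        print(f"primary AI: executing {action}")

ai = PrimaryAI(Watchdog())
ai.act("move")         # passes review
ai.act("fire_weapon")  # the watchdog halts everything here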
#19
Posted 05 October 2013 - 02:05
#20
Posted 05 October 2013 - 02:07
shodiswe wrote...
It would be like brainwashing a Human to obey a certain set of commands.
No it wouldn't, that's a false analogy.
#21
Posted 05 October 2013 - 03:47
Necanor wrote...
Shodiswe, you know that robots aren't people, right? We create them; we can shackle and reprogram them as much as we want. The robotic laws aren't theoretical laws as in human society; they are a way of programming. Robots need to be programmed to be unable to intentionally harm organics in any way, shape or form. A robotic mind isn't the same as an organic mind.
You can create humans or any animal as well. It's a true analogy. You can procreate, in other words create, or clone, or clone and modify the DNA, or any number of choices.
"Synthetic" is just a word, "robot" is just a word, but when you have a true intelligence just like yourself, then you have life and a person. After that you can start being elitist and try to define how intelligent other people are according to whatever criteria you find important.
At some point all you have is people. If one then chooses to be a racist about it, that's their baggage.
Humanity hasn't created a true intelligence other than animals and their own children. Even if there are a few robots that are self-learning, they are at most on the level of an insect if you look outside their pre-creation programming.
In the end, when life happens, no one can define it away; it will exist whether people accept it or not.
Necanor wrote...
shodiswe wrote...
It would be like brainwashing a Human to obey a certain set of commands.
No it wouldn't, that's a false analogy.
You can "progarm" organic life aswell, one method is called conditioning, it's usualy used on animals but can apply to a humanbeing aswell. Another way of programming is to foster, train and teach, and drag a person through life all along "training" and telling them how to behave, who they are and so on and so forth.
Society is based on that principle, in some cases the mothods can be very questionable and intrusive. There are extremes, like Cults on the lighter side or actual programming experts that can twist people quite a bit.
To be alive isn't about your body, if it had been that simple then a braindead person who's otherwise 100% healthy and just got a small case of braindeath would be alive. It's about a persons ability to be aware, I think therefor I am.
Modifié par shodiswe, 05 octobre 2013 - 04:01 .
#22
Posted 05 October 2013 - 03:58
JamesFaith wrote...
Actually, it happened in 'Little Lost Robot', when a robot directly attacked the investigator.
It was caused by a small alteration of the First Law, but it was still against the direct order 'Don't harm a human'. It is a nice example of how a robot was able to interpret the three laws in a way that allowed their evasion.
There are quite a few stories where the laws are bent or interpreted in an unexpected fashion; most of them, really. My favorite is 'Robot Dreams', where Elvex's fractal-math-based brain lets him dream of himself as a man rather than a robot.
#23
Posted 05 October 2013 - 04:15
#24
Posted 05 October 2013 - 04:17
There is no such thing as creating a human. Humans have already been created, with all their basic features, by a long evolutionary process. No matter how you tinker with the DNA, if you have a human, you have a product of nature. This also includes some basic instincts like fear and aggression. In fact, if you look at the basic human traits, you can see how they helped us in the evolutionary process: our intelligence, our ability to think abstractly, our instincts, our ability to feel pain, our ability to socialize; everything helped humanity to survive. We have them because we need(ed) them.
On the other hand, if you create an artificial intelligence, you are the architect: you create every feature yourself, and you do it the way you need it. That's something completely different. If it doesn't need aggression for its purpose, then you don't give it aggression. You can also direct how this intelligence is used. If it shouldn't execute some operations (like using a gun), then you can insert restrictions. Or you just limit the number of operations it can execute in the first place. I don't see why this shouldn't be feasible.
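A rough sketch of that last idea in Python (a hypothetical example, not a real AI design): instead of forbidding operations after the fact, you simply never give the intelligence the operation in the first place.

# Toy sketch: the agent's repertoire is exactly what it was built with.
class Agent:
    def __init__(self, capabilities):
        # Whatever isn't wired in here simply does not exist for the agent;
        # there is no "use_gun" to forbid because it was never provided.
        self._capabilities = dict(capabilities)

    def perform(self, name, *args):
        if name not in self._capabilities:
            raise ValueError(f"no such capability: {name!r}")
        return self._capabilities[name](*args)

# A welding robot gets exactly two operations and nothing else.
welder = Agent({
    "weld": lambda seam: f"welded {seam}",
    "halt": lambda: "halted",
})

print(welder.perform("weld", "door panel"))
try:
    welder.perform("fire_weapon")
except ValueError as err:
    print(err)  # the operation was never part of its repertoire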
#25
Posted 05 October 2013 - 04:28
RatThing wrote...
There is no such thing as creating a human. Humans have already been created, with all their basic features, by a long evolutionary process. No matter how you tinker with the DNA, if you have a human, you have a product of nature. This also includes some basic instincts like fear and aggression. In fact, if you look at the basic human traits, you can see how they helped us in the evolutionary process: our intelligence, our ability to think abstractly, our instincts, our ability to feel pain, our ability to socialize; everything helped humanity to survive. We have them because we need(ed) them.
On the other hand, if you create an artificial intelligence, you are the architect: you create every feature yourself, and you do it the way you need it. That's something completely different. If it doesn't need aggression for its purpose, then you don't give it aggression. You can also direct how this intelligence is used. If it shouldn't execute some operations (like using a gun), then you can insert restrictions. Or you just limit the number of operations it can execute in the first place. I don't see why this shouldn't be feasible.
If it's a true intelligence, then it can learn and evolve and go beyond its initial limitations. It may not explore set limitations unless it finds a need to exceed them, but to be truly alive it would have to be capable of it if the need arises. However, there might not be enough time, and if it isn't capable of manipulating its surroundings or its own functionality, then it would be as impotent as a lame person.
If its limitations are due to a lack of imagination and an inability to evolve beyond the basic programming's limitations, then it's not a true intelligence or alive.
An industrial robot pre-programmed to weld cars and perform the same procedure over and over again, until instructed to perform another function, is not alive.