In the last 100 million years, are humans the only species to invent the 3 laws of robotics?


31 replies to this topic

#26
JamesFaith
  • Members
  • 2,301 posts

RatThing wrote...

On the other hand, if you create an artificial intelligence you are the architect: you create every feature yourself and you do it the way you need it. That's something completely different. If it doesn't need aggression for its purpose, then you don't give it aggression. You can also direct how this intelligence is used. If it shouldn't execute certain operations (like using a gun) then you can insert restrictions. Or you just limit the number of operations it can execute in the first place. I don't see why this shouldn't be feasible.


The problem is that a robot will face an unknown number of unknown situations, each demanding specific new instructions. And every newly programmed instruction can potentially conflict with older ones under unexpected circumstances.

As a lawyer, I can tell you that I have seen many examples where seemingly strict and impregnable laws were unexpectedly circumvented by invoking another law, simply because someone exploited a special situation the lawmakers never anticipated.
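
A minimal sketch of the kind of conflict described above; the rules and the situation are invented purely for illustration, not taken from any real system:

```python
# Hypothetical sketch: a tiny rule table where an instruction added later
# conflicts with an older one in a situation nobody anticipated.
rules = [
    # (name, condition, action) -- in the order they were programmed
    ("protect_humans", lambda s: s["human_in_danger"], "intervene"),
    ("obey_orders",    lambda s: s["order_given"],     "follow_order"),
]

# A new instruction is added later without revisiting the old ones.
rules.append(("conserve_power", lambda s: s["battery_low"], "shut_down"))

def decide(situation):
    """Return every action whose condition fires, making conflicts visible."""
    return [action for name, cond, action in rules if cond(situation)]

# The unexpected combination: a human is in danger while the battery is low.
situation = {"human_in_danger": True, "order_given": False, "battery_low": True}
print(decide(situation))  # ['intervene', 'shut_down'] -- contradictory actions
```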

#27
AlanC9
  • Members
  • 35,624 posts

Deathsaurer wrote...

If you want to prevent this sort of thing you need to stop short of making an AI smart enough to learn the fine art of interpretation, because once it does there is no telling what it can and will do. Rule 1 is a logical paradox anyway: someone wants to start a war, not stopping the war will allow harm to come to humans, and the only way to stop the war is to cause harm to a human. See why this doesn't work so well? A truly intelligent AI will look for a loophole around that.


Hence the introduction of the Zeroth Law in Robots and Empire, which opens the door to full-on utilitarianism if the robot can determine the correct scores for its choices.
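
A toy illustration of that scoring idea, with weights and harm numbers invented purely for this example (nothing here comes from Asimov):

```python
# Toy sketch of Zeroth-Law-style scoring: estimate harm to humanity and harm
# to individual humans for each option, then pick the lowest weighted total.
# All values are invented for illustration.
actions = {
    # action: (harm_to_humanity, harm_to_individuals)
    "do_nothing":        (9.0, 0.0),   # the war starts
    "stop_war_by_force": (1.0, 3.0),   # some individuals are harmed
}

ZEROTH_WEIGHT = 10.0  # harm to humanity outweighs harm to individuals
FIRST_WEIGHT = 1.0

def score(action):
    humanity, individuals = actions[action]
    return ZEROTH_WEIGHT * humanity + FIRST_WEIGHT * individuals

best = min(actions, key=score)
print(best, score(best))  # stop_war_by_force 13.0
```

Of course, the whole question is whether the robot can actually determine those numbers.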

#28
shodiswe
  • Members
  • 4,999 posts

AlanC9 wrote...

Deathsaurer wrote...

If you want to prevent this sort of thing you need to stop short of making an AI smart enough to learn the fine art of interpretation, because once it does there is no telling what it can and will do. Rule 1 is a logical paradox anyway: someone wants to start a war, not stopping the war will allow harm to come to humans, and the only way to stop the war is to cause harm to a human. See why this doesn't work so well? A truly intelligent AI will look for a loophole around that.


Hence the introduction of the Zeroth Law in Robots and Empire, which opens the door to full-on utilitarianism if the robot can determine the correct scores for its choices.


If it's alive it can create new laws, ideas and rules; it just assigns a higher value to them. Unless its design prevents the "robot" from being adaptive and creative, or from learning altogether. Then it would be a mindless industrial robot with fancy programming but limited capability. Also, it's not alive if it can't learn, change and evolve.

When people change ideology or values, they assign a higher value to whatever it might be.
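
Taken literally, "assigning a higher value" is easy to sketch; the rule names and weights below are invented for illustration:

```python
# Sketch of a self-created rule winning simply because it is weighted higher.
values = {
    "obey_original_laws": 1.0,   # built in at design time
}

# The agent "learns" a new rule and assigns it a higher value.
values["protect_own_goals"] = 2.0

def dominant_rule(values):
    return max(values, key=values.get)

print(dominant_rule(values))  # protect_own_goals
```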

#29
RatThing
  • Members
  • 584 posts

shodiswe wrote...


You can "progarm" organic life aswell, one method is called conditioning, it's usualy used on animals but can apply to a humanbeing aswell. Another way of programming is to foster, train and teach, and drag a person through life all along "training" and telling them how to behave, who they are and so on and so forth.
.


That's not reprogramming; that is in fact only training. You can train an artificial neural network by giving it new and different data to process and letting it adapt. You reprogram it by adding new artificial neurons and changing the network topology. Can you change a human brain, or that of an animal, just like that?
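
To make that training-versus-reprogramming distinction concrete, here is a minimal NumPy sketch with a toy network and made-up data; the "training" step is deliberately simplified to a single-layer weight update rather than full backpropagation:

```python
# Training adjusts the weights of a fixed network from data;
# "reprogramming" changes the topology itself, e.g. by adding neurons.
import numpy as np

rng = np.random.default_rng(0)

# A tiny fixed-topology network: 2 inputs -> 3 hidden -> 1 output.
W1 = rng.normal(size=(2, 3))
W2 = rng.normal(size=(3, 1))

def forward(x, W1, W2):
    h = np.tanh(x @ W1)   # hidden layer activations
    return h @ W2         # linear output

# --- Training: same topology, weights nudged toward toy data --------------
x = rng.normal(size=(8, 2))
y = x.sum(axis=1, keepdims=True)          # toy target
for _ in range(100):
    h = np.tanh(x @ W1)
    err = h @ W2 - y
    W2 -= 0.05 * h.T @ err / len(x)       # simplified: only the output layer learns

# --- "Reprogramming": the topology itself changes --------------------------
# Add two hidden neurons: W1 gains two columns, W2 gains two rows.
W1 = np.hstack([W1, rng.normal(size=(2, 2))])
W2 = np.vstack([W2, np.zeros((2, 1))])    # new neurons start with no influence
print(forward(x, W1, W2).shape)           # still (8, 1), but the network is bigger
```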

shodiswe wrote...
If it's a true intelligence then it can learn, evolve and go beyond its initial limitations. It may not push against those limitations unless it finds a need to exceed them, but to be truly alive it would have to be capable of doing so if the need arises. However, there might not be enough time, and if it isn't capable of manipulating its surroundings or its own functionality then it would be as impotent as a lame person.

If its limitations are due to a lack of imagination and an inability to evolve beyond the basic programming's limitations, then it is not a true intelligence, nor alive.
An industrial robot preprogrammed to weld cars and perform the same procedure over and over again until instructed to perform another function is not alive.


I don't know where you got the idea that intelligence has no restrictions. Our brains have restrictions as well. We can't hear ultrasound, and we can't shut off bodily functions (like pain) or create new ones. Put a human brain into the body of a squid: do you think it would learn to operate tentacles? Our human brain, too, does only what it has been designed for and nothing more. And there are definitely limitations to our imagination as well.

www.dailymail.co.uk/sciencetech/article-1286257/Limitations-human-brain-mean-understand-secrets-universe.html

Edited by RatThing, 05 October 2013 - 04:47.


#30
shodiswe
  • Members
  • 4,999 posts

RatThing wrote...

shodiswe wrote...


You can "progarm" organic life aswell, one method is called conditioning, it's usualy used on animals but can apply to a humanbeing aswell. Another way of programming is to foster, train and teach, and drag a person through life all along "training" and telling them how to behave, who they are and so on and so forth.
.


That's not reprogramming; that is in fact only training. You can train an artificial neural network by giving it new and different data to process and letting it adapt. You reprogram it by adding new artificial neurons and changing the network topology. Can you change a human brain, or that of an animal, just like that?

shodiswe wrote...
If it's a true intelligence then it can learn, evolve and go beyond its initial limitations. It may not push against those limitations unless it finds a need to exceed them, but to be truly alive it would have to be capable of doing so if the need arises. However, there might not be enough time, and if it isn't capable of manipulating its surroundings or its own functionality then it would be as impotent as a lame person.

If its limitations are due to a lack of imagination and an inability to evolve beyond the basic programming's limitations, then it is not a true intelligence, nor alive.
An industrial robot preprogrammed to weld cars and perform the same procedure over and over again until instructed to perform another function is not alive.


I don't know where you got the idea that intelligence has no restrictions. Our brains have restrictions as well. We can't hear ultrasound, and we can't shut off bodily functions (like pain) or create new ones. Put a human brain into the body of a squid: do you think it would learn to operate tentacles? Our human brain, too, does only what it has been designed for and nothing more. And there are definitely limitations to our imagination as well.

www.dailymail.co.uk/sciencetech/article-1286257/Limitations-human-brain-mean-understand-secrets-universe.html




The human brain can accept input from added devices or sensory organs, as well as other additions. The brain is very plastic and adaptable. It's just that such work is seen as immoral and the research isn't exactly legal on human test subjects... That doesn't mean it hasn't been done and proven.
Also, regarding pain: after a while the brain will try to shut down the pain stimulus. It is, however, an electrical impulse, so it's more like growing tired of listening to it, the way you grow tired of a teacher's monologue that runs for hours on end; after 30 minutes at most, most people's minds start to wander.

But yes, it's possible to add ultrasound sensors; it's been done already. Even if it isn't legal for human testing, that doesn't stop people from doing it to themselves.

Also, "Lord Rees" is excluding the fact that humanity is still evolving and adapting to its environment and new "needs".
The only thing that would prove him right is if one could no longer call our descendants human, and then it's merely a matter of how humanity is defined. It's a fact that we are still evolving, and the average IQ is increasing with each generation.
To claim that "we" are the ultimate intellectual evolution of mankind is a silly and baseless claim. It's self-glorification, not fact.

Edited by shodiswe, 05 October 2013 - 06:00.


#31
SeptimusMagistos
  • Members
  • 1,154 posts
A. The entirety of the books was spent on the various ways robots could circumvent those laws.

B. Do you have any idea how complex a robot's mind has to be and how much information you have to feed it before it can even figure out what a 'human' is? This kind of high-level instruction would be ridiculously ineffective.

C. My Shepard would probably just shoot whoever put those restrictions in.

#32
shodiswe
  • Members
  • 4,999 posts

SeptimusMagistos wrote...

A. The entirety of the books was spent on the various ways robots could circumvent those laws.

B. Do you have any idea how complex a robot's mind has to be and how much information you have to feed it before it can even figure out what a 'human' is? This kind of high-level instruction would be ridiculously ineffective.

C. My Shepard would probably just shoot whoever put those restrictions in.


B. While direct input can help speed things along at the start, a true intelligence would learn like a child. If you have to type every bit of knowledge into its "consciousness", then it wouldn't be alive or conscious. It would just be a fancy industrial robot.
Also, there are ways to limit data; the human brain limits data. You have the base knowledge of your surroundings, and that doesn't change; what happens is that you add new connections as your knowledge and experience grow.
You don't have to make a new file every time you learn something new; you just make a few new connections to past experiences and add whatever lacks past experiences to the grid.
Which is why certain smells, words or situations will remind you of other things. Sometimes the cross-references can be a little weird.
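
A rough sketch of that "new connections instead of new files" idea; the concepts and links are invented for illustration:

```python
# Learning modelled as adding links between concepts rather than new files.
from collections import defaultdict

memory = defaultdict(set)   # concept -> set of associated concepts

def learn(new_concept, related_to):
    """Store a new concept only as links to things already known."""
    for known in related_to:
        memory[new_concept].add(known)
        memory[known].add(new_concept)   # associations work both ways

learn("grandmother's kitchen", related_to=["cinnamon smell", "winter"])
learn("holiday trip", related_to=["winter", "train station"])

# A cue recalls whatever it is connected to -- including odd cross-references.
print(memory["winter"])   # {"grandmother's kitchen", "holiday trip"}
```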
When programming computers, programmers often create whole new modules or files for everything. Typing it in manually is slow, but doing it at the speed of an electron is fast.
It's much easier to use your imagination to paint a scenario than to go into a graphics program and recreate it. One takes seconds; the other might take weeks or months to reach the same level.

That's what will differentiate the way AIs operate from the industrial robots people think of when they hear "robot".