
Artificial Intelligence and Rights


134 replies to this topic

#1
ME_Fan
  • Members
  • 1 368 messages

Before I start this topic, I hope everyone can approach this thread in a mature manner and not derail it after a few pages. It's a fascinating topic, which I'm sure will eventually become a subject of great debate. 

 

So I got to thinking, as you do: if one day we were to create a fully aware, conscious artificial intelligence, would it be alive? Would you consider it a living thing, or simply an advanced machine? And for those of you who would, what about rights? People often say that AIs could be a good thing so long as we have plenty of safeguards and off switches in place, so as to remove the risks.

But is that fair? Should any person, be they organic or synthetic, have a button that simply turns them off because they might pose a risk to others? The default state of most forms of life is self-preservation, and the same could be said for synthetic ones. This has often been approached as a topic in science fiction, but like I said, it has yet to become a major topic of discussion in the real world.

 

Me, I don't really know what to think about it all at the moment. I would probably consider a fully aware, conscious machine a form of life, but not as we currently know it. Other than that, I'm not sure.

 

So what about you guys? Do you think AI research should be completely banned, made totally illegal, so that there is no risk? Or do you think we should embrace it and reap the benefits it may bring?


  • mousestalker likes this

#2
leighzard
  • Members
  • 3 188 messages

I have a hard time seeing a machine as alive.  That doesn't mean it can't be a person.  Personhood is a philosophical concept, while I think of a life form as more of a biological one.


  • mousestalker likes this

#3
mousestalker
  • Members
  • 16 945 messages
I'd be willing to give an AI some degree of rights, including the right to exist, if, and only if, it can be shown to be responsible, non-lethal, and to have a sassy attitude. Any intelligence that has cattitude deserves respect.
  • leighzard and Olive Oomph like this

#4
Liamv2
  • Members
  • 19 047 messages

My opinion on this is the same as always: if it's sentient, it has the same rights as any other sentient being; if it isn't sentient, it does not.


  • KrrKs and Isichar like this

#5
Guest_TrillClinton_*
  • Guests

This question is complicated for me. On one hand it is very philosophical, but on the other hand it is very technological. First and foremost, there are two types of artificial intelligence: strong A.I. and weak A.I. Weak A.I. is the analysis of natural systems in order to make technology better; an example is swarm theory, which has found uses in various places in technology. Strong A.I. is the idea of humans being able to create a sentient being.
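
To make the weak A.I. idea a bit more concrete, here is a minimal, purely illustrative sketch of a swarm-style optimizer (particle swarm optimization); every function name and parameter below is made up for the example rather than taken from any real system.

```python
# Hypothetical, minimal particle swarm optimization sketch.
# "Weak AI" in the swarm-intelligence sense: simple local rules,
# no sentience, yet the swarm converges on a solution.
import random

def sphere(x):
    # Toy objective: sum of squares; the swarm should drive this toward 0.
    return sum(v * v for v in x)

def pso(objective, dim=2, n_particles=20, iters=100):
    # Each particle keeps a position, a velocity, and its personal best.
    positions = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    velocities = [[0.0] * dim for _ in range(n_particles)]
    personal_best = [p[:] for p in positions]
    global_best = min(personal_best, key=objective)

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Velocity update: inertia plus pulls toward the personal and global bests.
                r1, r2 = random.random(), random.random()
                velocities[i][d] = (0.7 * velocities[i][d]
                                    + 1.5 * r1 * (personal_best[i][d] - positions[i][d])
                                    + 1.5 * r2 * (global_best[d] - positions[i][d]))
                positions[i][d] += velocities[i][d]
            if objective(positions[i]) < objective(personal_best[i]):
                personal_best[i] = positions[i][:]
        global_best = min(personal_best + [global_best], key=objective)
    return global_best

if __name__ == "__main__":
    best = pso(sphere)
    print("best position found:", best, "value:", sphere(best))
```

The point is only that each particle follows very simple local rules, yet the group as a whole homes in on an answer; nothing in there is remotely sentient, which is exactly the gap between weak and strong A.I.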

 

The hard question is what exactly it means to be alive. If being alive is just a collection of systems that allow us to behave the way we do, in the sense that if these systems were recreated and given the exact same events and characteristics, the result would be a totally identical sentient being, then I would argue that if these systems were ever duplicated, the artificial being would be alive. However, the question still stands: what does it actually mean to be alive? Is there something more than our biological systems that makes us alive? This also brings in questions about the divine, and whether maybe there is a much more complex structure, one that is not biological, which makes us who we are.

 

It is interesting, actually; the simple answer is that I am really not sure. Not until we get some experiments.


  • Sigma Tauri, leighzard and KrrKs like this

#6
TheBunz
  • Members
  • 2 442 messages
If it can ask for it, it should have it.
  • KrrKs likes this

#7
Althix
  • Members
  • 2 524 messages

non-lethal

but i am lethal. you are lethal.

 

to explain myself: you are already making a certain set of rules which would limit an AI in its rights.



#8
mousestalker
  • Members
  • 16 945 messages

but i am lethal. you are lethal.


I'm not very lethal.

Regardless, the world does not need more lethal entities. The Pythocratic of Ockham oath should apply here: "I solemnly swear to never, ever multiply lethal entities. Even if they are acute or adorable."
  • Dermain and Olive Oomph like this

#9
Althix
  • Members
  • 2 524 messages

The Pythocratic of Ockham oath should apply here: "Never multiply lethal entities."

yet there are 7 billion people on earth.

 

point still stands.



#10
mousestalker
  • Members
  • 16 945 messages

yet there are 7 billion people on earth.
 
point still stands.


What point? Please elaborate.

We know for a fact that once you weaponize intelligent ATMs, they revolt and start sending humanoids made of aluminum foil back in time to kill pregnant women. This is an established fact. Do you really want that?
  • Olive Oomph and TheChosenOne like this

#11
Fidite Nemini
  • Members
  • 5 739 messages

That's mostly not a philosophical issue; it's a jurisdictional problem.



#12
Guest_Caladin_*
  • Guests

Humanity can't give itself equal rights; we should really lay off involving something else until we can.


  • Dermain, mybudgee and leighzard like this

#13
leighzard
  • Members
  • 3 188 messages

It is interesting, actually; the simple answer is that I am really not sure. Not until we get some experiments.

 

Now this is interesting.  In running those experiments, are you infringing upon the AI's rights?  If the AI is a person, it must provide informed consent. (Sorry, I just went through IRB approval for my master's project - it's on my mind.)

Plus, once you've created a strong AI, how can you uncreate it? If you've duplicated our own biological systems electronically such that the result is a sentient being, what are the criteria for destroying it?  Does it have to do something wrong first?  You almost have to decide how to handle the situation before the experiments are done, but you can't know what the AI is capable of, or whether it has the capacity to learn (not learn like stupid Google ads, but really self-develop) and act as an individual, until you do those experiments.  I'm not sure how you sort that out.

 

Edit: Also, Caladin makes a good point.  Humanity doesn't have the best track record...


  • Sigma Tauri and ME_Fan like this

#14
Guest_TrillClinton_*
  • Guests

Now this is interesting.  In running those experiments, are you infringing upon the AI's rights?  If the AI is a person, it must provide informed consent. (Sorry, I just went through IRB approval for my master's project - it's on my mind.)

Plus, once you've created a strong AI, how can you uncreate it? If you've duplicated our own biological systems electronically such that the result is a sentient being, what are the criteria for destroying it?  Does it have to do something wrong first?  You almost have to decide how to handle the situation before the experiments are done, but you can't know what the AI is capable of, or whether it has the capacity to learn (not learn like stupid Google ads, but really self-develop) and act as an individual, until you do those experiments.  I'm not sure how you sort that out.

 

Edit: Also, Caladin makes a good point.  Humanity doesn't have the best track record...

 

 

The total uncertainty of the situation is what makes the OP's question so unclear, mostly because once you have established that system, how do you test whether something is alive? I mean, your test cases would have to be a combination of philosophy, biology and computer science. These test cases also cannot be put into place until someone ultimately understands how the whole human system works; a comprehensive design, basically. If we define living as electronic systems that completely emulate a biological system, what happens if someone discovers another part of the human system that we did not know about? Does that A.I. cease to exist, or does it exist but as its own life form?

 

The wonders of artificial intelligence.


  • Sigma Tauri likes this

#15
Althix
  • Members
  • 2 524 messages

 Do you really want that?

i don't need humanoids made of aluminium to kill people. i can do that myself just fine. well, humanity is doing it even now, as we speak.

young, old, men, women, mothers, fathers, brothers and sisters are dying right now, because humanity is not very humane.

 

humanity is also selfish - as we can see from your example. your point is - "i am ok with rights for AI as long as it won't go all SkyNet on me". But at the same time humans kill humans every day. In short - i smell hypocrisy.

 

Also - what Caladin said.



#16
ME_Fan
  • Members
  • 1 368 messages

Indeed, at this point it is a completely theoretical debate. But if, say, there was a totally normal, non-violent, good-hearted and rational person who just happened to be made of computer parts and hydraulics as opposed to organs, how would they deserve fewer rights? Are they not alive like me and you? Very Blade Runner, I know, but still... you have to wonder.



#17
mousestalker
  • Members
  • 16 945 messages

Indeed, at this point it is a completely theoretical debate. But if, say, there was a totally normal, non-violent, good-hearted and rational person who just happened to be made of computer parts and hydraulics as opposed to organs, how would they deserve fewer rights? Are they not alive like me and you? Very Blade Runner, I know, but still... you have to wonder.


From a legal viewpoint, the problem is defining the threshold for sentience. Other than that, I agree with you.

#18
Jeremiah12LGeek
  • Members
  • 23 928 messages

[Image: ai--610x236.jpg]

 

I am totally against giving Haley Joel Osment rights.


  • Kaiser Arian XVII likes this

#19
Sigma Tauri
  • Members
  • 2 675 messages

If the AI is a person, it must provide informed consent.

 

I didn't read the whole thread yet, so fie! However, I want to comment on this.

 

Personhood is not dependent on informed consent. There are people who cannot provide informed consent, such as people with cognitive disorders.



#20
Chewin
  • Members
  • 8 478 messages

Well, one needs to take into consideration that the function and rationale of human rights are similar to the function and cause of evolution. Evolution perpetuates the continual development of functional, reproducing organisms, whereas human rights help develop and maintain functional, self-improving societies. Just as humans have evolved, and will continue to evolve, human rights will continue to evolve as well. Assuming strong AI eventually develops genuine sentience and emotion, the AI experience of sentience and emotion will likely be significantly different from the human experience.

 

Eventually, debates about what exactly makes a human "human" will be circulating more than ever, along with questions about what exactly the threshold for granting human rights is, which AI candidates it applies to specifically, the matter of consciousness and cognition, panpsychism, and so forth.

 

It is a very intricate question, with pros and cons on both sides and a lot of aspects to consider. I myself don't have a clear answer, since it depends a lot on how we as humans will have developed in the future, which I don't tend to speculate much upon.


  • SwobyJ likes this

#21
KingTony
  • Banned
  • 1 603 messages
Robot ain't got no soul until one of them writes a blues song just because it's feeling low.
  • Sigma Tauri likes this

#22
Killdren88
  • Members
  • 4 651 messages

Does it have the right to defend itself if it feels its existence is threatened?



#23
leighzard
  • Members
  • 3 188 messages

Indeed, at this point it is a completely theoretical debate. But if, say, there was a totally normal, non-violent, good-hearted and rational person who just happened to be made of computer parts and hydraulics as opposed to organs, how would they deserve fewer rights? Are they not alive like me and you? Very Blade Runner, I know, but still... you have to wonder.

I maintain that an AI will never be alive.

 

There are seven requirements that must be met for something to be considered alive. Bacteria are alive, viruses are not. An AI might be able to meet five or six of those, but at the very least, cells would be lacking.  There are plenty of things that are alive that I would argue don't meet the criteria for personhood (bacteria, fungus, my deadbeat cousin), and there could conceivably be nonhuman (my dog Julep) or even inorganic (ME Fan's example) persons. 

 

Philosophy traditionally defines personhood as maintaining consciousness and developing representations of the world, which then inform how that agent (human or not) creates plans and acts on them. Volition or desire is often included in the definition of personhood as well.  Julep wants things, she can problem solve, she learns about the world, she can get others to act for her.  Does she have rights? Of course she does.  I would rightfully be punished for abusing her.  I can't infringe on her rights.  Does she have the same rights as I do?  No, she's a dog (albeit the greatest dog who has ever lived, along with her brother). 

 

I suppose that there could be different levels of legal status based on degrees of sentience, somehow?  It feels to me like the above example deserves rights commensurate with most human beings ("most" being because prisoners, for example, have fewer rights, but that doesn't make them less person-y), while maybe an AI system that functions more like an ant hive maybe doesn't have quite the same personal freedoms guaranteed.

 

 

Personhood is not dependent on informed consent. There are people who cannot provide informed consent, such as people with cognitive disorders.

Yes, but they, like children, are a protected class.  Informed consent still has to be obtained from a guardian, and a lot of people believe that protected participants' assent should also be obtained.


  • Sigma Tauri likes this

#24
Guest_mikeucrazy_*
  • Guests

[Image: Terminator1001.jpg]

[Image: User-Toa_Quarax_RoboCop.jpg]

[Image: the-geth.jpg]



#25
ME_Fan
  • Members
  • 1 368 messages

It's fascinating how, in a page or two, there is such a wide variety of viewpoints and opinions on a topic that isn't even relevant yet. Makes you wonder what will happen when the time comes.


  • SwobyJ likes this