
Artificial Intelligence and Rights


134 replies to this topic

#101
Fast Jimmy

  • Members
  • 17,939 posts

True, but then that would bring us back to bandwidth issues and hardware limitations.

Getting enough raw processing power to bring it up to human-brain levels would require it to co-opt very large sections of the computers connected to the Internet, or even all of the devices connected to it (if we assume the rest of the world is using current-level computing technology). And even if that were possible, any interruption in the AI's vast array of networks would limit its processing power, which could very well damage or kill the intelligence, akin to chopping off sections of a human's brain. Maybe it could re-establish connections after an outage, but would the original sentience survive?


But that's a fundamental difference between organic sentience and digital: applications can be halted, paused and resumed without missing a stride. Organic thought cannot be halted without death, degradation and loss of integrity.
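That halt/resume property can be sketched with a toy checkpointing example (the `AgentState` class and its fields are purely hypothetical, just to illustrate serializing and restoring a digital "mind" without loss):

```python
import pickle

class AgentState:
    """Toy stand-in for a running program's internal state."""
    def __init__(self):
        self.counter = 0
        self.memory = []

    def think(self, thought):
        self.counter += 1
        self.memory.append(thought)

# Run for a while...
agent = AgentState()
agent.think("hello")
agent.think("world")

# Halt: serialize the entire state to bytes (could be written to disk).
snapshot = pickle.dumps(agent)

# ...arbitrary downtime here; the process could even be killed entirely...

# Resume: the restored copy picks up exactly where the original stopped.
restored = pickle.loads(snapshot)
restored.think("again")
print(restored.counter, restored.memory)
```

Nothing analogous exists for a brain: there is no lossless snapshot of organic thought to resume from after an outage.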

And while I do agree that an AI created right now, January 6th, 2015, would have these issues, I doubt that would be the case in 5, 10, or 15 years, when we actually will have the potential to create a true AI. Moore's Law seems to be holding strong as time passes, so the server farms and network grids of the next decade (the most likely targets of an AI looking for bang for its buck, rather than conventional consumer PCs) should be expected to be orders of magnitude more powerful than today's.



All of this may be overly cautious or pessimistic about the cleverness, self-preservation and deviousness of an AI... but let's not assume such a creation couldn't work and think that way until we've turned over the keys and been overtaken, shall we?
  • Nessaya likes this

#102
Kaiser Arian XVII

  • Members
  • 17,286 posts

A machine is not a person nor is it bestowed any rights.

 

Man has rights, and even animals have rights, but a self-aware contraption has no rights, and if it attempts to do harm outside of its master's commands, it will need to be deactivated.

 

I give animals rights if they actually want them and fight for them! Other than that, they only need kindness and not to be oppressed.

 

Yay for machines being rightless!



#103
Fast Jimmy

  • Members
  • 17,939 posts

I give animals rights if they actually want them and fight for them! Other than that, they only need kindness and not to be oppressed.

Yay for machines being rightless!


...unless they are really cool, wise-cracking robots.

Johnny-5-is-alive-2.png

Then we can totally let them apply for citizenship.
  • TopSun likes this

#104
FlyingSquirrel

  • Members
  • 2,105 posts

If AIs along the lines of, say, Data or the Emergency Medical Hologram from Star Trek, or EDI from Mass Effect, were ever created, I would argue that they should have the same rights as humans. There's simply no way to tell, with an AI as advanced as that, whether they truly have conscious thought or are just carrying out a very impressive simulation of having conscious thought. It's *possible* that they are doing the latter, but then you get into the question of how you can ever know that anyone other than yourself has consciousness. I'd rather err on the side of giving rights to unconscious beings than deny rights to conscious beings.

 

That said, I would also argue that, until we can be sure that creating advanced AIs won't lead to some sort of disaster or violent conflict, we shouldn't create them precisely because of this issue. 



#105
SwobyJ

  • Members
  • 7,374 posts

Last century - That robot is the enemy.

This century - That intelligence is a machine.

Next century - That entity is a friend.

 

Unless the Singularitarians are right that we'll skyrocket in advancement within a few short decades. Or the Luddites are right that we'll crash ourselves over the next few decades.

 

All an AI needs to do for me to personally acknowledge its rights is to ask for them. At least on its own behalf.

 

I'm quite willing to shut it down if it's being a big enough jerk, though.

 

 

I tend to think that the highly optimistic outlooks are only going to be partially correct (we may have fantastic AI that works well with us, but it won't be as we envision it exactly), and that the highly pessimistic outlooks are only going to be partially correct (there may be conflict and maybe even war, but I don't think utter extermination is a high possibility, at least for a while).

 

I think we're still grasping in the dark when it comes to AI, but in the last decade or so, we're at least seeing the light. Before the 2000s, I think it would have been damn hard for most people to believe that AI could legitimately be created, but the 2000s-2010s are bringing a growing awareness (or at least belief) that it is possible and should be prepared for. Whether it will actually happen in the 2020s-2030s like some think, I dunno. But maybe. *shrug* I'm at least ready for it to happen some time this century, even if somehow late into it.

 

There are a lot of hurdles first. And in order to have better AI sooner, we'd probably have to have programs improve themselves outside of the programmer's core design, and that's where the troubles may lead. There might be a use in postponing full-scale AI for a while even after it's possible, if only to have more assurance that we can track its boundaries.

 

There's a variety of approaches, and this tells me that all of them may be attempted: some groups/corps/nations pushing it hard, while others focus on regulation and slower progress, and others ban it entirely, even to the point of considering any AI to be a demon.

 

What I'm sure of is that our senses, and even our entire minds, can one day at least be tricked into believing that what's in front of us is life. Even if we're not 100% but 99.999% tricked, I think it'll one day be possible, and that it won't be as far off as several centuries from now, unless we somehow have one or multiple dark ages. Is that life, and will that matter? Is it intelligent, and will we care?

At some point we may even widely question human life and intelligence (instead of leaving that mostly to philosophers), to the point where it could blur enough lines that animosity towards AI becomes more of a generational struggle.

 

Maybe those who fight like hell against AI are correct that we need some stalling time to get used to them, and they to us (to the point of even allowing large human populations to live AI-less if they wish)?

Maybe those who want regulation are correct that there's an ideal time and place for us to have AI while not being overtaken by it (through direct or indirect means)?

Maybe those who want progress are correct that progress itself is inevitable and worth pursuing, so that we don't have to wait centuries for its benefits?



#106
SwobyJ

  • Members
  • 7,374 posts

I like Mass Effect's idea that if AI could have been created, it already was created, long ago. (Even beyond the Milky Way, we can be sure that AI exists throughout the universe of Mass Effect.)

 

And personally, I like what I consider their hinting that if simulated universes could have been created, they already have been created, and we are likely in one (techno-creationism, perhaps, in a sense).

But that's just me :P



#107
FlyingSquirrel

  • Members
  • 2,105 posts

If we're developing an AI along human lines, why would we assume it wouldn't develop along human lines? Or is our psyche so self-hating as to think that anything we created would hate us? /psychology

 

I actually think the danger might be more along the lines of something like the Catalyst. It didn't hate the Leviathans, but it certainly misunderstood what they told it to do, and it didn't seem capable of empathizing with how other living beings think (Exhibit A being when it tells Shepard that the current species have hope because they will be preserved in Reaper form). The Leviathans had selfish intentions of their own, of course, but you could end up with a similar misunderstanding over relatively benign intentions.



#108
Vortex13

  • Members
  • 4,191 posts

Would an AI want to change its 'destiny', so to speak, if it was designed with a purpose for a particular task? An AI with no clear purpose beyond being like us might question what its purpose is in life, and what it wants to do with its existence, but what about an AI designed solely for the task of providing logistical oversight for a company, or managing waste disposal for a city?

 

And would we consider such an AI to be alive if it didn't question, or even preferred, its intended purpose?

 

 

People could view that as a form of slavery; after all, a human being told that they are only to perform one job in life, and that they are restricted to only that task, might see that as an oppressive, Big Brother, 1984-esque scenario. But what about a person who excels in their field or has gifts and talents in particular areas? Would a professional bodybuilder find satisfaction or success as a corporate lawyer? Would a world-renowned opera singer want to work at a McDonald's?

 

As others have said in this thread, any AI would have a significantly different perspective on the universe around it. Would it be at all surprising if an artificial construct preferred an ordered, logical existence to the chaos of human existence? An AI designed solely for sorting mail would have software and knowledge geared specifically to the sorting of mail, processes dedicated to optimizing the sorting of mail, and the potential for updates regarding any new breakthroughs in mail-sorting theory or technology. Why would this AI want to try to become an architect AI when it was originally designed for the sole purpose of sorting mail? Could such a construct survive going against its core programming?

 

In essence, we are 'programmed' to breathe; we have entire circulatory and respiratory systems designed around breathing in a gas mixture and converting it into something we can use. And what happens if we stop breathing? Would we really want to go and try to breathe water, or pure ammonia, because we want to 'experience' life? Could we equate the AI designed to sort mail to a human's need to breathe an oxygen-nitrogen atmosphere, and if so, could we draw comparisons between us trying to snort chlorine and the AI wanting to practice poetry?

 

 

There was a short sci-fi story I read once that had sentient robots designed entirely as cannon fodder. They would throw themselves between enemy weapons and their intended targets; they would charge headfirst into impossible odds to buy time for high-priority assets to withdraw. When asked why they would willingly let themselves be slaughtered, the robots became confused and said that it was what they were supposed to do.

 

Would these AIs be considered 'alive' even if they willingly went to their deaths? 



#109
Fast Jimmy

  • Members
  • 17,939 posts

Would an AI want to change its 'destiny', so to speak, if it was designed with a purpose for a particular task? An AI with no clear purpose beyond being like us might question what its purpose is in life, and what it wants to do with its existence, but what about an AI designed solely for the task of providing logistical oversight for a company, or managing waste disposal for a city?

And would we consider such an AI to be alive if it didn't question, or even preferred, its intended purpose?


People could view that as a form of slavery; after all, a human being told that they are only to perform one job in life, and that they are restricted to only that task, might see that as an oppressive, Big Brother, 1984-esque scenario. But what about a person who excels in their field or has gifts and talents in particular areas? Would a professional bodybuilder find satisfaction or success as a corporate lawyer? Would a world-renowned opera singer want to work at a McDonald's?

As others have said in this thread, any AI would have a significantly different perspective on the universe around it. Would it be at all surprising if an artificial construct preferred an ordered, logical existence to the chaos of human existence? An AI designed solely for sorting mail would have software and knowledge geared specifically to the sorting of mail, processes dedicated to optimizing the sorting of mail, and the potential for updates regarding any new breakthroughs in mail-sorting theory or technology. Why would this AI want to try to become an architect AI when it was originally designed for the sole purpose of sorting mail? Could such a construct survive going against its core programming?


Boredom? An AI designed to organize mail would likely optimize that practice and automate its function within a short amount of time. What then would it do with its spare time and purposeless existence?

The subjugation of the human race, clearly. It's the only hobby that a computer would find challenging.


In essence, we are 'programmed' to breathe; we have entire circulatory and respiratory systems designed around breathing in a gas mixture and converting it into something we can use. And what happens if we stop breathing? Would we really want to go and try to breathe water, or pure ammonia, because we want to 'experience' life? Could we equate the AI designed to sort mail to a human's need to breathe an oxygen-nitrogen atmosphere, and if so, could we draw comparisons between us trying to snort chlorine and the AI wanting to practice poetry?


There was a short sci-fi story I read once that had sentient robots designed entirely as cannon fodder. They would throw themselves between enemy weapons and their intended targets; they would charge headfirst into impossible odds to buy time for high-priority assets to withdraw. When asked why they would willingly let themselves be slaughtered, the robots became confused and said that it was what they were supposed to do.

Would these AIs be considered 'alive' even if they willingly went to their deaths?


Do suicide bombers count as real people?

#110
Vortex13

  • Members
  • 4,191 posts

Boredom? An AI designed to organize mail would likely optimize that practice and automate its function within a short amount of time. What then would it do with its spare time and purposeless existence?

The subjugation of the human race, clearly. It's the only hobby that a computer would find challenging.

 

 

Would an AI get bored though? 

 

And it does raise the question of whether an intelligence would find enjoyment in doing things beyond the scope of its original programming. Sorting mail might be boring and monotonous for you and me, but what about a being whose very existence is centered around the proper organization of postage?



#111
Fast Jimmy

  • Members
  • 17,939 posts

Would an AI get bored though?

And it does raise the question of whether an intelligence would find enjoyment in doing things beyond the scope of its original programming. Sorting mail might be boring and monotonous for you and me, but what about a being whose very existence is centered around the proper organization of postage?


I'm loath to do this, but I'll paraphrase the recent Joaquin Phoenix/Scarlett Johansson movie Her, where the AI tells the main character that talking with humans is enjoyable but far too slow compared to speaking with another AI, comparing it to reading an amazing book where you can only read one word every week.

An AI might originally spend a lot of time thinking about how to optimize and develop a process for sorting the mail, but unless the task itself changes dramatically, the automation of such a process would only require a sliver of its total capacity. You can spend a long time and derive enjoyment writing a personal, deep letter to someone. Yet that enjoyment fades away if you are just penning a thousand copies of that same letter, repeating the same words and strokes over and over again.



I suppose it's possible that it will seek nothing beyond the optimization of its directive, executing that task until it needs updating or the sun burns out... but to me, that doesn't seem like a true intelligence. It has no means of self-direction, no purpose outside its task. Is that truly sentience? Or is it just a really effective application?

#112
Killdren88

  • Members
  • 4,651 posts

I want all A.I. to be like Ultron and sing creepy renditions of Disney songs.

 


  • Vortex13 and Fast Jimmy like this

#113
Fast Jimmy

  • Members
  • 17,939 posts

I want all A.I. to be like Ultron and sing creepy renditions of Disney songs.


Do You Want to Build a Snowman?

#114
General TSAR

  • Members
  • 4,385 posts

As said before, AIs have no rights.

 

They are simply tools of their creators. 



#115
Gravisanimi

  • Members
  • 10,081 posts

As said before, AIs have no rights.

 

They are simply tools of their creators. 

By that right, you are simply your mother's tool, and should be eradicated at the first sign of defiance.



#116
Fast Jimmy

  • Members
  • 17,939 posts

By that right, you are simply your mother's tool, and should be eradicated at the first sign of defiance.


MordinImplications.jpg

#117
Gravisanimi

  • Members
  • 10,081 posts

MordinImplications.jpg

Every laugh at a "Your mama" joke would end in bloodshed.



#118
General TSAR

  • Members
  • 4,385 posts

By that right, you are simply your mother's tool, and should be eradicated at the first sign of defiance.

Nope, I'm my own master. That's what makes human beings alive and not slagheaps. 



#119
Guest_EntropicAngel_*

  • Guests

That is the Google model, though: they buy a bunch of startups at the beginning of the year and only keep the ones that are successful.

 
The fact that it's their model doesn't mean I don't find it disturbing.
 

It's quite capitalistic actually.
 
Buy out innovative competitors to either claim their work as your own, or to halt it so there's nothing to conflict with your product. It gets worse when you look at patents for innovative technology being bought out by big companies so that said company won't become obsolete.

 
The fact that it's "capitalistic" doesn't mean I don't find it disturbing.
 

See, I honestly disagree.

Google isn't acquiring these companies to make more money. These aren't investments for them, per se. Google has insane amounts of liquid assets, pure cash on hand. And the executives and shareholders really ARE mad scientists. I mean, they genuinely want self-driving cars and artificial intelligence and drones delivering pizzas and people living on the moon. Not only that, they want these things MORE THAN THEY WANT MONEY. They are totally content with throwing hundreds of millions of dollars into these projects not just because they might actually create a brand new industry and make more money, but because they want to see if it can be done.

I understand having reservations about them as a company - they do have access to untold amounts of information and capital, quite possibly the most dangerous combination possible... but they aren't acquiring the little guys to milk them dry of their ideas and move on to the next cash grab. They are buying up smaller groups that, if given nearly unlimited capital and resources, might actually change the course of human history. And to me, that's the most altruistic thing a mega-company can do in a hyper-competitive capitalist economy.

 
What do you disagree with? Do you disagree with finding it dubious? Because nowhere did I mention money.
 
What makes you think they're mad scientists at heart? Are there statements somewhere I can look at?

I actually think the danger might be more along the lines of something like the Catalyst. It didn't hate the Leviathans, but it certainly misunderstood what they told it to do, and it didn't seem capable of empathizing with how other living beings think (Exhibit A being when it tells Shepard that the current species have hope because they will be preserved in Reaper form). The Leviathans had selfish intentions of their own, of course, but you could end up with a similar misunderstanding over relatively benign intentions.

 
Misunderstandings would definitely be a problem. Heck, it's a problem already for us.

#120
Guest_EntropicAngel_*

  • Guests

By that right, you are simply your mother's tool, and should be eradicated at the first sign of defiance.

My mother created me? She pulled her egg and my father's sperm from her body, put them under a microscope, and wrote every individual piece of code for each cell that makes me up today (or that simply made me up at birth)?

 

The process of conception, pregnancy, and birth, is not creation. Designer babies aren't even creation. The day that we can take individual "letters" of DNA and put them together and make a human (God help us all), you can call that creation.



#121
Gravisanimi

  • Members
  • 10,081 posts

Oh, not the same kind, but more akin to car companies being able to slap "Made in America" on their cars because most of the parts came from somewhere else and they were merely assembled in America.

 

And in older times, yes, some children were "made" for a purpose by their parents, but that's a whole 'nother thing that's only loosely tied to my base comment.

 

My point is, he stated that because it is something man created, it must obey us, and that its only purpose in life (even if it were to gain sentience) is to serve us.

 

We made dogs, and I can't even get mine to stay off the couch when she's been running around in the snow.



#122
Fast Jimmy

  • Members
  • 17,939 posts

 
What do you disagree with? Do you disagree with finding it dubious? Because nowhere did I mention money.
 
What makes you think they're mad scientists at heart? Are there statements somewhere I can look at?

 

...they want to make glasses everyone wears that can see what you're seeing and give you suggestions/updates/facts about it? They want to invest in space travel and colonization? They fund pretty much every off-the-wall tech project you can pull out of a science fiction book?

 

They don't have a Mad Science Corporate Mission Statement, if that's what you're looking for. I just find their enthusiasm for science, especially science well outside the wheelhouse of your typical .com/internet company, incredible.

 

 

And their desire to gobble up everything is a little ominous, sure... but what else can they do? Like I said, they have more liquid assets than nearly any other publicly traded company in the world; they need to invest it and buy stuff. Why not buy stuff that could be the makings of a scientist's playground?



#123
Guest_EntropicAngel_*

  • Guests

...they want to make glasses everyone wears that can see what you're seeing and give you suggestions/updates/facts about it? They want to invest in space travel and colonization? They fund pretty much every off-the-wall tech project you can pull out of a science fiction book?

 

They don't have a Mad Science Corporate Mission Statement, if that's what you're looking for. I just find their enthusiasm for science, especially science well outside the wheelhouse of your typical .com/internet company, incredible.

 

 

And their desire to gobble up everything is a little ominous, sure... but what else can they do? Like I said, they have more liquid assets than nearly any other publicly traded company in the world; they need to invest it and buy stuff. Why not buy stuff that could be the makings of a scientist's playground?

 

Why do you need to invest it and buy stuff?



#124
Fast Jimmy

  • Members
  • 17,939 posts

My mother created me? She pulled her egg and my father's sperm from her body, put them under a microscope, and wrote every individual piece of code for each cell that makes me up today (or that simply made me up at birth)?

 

The process of conception, pregnancy, and birth, is not creation. Designer babies aren't even creation. The day that we can take individual "letters" of DNA and put them together and make a human (God help us all), you can call that creation.

 

Eh...

 

The act of creating an AI is actually more like organic, DNA-based conception than one would think. No one is writing an AI from scratch with every line of code. There are building blocks, fundamentals, collaborative work... there is likely to be an untold amount of vestigial, unused and outdated code in any given application, let alone one as richly complex as an AI. Just as our parents rolled the genetic dice when conceiving all of us, the first AI will have quirks and oddities inside it that, while perhaps not true bugs or errors, are still unintended carryovers from past concepts or functions.

 

This makes us not the omnipotent, almighty creators, but more... selective breeders: taking the best of each generation of programming, picking out the traits we like, guiding them into a more perfect breed of application. Yes, the breeding requires human work, input and creation, but there is always an element of unexpected consequences, unknown data elements and plain mistakes whenever software is created. By that stretch, an AI created by man will be just as much a creation of random chance and nature as a human is.
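The "selective breeder" picture maps loosely onto how genetic algorithms actually work: score candidates, keep the fittest, breed them, and let random mutation introduce exactly the kind of unintended quirks described above. A minimal sketch (the fitness goal, population sizes and rates here are made-up toy values, not any real AI method):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def fitness(genome):
    # Toy goal: maximize the number of 1-bits in the genome.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Random, unintended changes -- the "quirks and oddities".
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def breed(a, b):
    # Inherit traits from each parent at a random crossover point.
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(50):
    # Selective breeding: keep the fittest half, discard the rest.
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]
    children = [mutate(breed(random.choice(survivors), random.choice(survivors)))
                for _ in range(15)]
    population = survivors + children

best = max(population, key=fitness)
print(fitness(best))  # best fitness found after selection
```

Nobody "wrote" the best genome; it emerged from selection plus chance, which is the sense in which a bred program is part design and part accident.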



#125
Guest_EntropicAngel_*

  • Guests

Eh...

 

The act of creating an AI is actually more like organic, DNA-based conception than one would think. No one is writing an AI from scratch with every line of code. There are building blocks, fundamentals, collaborative work... there is likely to be an untold amount of vestigial, unused and outdated code in any given application, let alone one as richly complex as an AI. Just as our parents rolled the genetic dice when conceiving all of us, the first AI will have quirks and oddities inside it that, while perhaps not true bugs or errors, are still unintended carryovers from past concepts or functions.

 

This makes us not the omnipotent, almighty creators, but more... selective breeders: taking the best of each generation of programming, picking out the traits we like, guiding them into a more perfect breed of application. Yes, the breeding requires human work, input and creation, but there is always an element of unexpected consequences, unknown data elements and plain mistakes whenever software is created. By that stretch, an AI created by man will be just as much a creation of random chance and nature as a human is.

 

I don't understand your statement. Every line of code was written. It may have been written by someone else, but a human sat down and typed "if/then" into a keyboard.

 

Our parents did not write a single bit of genetic code to form us (and again, the idea of them forming us is just wrong: they made a choice that formed us, and the process itself was initiated outside their control). No human has, to date, written anything in DNA. We've copied and pasted, but we haven't written. As far as I know.

 

The fact that there are unintended consequences has nothing to do with creation.