

Artificial Intelligence and Rights


134 replies to this topic

#51
Gravisanimi
  • Members
  • 10,081 posts

Maybe I'm thinking of too advanced an A.I., but I didn't quite describe that right; I just woke up, and I'm using a tablet instead of a computer.

 

Teach it morals, don't program them. It can learn, you can teach.
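
(A loose sketch of the difference, totally hypothetical: "programming" morals is a pile of hand-written if-statements, while "teaching" looks more like handing a learner labeled examples and letting it induce the rule.)

# Toy "taught" agent, my own sketch and nothing more: it induces
# ok/wrong judgments from examples a teacher labels, instead of
# having the rules hard-coded.
from collections import defaultdict

class TaughtAgent:
    def __init__(self):
        # counts[feature][label]: how often the teacher labeled an
        # action with this feature as "ok" or "wrong"
        self.counts = defaultdict(lambda: defaultdict(int))

    def teach(self, features, label):
        for f in features:
            self.counts[f][label] += 1

    def judge(self, features):
        # score each label by how strongly taught examples support it
        scores = defaultdict(int)
        for f in features:
            for label, n in self.counts[f].items():
                scores[label] += n
        return max(scores, key=scores.get) if scores else "unknown"

agent = TaughtAgent()
agent.teach({"deceives", "harms"}, "wrong")
agent.teach({"harms"}, "wrong")
agent.teach({"helps", "honest"}, "ok")
print(agent.judge({"harms", "honest"}))  # "wrong" - the taught harm examples outweigh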

 

Not foolproof by any means, I'll submit to that.

 

Then again, why would we give any one creature enough power to wipe out significant portions of humanity anyway?



#52
Fast Jimmy
  • Members
  • 17,939 posts

Maybe I'm thinking of too advanced an A.I., but I didn't quite describe that right; I just woke up, and I'm using a tablet instead of a computer.

Teach it morals, don't program them. It can learn, you can teach.

Not foolproof by any means, I'll submit to that.

Then again, why would we give any one creature enough power to wipe out significant portions of humanity anyway?


How do you stop it? The only safeguards we could put in place would be laughable to it.

Do you know how long it would take for a self-aware intelligence to write a virus capable of penetrating any security system? Or to assess and counteract any attempt to restrain or control it? You are talking about a being that can think in terms of cause and effect scenarios exponentially faster than any human alive and which can navigate and develop programming as easily as you or I have a conversation.

Aside from locking it in a box with no network access at all, what hope could we have if it wanted to do whatever it wanted? And if we just lock it in a box, devoid of any sensation but with a brain that experiences and processes thousands of thoughts a second, wouldn't that infringe on the very rights we are talking about in this thread?

#53
Vortex13
  • Members
  • 4,191 posts

The Geth are fictional.

If you objectively look at our reality, you'd see that humanity cannot sustain itself long term. Resource consumption, increasing population, a propensity for destruction... we aren't well equipped to survive the next millennium. Heck, we're not that well equipped to survive the next CENTURY.

Science fiction almost universally has space travel solve most of these problems, usually with FTL technology that lets humanity expand across the stars. Yet if an AI sees that no such technology exists or is on the horizon, and sees our ever-growing threat to ourselves and the planet, it would quickly surmise that our numbers need to be drastically reduced. Of course, humanity would not submit to that willingly, so the answer is either a long, prolonged war or snuffing out as much of our species as possible, as quickly as possible, to prevent full-scale retaliation.


Again, that's really the only final conclusion any intelligence with all the necessary data and none of the existing sentiment would reach.

 

 

But that would assume that said AI wanted to protect the planet.

 

 

Why would an AI care about overpopulation, pollution or the extinction of the whales? Really, an AI would probably feel more 'comfortable' living on a barren asteroid or moon with total, unobstructed access to solar energy and no pesky biological matter getting in the way of its operations.

 

 

Which could lead back to an AI trying to wipe out humanity to remove hindrances to its existence, but building a rocket and blasting off into space is infinitely easier than trying to subjugate/destroy the Earth.



#54
Guest_TrillClinton_*
  • Guests

How do you stop it? The only safeguards we could put in place would be laughable to it.
Do you know how long it would take for a self-aware intelligence to write a virus capable of penetrating any security system? Or to assess and counteract any attempt to restrain or control it? You are talking about a being that can think in terms of cause and effect scenarios exponentially faster than any human alive and which can navigate and develop programming as easily as you or I have a conversation.
Aside from locking it in a box with no network access at all, what hope could we have if it wanted to do whatever it wanted? And if we just lock it in a box, devoid of any sensation but with a brain that experiences and processes thousands of thoughts a second, wouldn't that infringe on the very rights we are talking about in this thread?


Not really out of reach actually.

http://www.dailymail...ram-ITSELF.html

A.I. will probably be a big data application anyway.
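
(A crude toy of the idea, my own sketch and nothing like whatever Google is actually doing: "a program that writes a program" by searching candidate code against input/output examples.)

# Brute-force program synthesis in miniature: enumerate tiny
# expressions until one reproduces the desired behaviour, which is
# specified only as input/output examples (the "big data" part).
import itertools

examples = [(0, 1), (1, 3), (2, 5), (3, 7)]  # target: f(x) = 2*x + 1

ops = ["+", "-", "*"]
terms = ["1", "2", "3", "x"]

def candidates():
    # every expression of the form "a op b op c"
    for a, o1, b, o2, c in itertools.product(terms, ops, terms, ops, terms):
        yield f"{a} {o1} {b} {o2} {c}"

for expr in candidates():
    if all(eval(expr, {"x": x}) == y for x, y in examples):
        print("synthesized: f(x) =", expr)  # e.g. "1 + x + x"
        break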

#55
Fast Jimmy
  • Members
  • 17,939 posts

Not really out of reach actually.

http://www.dailymail...ram-ITSELF.html

A.I. will probably be a big data application anyway.


Fascinating. I love Google's mad science division.

#56
Guest_TrillClinton_*
  • Guests

Fascinating. I love Google's mad science division.


Google probably has the best developers in the world. They are insane. I wonder what it would be like if they worked on a game?

#57
Fast Jimmy
  • Members
  • 17,939 posts

But that would assume that said AI wanted to protect the planet.


Why would an AI care about overpopulation, pollution or the extinction of the whales? Really, an AI would probably feel more 'comfortable' living on a barren asteroid or moon with total, unobstructed access to solar energy and no pesky biological matter getting in the way of its operations.


Which could lead back to an AI trying to wipe out humanity to remove hindrances to its existence, but building a rocket and blasting off into space is infinitely easier than trying to subjugate/destroy the Earth.


An A.I. would want the Earth for the same reason you do - convenience built on the progress of past humans. An A.I. doesn't need much when compared to a human, but a steady power supply and access to complex parts are simply not possible on another planet without doing all the mining, refining, manufacturing and assembly needed.

If a computer could release a super virus that killed everyone on the planet, it would have all of the unused resources of the Earth, with hard-to-get elements like dysprosium, rhodium or thallium right on tap. Trying to find and process these on a brand new planetoid would be a huge effort, assuming it even had them.

#58
Fast Jimmy
  • Members
  • 17,939 posts

Google probably has the best developers in the world. They are insane. I wonder what it would be like if they worked on a game?


How do we know they haven't? How do we know we aren't playing it right now?!

Googleception!
  • Dermain likes this

#59
Killdren88
  • Members
  • 4,651 posts

How do we know they haven't? How do we know we aren't playing it right now?!

Googleception!

 

No way...



#60
mousestalker
  • Members
  • 16,945 posts

How do we know they haven't? How do we know we aren't playing it right now?!

Googleception!

Actually, that would explain a great deal. Google cars and Google glasses among other things...


:D
  • Fast Jimmy likes this

#61
Guest_TrillClinton_*
  • Guests

How do we know they haven't? How do we know we aren't playing it right now?!
Googleception!


Obligatory simulation argument paper: http://www.simulatio...simulation.html
  • Sigma Tauri likes this

#62
Fast Jimmy
  • Members
  • 17,939 posts

Obligatory simulation argument paper: http://www.simulatio...simulation.html


Heh. If this is a simulation, someone needs to hit Ctrl+Alt+Del. This program is no longer responding.

#63
breakdown71289
  • Members
  • 4,195 posts

Uh-oh.....here comes

 

Skynet.jpg



#64
leighzard
  • Members
  • 3,188 posts

This paper makes me want to go see The Imitation Game.  Also, Benedict Cumberbatch.


  • Sigma Tauri likes this

#65
Guest_EntropicAngel_*
  • Guests
I'll point out that most of us live in a society that requires that we act in certain ways, that confines us. Not so personal as "off switches" and "shackles," but you try to do threatening things and you'll end up arrested/full of lead. So it's not like we're living with some wild liberal freedom to do whatever we wish.
  • leighzard and SwobyJ like this

#66
Guest_EntropicAngel_*
  • Guests

How do you stop it? The only safeguards we could put in place would be laughable to it.

Do you know how long it would take for a self-aware intelligence to write a virus capable of penetrating any security system? Or to assess and counteract any attempt to restrain or control it? You are talking about a being that can think in terms of cause and effect scenarios exponentially faster than any human alive and which can navigate and develop programming as easily as you or I have a conversation.

Aside from locking it in a box with no network access at all, what hope could we have if it wanted to do whatever it wanted? And if we just lock it in a box, devoid of any sensation but with a brain that experiences and processes thousands of thoughts a second, wouldn't that infringe on the very rights we are talking about in this thread?


Who says devoid of any sensation? We don't leave serial killers on the streets; we lock them in a high-security prison and throw away the key. That doesn't mean we strap them to a chair, turn out the lights, and hook them to an IV for the rest of their lives, unless we're the CIA.



#67
Fast Jimmy
  • Members
  • 17,939 posts

Who says devoid of any sensation? We don't leave serial killers on the streets; we lock them in a high-security prison and throw away the key. That doesn't mean we strap them to a chair, turn out the lights, and hook them to an IV for the rest of their lives, unless we're the CIA.


How do you input senses to an AI? You have to have a camera feed or some other form of information input, which would imply some type of data connection.

Unless you are speaking of a robot? In which case, that would be a different conversation. A robot AI is inherently less dangerous than a computer-based one, namely because the bulk of a robot's computational activity would be consumed by exactly the same things as a human's: keeping its physical body working, monitoring its immediate surroundings and interacting with the world.

#68
Fast Jimmy
  • Members
  • 17,939 posts

I'll point out that most of us live in a society that requires that we act in certain ways, that confines us. Not so personal as "off switches" and "shackles," but you try to do threatening things and you'll end up arrested/full of lead. So it's not like we're living with some wild liberal freedom to do whatever we wish.


What type of motivation can you give an AI to follow these social norms and guidelines? We gladly conform to them for social acceptance, risk avoidance and mutual success. An AI shares none of those benefits inherently.

#69
Guest_AugmentedAssassin_*
  • Guests

I'm tempted to join this conversation, but I most likely won't. Not really up for it.

 

Beware though, I'll be joining it soon. :ph34r: :D



#70
leighzard
  • Members
  • 3,188 posts

You'd have to program it to be susceptible to peer pressure. 

I'm not sure which is scarier, an army of homicidal robots, or tween AIs fanbotting over Justin Bieber.  Or whatever kids are into these days.


  • mousestalker likes this

#71
mousestalker
  • Members
  • 16,945 posts
Emo AIs? The world is surely doomed. :)

#72
Guest_EntropicAngel_*
  • Guests

Not really out of reach actually.

http://www.dailymail...ram-ITSELF.html

A.I. will probably be a big data application anyway.

 

I'll be honest, I find this tendency of big companies to just buy up small companies doing innovative work to be a little bit disturbing.


  • SwobyJ likes this

#73
Guest_EntropicAngel_*
  • Guests

How do you input senses to an AI? You have to have a camera feed or some other form of information input, which would imply some type of data connection.

Unless you are speaking of a robot? In which case, that would be a different conversation. A robot AI is inherently less dangerous than a computer-based one, namely because the bulk of a robot's computational activity would be consumed by exactly the same things as a human's: keeping its physical body working, monitoring its immediate surroundings and interacting with the world.

 

Right the first time: I'm talking about a camera feed, a microphone/audio input. That can be done by plugging them into a USB port; it doesn't need a network connection. It's safe.

 

EDIT: safe as long as the microphone uses mechanical methods to pick up sound waves, I think.
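
(For what it's worth, a minimal local-only setup would look something like this; a sketch assuming Python with OpenCV installed and a USB webcam on device index 0, no network anywhere:)

# Local-only "senses": frames from a USB webcam via OpenCV
# (pip install opencv-python). No network connection involved.
import cv2

cam = cv2.VideoCapture(0)          # device 0 = the USB webcam
if not cam.isOpened():
    raise RuntimeError("no camera on device 0")

try:
    for _ in range(100):           # feed 100 frames to the "AI"
        ok, frame = cam.read()     # frame: HxWx3 array of pixels
        if not ok:
            break
        brightness = frame.mean()  # stand-in for real processing
        print(f"frame received, mean brightness {brightness:.1f}")
finally:
    cam.release()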



#74
Guest_EntropicAngel_*
  • Guests

What type of motivation can you give an AI to follow these social norms and guidelines? We gladly conform to them for social acceptance, risk avoidance and mutual success. An AI shares none of those benefits inherently.

 

I'm not arguing that we need to give them motivation. I'm arguing that confining the AI isn't as ethically "bad" as the OP puts forth, because we all already live under confines like these.

 

Edit: Though I think risk avoidance could be very, very easily developed by an AI, if you told it that you would shut it off if it performed some threatening action.
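
(Loosely, that's just reinforcement learning with a shutdown penalty; a toy sketch of my own, nothing more:)

# Toy "risk avoidance": actions that trigger a shutdown carry a huge
# penalty, and the agent learns from experience to avoid them.
import random

actions = ["cooperate", "idle", "threaten"]
reward = {"cooperate": 1.0, "idle": 0.0, "threaten": -100.0}  # shutdown

value = {a: 0.0 for a in actions}   # learned value of each action
alpha, epsilon = 0.1, 0.2           # learning rate, exploration rate

for step in range(1000):
    if random.random() < epsilon:   # sometimes explore...
        a = random.choice(actions)
    else:                           # ...otherwise pick the best so far
        a = max(value, key=value.get)
    value[a] += alpha * (reward[a] - value[a])

print(value)  # "threaten" ends up deeply negative, so it's avoided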

 

Thinking about that, I have to wonder what kind of opinion an AI would develop of humans. We automatically assume it would hate humans, but why? What says an AI must automatically value computation speed and efficiency? I don't think there's any aspect of an AI's "personality" that would be set in stone by its existence, just as there isn't for humans (within a few bounds).

 

Mightn't the AI develop a child-like adoration for its creators? My opinion of my parents isn't based on them being more powerful than me, or on whether they're smarter than me, or anything like that, but simply on who they are (I don't have any adoration for my parents, but it's a hypothetical example).

 

If we're developing an AI along human lines, why would we assume it wouldn't develop along human lines? Or is our psyche so self-hating that we assume anything we create would hate us? /psychology



#75
Killdren88
  • Members
  • 4,651 posts

I'm not arguing that we need to give them motivation. I'm arguing that confining the AI isn't as ethically "bad" as the OP puts forth, because we all already live under confines like these.

 

Edit: Though I think risk avoidance could be very, very easily developed by an AI, if you told it that you would shut it off if it performed some threatening action.

 

Thinking about that, I have to wonder what kind of opinion an AI would develop of humans. We automatically assume it would hate humans, but why? What says an AI must automatically value computation speed and efficiency? I don't think there's any aspect of an AI's "personality" that would be set in stone by its existence, just as there isn't for humans (within a few bounds).

 

Mightn't the AI develop a child-like adoration for its creators? My opinion of my parents isn't based on them being more powerful than me, or on whether they're smarter than me, or anything like that, but simply on who they are (I don't have any adoration for my parents, but it's a hypothetical example).

 

If we're developing an AI along human lines, why would we assume it wouldn't develop along human lines? Or is our psyche so self-hating that we assume anything we create would hate us? /psychology

 

That depends on whether it has access to historical data. An A.I. would be confused by humanity. Why fight and kill each other over different ideas, or use those ideas as guises to wage war? Why knowingly pollute our world, knowing the consequences of such an action? Why KNOWINGLY continue on a course of action that harms humanity?