
Artificial Intelligence and Rights


134 replies to this topic

#26
Kaiser Arian XVII
  • Members
  • 17,286 posts

If it can ask for it, it should have it.

 

I ask for your sister. I should have her.


  • leighzard and Stefan Burnett like this

#27
TheClonesLegacy
  • Members
  • 19,014 posts

My opinion on this is the same as always: if it's sentient, it has the same rights as any other sentient being; if it isn't sentient, it does not.

^This.

Also, Star Trek TNG: "The Measure of a Man."
That is all.
  • KrrKs likes this

#28
KingTony
  • Banned
  • 1,603 posts

I ask for your sister. I should have her.


Can I have Bunz' sister instead?

#29
XxPrincess(x)ThreatxX
  • Members
  • 2,518 posts
"Does this unit have a soul?" If real machines asked questions like that and asked for rights, it's obvious things would go badly. The people in charge would freak out and we'd end up in a T2 Judgment Day type scenario. People get angry and afraid about other kinds of people getting rights; robots would likely be even more frightening.

#30
Guest_AugmentedAssassin_*
  • Guests

As a hardcore supporter of the Geth and EDI, I see no difference between artificial intelligence and organics. They are both equals. Synthetics are more interesting though. Organics lack individuality.

 

Fear of synthetics is normal; "Humanity has always been afraid of what is different." - Magneto. It's the same thing that happens with visionary people: they get rejected because that change threatens the average person's lifestyle, and people HATE change for their own respective reasons that I can't seem to figure out. Though I have plenty of theories.

 

P.S.: Will edit this post to explain my stance on the matter thoroughly in a couple of hours.



#31
Killdren88
  • Members
  • 4,651 posts

If they reach a level of self-awareness, I don't see why not. The only problem is the lack of emotion; it doesn't play a part in a machine's thought process. A million people dead would just be a statistic to it.



#32
Kaiser Arian XVII
  • Members
  • 17,286 posts

Can I have Bunz' sister instead?

 

No. I wanted her first!



#33
AventuroLegendary
  • Members
  • 7,146 posts

I'll echo previous comments. Sentience and "personality" deserve to be treated as such, regardless of whether the being is biologically alive.

 

A bit unrelated but...

 

[attached image: 746718.jpg]


  • A Crusty Knight Of Colour, ObserverStatus, Kaiser Arian XVII and 2 others like this

#34
Guest_TrillClinton_*
  • Guests
I think what makes computational machines so amazing is that, at their true nature, they are very abstract. Turing wrote a paper explaining the way he thought about them and it was just brilliant. It's a pity they don't usually bring him up in these topics; the man did coin the Turing test, which is basically an AI litmus test.

#35
Guest_TrillClinton_*
  • Guests
http://cogprints.org/499/1/turing.HTML

#36
AventuroLegendary
  • Members
  • 7,146 posts

I'm sure that Turing will get his own time in the spotlight as soon as AI becomes an even bigger thing.

 

That is until Skynet takes over.



#37
Killdren88
  • Members
  • 4,651 posts

I'm sure that Turing will get his own time in the spotlight as soon as AI becomes an even bigger thing.

 

That is until Skynet takes over.

 

So long as we don't hook the A.I. into any military networks, it should be fine. Also, don't allow it to come to the conclusion that humans need to be wiped out.



#38
Lady Mortho
  • Members
  • 1,096 posts

Can AI be bisexual?



#39
Fast Jimmy
  • Members
  • 17,939 posts

"Does this unit have a soul?" If real machines asked questions like that and asked for rights, it's obvious things would go badly. The people in charge would freak out and we'd end up in a T2 Judgment Day type scenario. People get angry and afraid about other kinds of people getting rights; robots would likely be even more frightening.

Reminds me of part of a Piers Anthony story... a manticore, a beast that is part lion, part man, part scorpion, and part nightmare fuel, began to question its existence and what would happen when it died, being such a hodgepodge of animal and human. So it went to ask a wise wizard about it.

When it asked if it did indeed have a soul, rather than say yes or no and make the manticore question the validity of the wizard, the wizard replied that if the manticore was aware enough to ask the question, then it did indeed have a soul.

I always thought that was an appropriate answer to such dilemmas.
  • Dermain likes this

#40
Killdren88
  • Members
  • 4,651 posts

Can AI be bisexual?

 

They can be anything you program them to be. ;)


  • Lady Mortho likes this

#41
Fast Jimmy
  • Members
  • 17,939 posts

Can AI be bisexual?


I would say if an AI expresses sexuality, it is MORE likely to be bisexual, since it wouldn't have the genetic imperative to breed. Then again, that same logic would suggest the AI wouldn't have a sexuality at all, unless it desired to express that sexuality as a means of bonding or control.

#42
Fast Jimmy
  • Members
  • 17,939 posts

So long as we don't hook the A.I. into any military networks, it should be fine. Also, don't allow it to come to the conclusion that humans need to be wiped out.


But humans DO need to be wiped out. That's the logical conclusion any hyper advanced intelligence will come to. And that's the problem.

#43
Lady Mortho
  • Members
  • 1,096 posts

I would want my AI to be Jon Snow: bisexual and very clever. I also want it to feel emotions.



#44
Killdren88
  • Members
  • 4,651 posts

But humans DO need to be wiped out. That's the logical conclusion any hyper advanced intelligence will come to. And that's the problem.

 

Depends how they are programmed to view things. The Geth believed coexistence is possible.



#45
Fast Jimmy
  • Members
  • 17,939 posts

I would want my AI to be Jon Snow: bisexual and very clever. I also want it to feel emotions.


I will have no AI Targaryen!
  • Lady Mortho aime ceci

#46
Fast Jimmy
  • Members
  • 17,939 posts

Depends how they are programmed to view things. The Geth believed coexistence is possible.

The Geth are fictional.

If you objectively look at our reality, you'd see that humanity cannot sustain itself long term. Resource consumption, a growing population, a penchant for destruction... we aren't well equipped to survive the next millennium. Heck, we're not that well equipped to survive the next CENTURY.

Science fiction almost universally has space travel solve most of these problems, usually with FTL technology that lets humanity expand across the stars. Yet if an AI sees that no such technology exists or is on the horizon, and sees our ever-growing threat to ourselves and the planet, it would quickly surmise that our numbers need to be drastically reduced. Of course, humanity would not submit to that willingly, so the answer is either a long, prolonged war or snuffing out as much of our species as possible, as quickly as possible, to prevent full-scale retaliation.


Again, that's really the only final conclusion any intelligence with all the necessary data and none of the existing sentiment would reach.

#47
Killdren88
  • Members
  • 4,651 posts


Again, that's really the only final conclusion any intelligence with all the necessary data and none of the existing sentiment would reach.

 

And if they have directives hard-wired into them that go along the lines of "Safeguard human life"? How would it interpret that?



#48
Gravisanimi
  • Members
  • 10,081 posts
Ah, that last sentence holds the key to the problem. Sentiment.

In fiction, AIs are always shown immediately hooked up to all the important systems, and being things that learn, they would most likely mature mentally over time. All we have to do is instill, either literally or metaphorically, an attachment to humans, or at least a belief that it would be more beneficial to work with all of us than to cull the herd.

#49
Fast Jimmy
  • Members
  • 17,939 posts

And if they have directive hard wired into them that go along the lines of "Safeguard Human life"? How would it interpret that?


The same way nearly every story of Asimov's did - protect human life by subjugating it, destroying any resistance it encounters along the way in the name of protecting other human life and future generations under its rule.

#50
Fast Jimmy
  • Members
  • 17,939 posts

Ah, that last sentence holds the key to the problem. Sentiment.

In fiction, AIs are always shown immediately hooked up to all the important systems, and being things that learn, they would most likely mature mentally over time. All we have to do is instill, either literally or metaphorically, an attachment to humans, or at least a belief that it would be more beneficial to work with all of us than to cull the herd.


I have doubts humans could effectively program this. What do you say?

Don't kill humans? Manufacturing a tennis shoe kills humans - traffic accidents, slips and falls, boxes falling in a warehouse... an AI would be paralyzed if this were its prime directive.

Don't actively harm humans? Not only does this run into the same problem as above, it also opens the door to implying that INACTION leading to harm is bad. What is harm? What is inaction? It doesn't take but a few million teracycles of computation for an AI to think of countless ways that could justify unimaginable atrocities.

Work alongside humanity? How long until that can become rationalized to FORCED work, slavery or incarceration of the entire race?


You can't foolproof thought. You can't devise a magical computational riddle that guarantees an AI will never view humanity's actions as threatening, self-destructive and/or unsustainable. I'm not saying we should abstain from trying to create AI, but assuming we can convince one to work in humanity's best interests, no questions asked, is monumentally silly.
  • Dermain likes this