
AI (Synthetics): Friend or Foe


30 replies to this topic

#26
GreyLycanTrope
  • Members
  • 12,706 posts

AresKeith wrote...

Some are Allies (EDI, Geth)

Some are foes (Reapers, Starbrat)

This

#27
PsyrenY
  • Members
  • 5,238 posts

hpjay wrote...

They do not mean anything akin to “free will” or “moral agency” as used to describe human decision-making.


Which is part of the problem: Synthetics don't have these attributes in quite the same way we do. Synthetics can be hacked quite easily in the short term, or even repurposed long-term, with no loss of cognitive ability and no regard for willpower, unlike organics undergoing indoctrination.

Consider for instance what EDI did to Eva - it's really quite scary if you think about it long enough. Eva was an AI too, with a personality, goals and desires of her own. Without consulting with anyone, EDI snuffed her out like a candle, then climbed into her corpse. And the only line separating the two was processing power - in computing terms, EDI was effectively a pro wrestler grappling a schoolgirl. Eva clearly was not in favor of this plan either, because she "struggled." The wrestler held the schoolgirl down, methodically removed her skin, then once she was dead, put it on like a suit.

How can "free will" and "moral agency" relate to a process like that the same way they do to humanity?

Edited by Optimystic_X, 12 May 2013 - 05:36.


#28
S.A.K
  • Members
  • 2,741 posts

Auld Wulf wrote...

The simple truth: Any synthetic or organic is a friend or foe based upon our personal compatibility with them. If we can understand each other enough to be compatible, then we are friends. This applies to all life, it doesn't matter whether that life is synthetic or organic.


For once I can completely agree with you. They are just like organics. The Alliance is an ally, but Cerberus is an enemy. Edi is an ally, but Heretics are an enemy.

#29
hpjay
  • Members
  • 205 posts

Optimystic_X wrote...

hpjay wrote...

They do not mean anything akin to “free will” or “moral agency” as used to describe human decision-making.


Which is part of the problem: Synthetics don't have these attributes in quite the same way we do. Synthetics can be hacked quite easily in the short term, or even repurposed long-term, with no loss of cognitive ability and no regard for willpower, unlike organics undergoing indoctrination.

Consider for instance what EDI did to Eva - it's really quite scary if you think about it long enough. Eva was an AI too, with a personality, goals and desires of her own. Without consulting with anyone, EDI snuffed her out like a candle, then climbed into her corpse. And the only line separating the two was processing power - in computing terms, EDI was effectively a pro wrestler grappling a schoolgirl. Eva clearly was not in favor of this plan either, because she "struggled." The wrestler held the schoolgirl down, methodically removed her skin, then once she was dead, put it on like a suit.

How can "free will" and "moral agency" relate to a process like that the same way they do to humanity?

 

SYNTHETICS don't exist in real life. Maybe they'll never exist. So any statement about what attributes they do or do not possess, or about how easily they can be "hacked", has no empirical basis. It's all speculation. What I'm looking for isn't the view of AI as presented by the Mass Effect writers... I'm more interested in how people think a real AI might interact with humans in the real world (assuming that a real AI could ever be created).

The context of the quote is with regards to Lethal Autonomous Robots and the UN report; specifically, do EDI and the Geth fall under the definition of Lethal Autonomous Robots? The quote shows that they do not, since EDI and the Geth would be strong AIs and would therefore have "free will" and "moral agency".

Edited by hpjay, 12 May 2013 - 05:57.


#30
PsyrenY
  • Members
  • 5,238 posts

hpjay wrote...


SYNTHETICS don't exist in real life.


The whole point of sci-fi is to consider such things before they become science. Sci-fi had cellphones (and smartphones!) long before anyone thought they would be possible. And it also had robots first, even though it tends to focus more on AI and androids than the more common industrial robots and drones in existence today.

So while true AI doesn't exist - once it does, people are going to look at documents like this for guidance on what to do. It's like the CDC's zombie apocalypse guide - it's humorous, sure, but it's also there just in case.


hpjay wrote...


The context of the quote is with regards to Lethal Autonomous Robots and the UN report; specifically, do EDI and the Geth fall under the definition of Lethal Autonomous Robots? The quote shows that they do not, since EDI and the Geth would be strong AIs and would therefore have "free will" and "moral agency".


And that was the point of the story I raised - an AI's concept of free will and moral agency may differ sharply from ours.

But the more basic definition (able to choose targets on its own) does apply in every case.

Edited by Optimystic_X, 12 May 2013 - 06:23.


#31
hpjay
  • Members
  • 205 posts

Optimystic_X wrote...

hpjay wrote...


SYNTHETICS don't exist in real life.


The whole point of sci-fi is to consider such things before they become science. Sci-fi had cellphones (and smartphones!) long before anyone thought they would be possible. And it also had robots first, even though it tends to focus more on AI and androids than the more common industrial robots and drones in existence today.

So while true AI doesn't exist - once it does, people are going to look at documents like this for guidance on what to do. It's like the CDC's zombie apocalypse guide - it's humorous, sure, but it's also there just in case.


hpjay wrote...


The context of the quote is with regards to Lethal Autonomous Robots and the UN report; specifically, do EDI and the Geth fall under the definition of Lethal Autonomous Robots? The quote shows that they do not, since EDI and the Geth would be strong AIs and would therefore have "free will" and "moral agency".


And that was the point of the story I raised - an AI's concept of free will and moral agency may differ sharply from ours.

But the more basic definition (able to choose targets on its own) does apply in every case.




My point in calling out the fact that SYNTHETICS don't exist in real life was to call out your use of definitive statements with regards to said SYNTHETICS. You were speaking authoritatively on how an AI would act and how it would be hackable.

As for an "AI's concept of free will and moral agency may differ sharply from ours"... well, it's like you're speaking a different language here. The discussion was about whether EDI and the Geth meet the definition of Lethal Autonomous Robot as described in the UN statement. How an AI's view of those concepts may differ from ours (if that's even possible, but that discussion gets bogged down in semantics and I'm not going there) is immaterial to that point... it's a non sequitur.

Edited by hpjay, 12 May 2013 - 06:58.