
AI (Synthetics): Friend or Foe


30 replies to this topic

#1
hpjay

hpjay
  • Members
  • 205 messages
I just created a poll, see here:  http://social.biowar...21/polls/45610/

So what do y'all think? Will the development of AI mean the end of humanity, or will we be able to peacefully co-exist with our silicon-based brothers? I'm asking this in the context of the ME3 ending and the StarChild's reason for the Reapers (that synthetics must always rebel against their creators and will eventually sterilize the galaxy (universe)).

For the record, I lean towards peaceful co-existence (i.e. Commander Data and Johnny 5). The idea that we can't come to an understanding with a sentient and intelligent life form simply because it is based on silicon and not carbon is the opposite of everything I believe with regards to tolerance and diversity. I am reminded of the Gargantius Effect from The Cyberiad by Stanisław Lem (see The First Sally, or The Trap of Gargantius) or even They're Made Out of Meat by Terry Bisson.

#2
Wayning_Star

Wayning_Star
  • Members
  • 8 016 messages
The question is actually about whether the created will rebel against the creator. It's not just about synthetic life forms vs. organic intelligence, or life, depending on how life is attained. Just about every being seems intent on some kind of ascension, on being on top of the heap, as it were. Ties in with evolution and the risk management thereof.

Does humanity "mean" the end of humanity? Same question, really.

#3
shodiswe

shodiswe
  • Members
  • 4 999 messages
I think there are several different possibilities; the Catalyst simplified things.

#4
Wayning_Star

Wayning_Star
  • Members
  • 8 016 messages

shodiswe wrote...

I think there are several different possibilities; the Catalyst simplified things.


So that's why everyone/Shepard wishes to toss him out an airlock?

Who'd have thought?!?

#5
Morlath

Morlath
  • Members
  • 579 messages
http://en.wikipedia..../Uncanny_valley

Uncanny Valley - The closer a robot (AI) gets to appearing human, the more positive the human emotional reaction grows, and an empathic bond forms until a critical point is reached. Past that point, the reaction switches to revulsion until the robot's development makes it even more human, swinging the human reaction back into the positive and empathic zone.

The Geth were fine until they began to gain sentience without appearing more human; the Catalyst is horrific in its humanoid form and its alien perspective. The human condition is certainly an interesting thing to observe.

#6
Auld Wulf

Auld Wulf
  • Members
  • 1 284 messages
The simple truth: Any synthetic or organic is a friend or foe based upon our personal compatibility with them. If we can understand each other enough to be compatible, then we are friends. This applies to all life, it doesn't matter whether that life is synthetic or organic.

#7
AresKeith

AresKeith
  • Members
  • 34 128 messages
Some are Allies (EDI, Geth)

Some are foes (Reapers, Starbrat)

#8
MassivelyEffective0730

MassivelyEffective0730
  • Members
  • 9 230 messages
That depends on them just as much as us.

#9
Eain

Eain
  • Members
  • 1 501 messages
I'm very glad Auld Wulf wrote that all in cursive or I would not have appreciated the gravity of his statement. Either way, as far as them being a friend or a foe is concerned, it'd probably be 50/50. What people seem to forget is that if we're talking about a sentient lifeform, synthetic or otherwise, then the choice is theirs as much as it is ours. If they want to be friends, they'll be friends. They're free to make up their own mind.

The problem is in regarding all synthetic life as simply being robots who should be nice to us. Ironically, that will lead to war. They don't owe us anything.

EDIT: Poster above me beat me to it.

Edited by Eain, 12 May 2013 - 03:13.


#10
The Night Mammoth

The Night Mammoth
  • Members
  • 7 476 messages
Depends. Do their goals match mine? If yes, then they are friends. If no, then I'll throw them in a river to fry.

#11
justafan

justafan
  • Members
  • 2 407 messages
There will be saints and sinners amongst them like with all lifeforms, which I think is one of the key themes of the MEU. We could end up with an EDI, or we could find ourselves with Catalyst 2.0. Most likely though they will end up like the Geth, some will want to understand and coexist, others will want to kill us all, and everywhere in between.

#12
PsyrenY

PsyrenY
  • Members
  • 5 238 messages
That depends heavily on the circumstances of their creation. Unfortunately, the chances of a peaceful birth are pretty slim - for one, how many people on the planet don't even know what AI are, much less are in favor of the idea? And for two, the people most likely to be pursuing this kind of research are militaries, which means the AI will be designed for that purpose. Which is more likely - farming AI, or AI to hack Iran's nuclear program?

#13
SpamBot2000

SpamBot2000
  • Members
  • 4 463 messages
Time to post this link again:

www.ohchr.org/Documents/HRBodies/HRCouncil/RegularSession/Session23/A-HRC-23-47_en.pdf

Edited by SpamBot2000, 12 May 2013 - 03:42.


#14
shodiswe

shodiswe
  • Members
  • 4 999 messages

MassivelyEffective0730 wrote...

That depends on them just as much as us.


Absolutely.

#15
shodiswe

shodiswe
  • Members
  • 4 999 messages

The Night Mammoth wrote...

Depends. Do their goals match mine? If yes, then they are friends. If no, then I'll throw them in a river to fry.


That also depends: are your goals to enslave them or treat them like second-class citizens? It's also possible they might throw you in a river if you become too much of a jerk.

#16
o Ventus

o Ventus
  • Members
  • 17 251 messages

shodiswe wrote...

The Night Mammoth wrote...

Depends. Do their goals match mine? If yes, then they are friends. If no, then I'll throw them in a river to fry.


That also depends: are your goals to enslave them or treat them like second-class citizens? It's also possible they might throw you in a river if you become too much of a jerk.


Implying that an AI can feel emotion.

#17
Aaleel

Aaleel
  • Members
  • 4 427 messages
At the end of the day, synthetics are going to do whatever they compute is the best option, without regard for emotion or common sense.

Synthetics are no different than organics; self-preservation is always going to win out. If they perceive something organics are doing as a threat, whether it is or not, they're going to act.

It all depends on the circumstances.

#18
hpjay

hpjay
  • Members
  • 205 messages

SpamBot2000 wrote...

Time to post this link again:

www.ohchr.org/Documents/HRBodies/HRCouncil/RegularSession/Session23/A-HRC-23-47_en.pdf


Lethal Autonomous Robots aren't really the same thing as a hypothetical sentient and intelligent artificial life form.

#19
hpjay

hpjay
  • Members
  • 205 messages

o Ventus wrote...

shodiswe wrote...

The Night Mammoth wrote...

Depends. Do their goals match mine? If yes, then they are friends. If no, then I'll throw them in a river to fry.


That also depends: are your goals to enslave them or treat them like second-class citizens? It's also possible they might throw you in a river if you become too much of a jerk.


Implying that an AI can feel emotion.


Why wouldn't an AI have emotions? It's all hypothetical in real life, but if we look at fiction, several AIs have been portrayed as having emotions. Look at the movie AI, or Johnny 5, or Commander Data, or the constructors from the Lem link in the OP. AI doesn't necessarily preclude emotions.

#20
MassivelyEffective0730

MassivelyEffective0730
  • Members
  • 9 230 messages

The Night Mammoth wrote...

Depends. Do their goals match mine? If yes, then they are friends. If no, then I'll throw them in a river to fry.


It could go a bit further than that. What if we have separate goals, but they're willing to co-exist? You don't have to be friends with them, or even associate with them. They live their life, and you live yours. The way you worded it sounds a bit like you're saying that they must either have the same intentions or be destroyed.

#21
teh DRUMPf!!

teh DRUMPf!!
  • Members
  • 9 142 messages
Your creations are your children. They require time and effort to learn and understand things.

Odds are, the more time you spend, the better off your relationship with them will be.

That being said, they will always be vulnerable so long as one sufficiently powerful person looks to seize control of them.

#22
The Night Mammoth

The Night Mammoth
  • Members
  • 7 476 messages

shodiswe wrote...

The Night Mammoth wrote...

Depends. Do their goals match mine? If yes, then they are friends. If no, then I'll throw them in a river to fry.


That also depends: are your goals to enslave them or treat them like second-class citizens? It's also possible they might throw you in a river if you become too much of a jerk.


I got ahead of myself a little; I was thinking of it in the context of the Reaper war, and not in a broader sense. Though I would add that it would apply to everyone in the context I was thinking of, and not just synthetics.

#23
PsyrenY

PsyrenY
  • Members
  • 5 238 messages

hpjay wrote...

Lethal Autonomous Robots aren't really the same thing as a hypothetical sentient and intelligent artificial life form.


And yet, if you actually read the definition in the link, it's more than broad enough to include both EDI and the Geth. They have the capability of selecting targets on their own, even if they choose not to use it.

#24
hpjay

hpjay
  • Members
  • 205 messages

Optimystic_X wrote...

hpjay wrote...

Lethal Autonomous Robots aren't really the same thing as a hypothetical sentient and intelligent artificial life form.


And yet, if you actually read the definition in the link, it's more than broad enough to include both EDI and the Geth. They have the capability of selecting targets on their own, even if they choose not to use it.


Yes, I did, but apparently you didn't. Section 43, under "A. The emergence of LARs, 1. Definitions", reads...

43. The terms “autonomy” or “autonomous”, as used in the context of robots, can be misleading. They do not mean anything akin to “free will” or “moral agency” as used to describe human decision-making. Moreover, while the relevant technology is developing at an exponential rate, and full autonomy is bound to mean less human involvement in 10 years' time compared to today, sentient robots, or strong artificial intelligence, are not currently in the picture.

Edited by hpjay, 12 May 2013 - 05:25.


#25
nos_astra

nos_astra
  • Members
  • 5 048 messages

hpjay wrote...
Why wouldn't an AI have emotions? It's all hypothetical in real life, but if we look at fiction, several AIs have been portrayed as having emotions. Look at the movie AI, or Johnny 5, or Commander Data, or the constructors from the Lem link in the OP. AI doesn't necessarily preclude emotions.

They have been portrayed this way because the writers are human and the audience is human. It's really hard to write something alien that the audience will still identify with or find approachable.

We tend to see emotions as a given and are easily put off by a person who doesn't have them, doesn't understand them, or doesn't display them the way we do. Depending on how willing they are to humor the rest of us (assuming they have the ability), they may even be considered psychologically/neurologically disabled.

I assume that's why AIs very often act/think/try to feel like us if the writers want them to be liked by the audience.

Edited by klarabella, 12 May 2013 - 05:35.