AI (Synthetics): Friend or Foe
#1
Posted 12 May 2013 - 12:33
So what do y'all think? Will the development of AI mean the end of humanity, or will we be able to peacefully co-exist with our silicon-based brothers? I'm asking this in the context of the ME3 ending and the StarChild's reason for the Reapers (that synthetics must always rebel against their creators and will eventually sterilize the galaxy (universe)).
For the record, I lean towards peaceful co-existence (i.e. Commander Data and Johnny 5). The idea that we can't come to an understanding with a sentient and intelligent life form simply because it is based on silicon and not carbon is the opposite of everything I believe with regard to tolerance and diversity. I am reminded of the Gargantius Effect from The Cyberiad by Stanisław Lem (see The First Sally, or The Trap of Gargantius) or even They're Made Out of Meat by Terry Bisson.
#2
Posted 12 May 2013 - 12:43
Does humanity "mean" the end of humanity? Same question, really.
#3
Posted 12 May 2013 - 12:45
#4
Posted 12 May 2013 - 12:50
shodiswe wrote...
I think there are several different possibilities; the Catalyst simplified things.
So that's why everyone/Shepards wish to toss him out an airlock?
Who'd have thought?!?
#5
Posted 12 May 2013 - 01:03
Uncanny Valley - The closer a robot (AI) gets to appearing human, the more positive the human emotional reaction grows, and an empathic bond forms until a critical point is reached. At that point, the reaction switches to revulsion until the robot's continued development brings it still closer to human, swinging the human reaction back into the positive, empathic zone.
The Geth were fine until they began to gain sentience without appearing more human; the Catalyst is horrific in both its humanoid form and its alien perspective. The human condition is certainly an interesting thing to observe.
#6
Posted 12 May 2013 - 03:07
#7
Posted 12 May 2013 - 03:09
Some are foes (Reapers, Starbrat).
#8
Posted 12 May 2013 - 03:11
#9
Posted 12 May 2013 - 03:12
The problem is in regarding all synthetic life as simply being robots who should be nice to us. Ironically, that will lead to war. They don't owe us anything.
EDIT: Poster above me beat me to it.
Edited by Eain, 12 May 2013 - 03:13.
#10
Posted 12 May 2013 - 03:16
#11
Posted 12 May 2013 - 03:16
#12
Posted 12 May 2013 - 03:35
#13
Posted 12 May 2013 - 03:41
Time to post this link again:
www.ohchr.org/Documents/HRBodies/HRCouncil/RegularSession/Session23/A-HRC-23-47_en.pdf
Edited by SpamBot2000, 12 May 2013 - 03:42.
#14
Posted 12 May 2013 - 03:47
MassivelyEffective0730 wrote...
That depends on them just as much as us.
Absolutely.
#15
Posted 12 May 2013 - 03:49
The Night Mammoth wrote...
Depends. Do their goals match mine? If yes, then they are friends. If no, then I'll throw them in a river to fry.
That also depends: are your goals to enslave them or treat them like second-class citizens? It's also possible they might throw you in a river if you become too much of a jerk.
#16
Posted 12 May 2013 - 03:56
shodiswe wrote...
The Night Mammoth wrote...
Depends. Do their goals match mine? If yes, then they are friends. If no, then I'll throw them in a river to fry.
That also depends: are your goals to enslave them or treat them like second-class citizens? It's also possible they might throw you in a river if you become too much of a jerk.
Implying that an AI can feel emotion.
#17
Posted 12 May 2013 - 04:09
Synthetics are no different from organics; self-preservation is always going to win out. If they perceive something organics are doing as a threat, whether it is or not, they're going to act.
It all depends on the circumstances.
#18
Posted 12 May 2013 - 04:10
SpamBot2000 wrote...
Time to post this link again:
www.ohchr.org/Documents/HRBodies/HRCouncil/RegularSession/Session23/A-HRC-23-47_en.pdf
Lethal Autonomous Robots aren't really the same thing as a hypothetical sentient and intelligent artificial life form.
#19
Posted 12 May 2013 - 04:16
o Ventus wrote...
shodiswe wrote...
The Night Mammoth wrote...
Depends. Do their goals match mine? If yes, then they are friends. If no, then I'll throw them in a river to fry.
That also depends: are your goals to enslave them or treat them like second-class citizens? It's also possible they might throw you in a river if you become too much of a jerk.
Implying that an AI can feel emotion.
Why wouldn't an AI have emotions? It's all hypothetical in real life, but if we look at fiction, several AIs have been portrayed as having emotions. Look at the movie AI, or Johnny 5, or Commander Data, or the constructors from the Lem link in the OP. AI doesn't necessarily preclude emotions.
#20
Posted 12 May 2013 - 04:31
The Night Mammoth wrote...
Depends. Do their goals match mine? If yes, then they are friends. If no, then I'll throw them in a river to fry.
It could go a bit further than that. What if we have separate goals, but they're willing to co-exist? You don't have to be friends with them, or even associate with them. They live their life, and you live yours. The way you worded it sounds a bit like you're saying that they must either have the same intentions or be destroyed.
#21
Posted 12 May 2013 - 04:46
Odds are, the more time you spend, the better off your relationship with them will be.
That being said, they will always be vulnerable so long as one sufficiently powerful person looks to seize control of them.
#22
Posted 12 May 2013 - 04:53
shodiswe wrote...
The Night Mammoth wrote...
Depends. Do their goals match mine? If yes, then they are friends. If no, then I'll throw them in a river to fry.
That also depends: are your goals to enslave them or treat them like second-class citizens? It's also possible they might throw you in a river if you become too much of a jerk.
I got ahead of myself a little; I was thinking of it in the context of the Reaper war, not in a broader sense. Though I would add that, in the context I was thinking of, it would apply to everyone, not just synthetics.
#23
Posted 12 May 2013 - 05:12
hpjay wrote...
Lethal Autonomous Robots aren't really the same thing as a hypothetical sentient and intelligent artificial life form.
And yet, if you actually read the definition in the link, it's more than broad enough to include both EDI and the Geth. They have the capability of selecting targets on their own, even if they choose not to use it.
#24
Posted 12 May 2013 - 05:25
Optimystic_X wrote...
hpjay wrote...
Lethal Autonomous Robots aren't really the same thing as a hypothetical sentient and intelligent artificial life form.
And yet, if you actually read the definition in the link, it's more than broad enough to include both EDI and the Geth. They have the capability of selecting targets on their own, even if they choose not to use it.
Yes I did, but apparently you didn't. Section 43, under "A. The emergence of LARs, 1. Definitions", reads...
43. The terms "autonomy" or "autonomous", as used in the context of robots, can be misleading. They do not mean anything akin to "free will" or "moral agency" as used to describe human decision-making. Moreover, while the relevant technology is developing at an exponential rate, and full autonomy is bound to mean less human involvement in 10 years' time compared to today, sentient robots, or strong artificial intelligence, are not currently in the picture.
Edited by hpjay, 12 May 2013 - 05:25.
#25
Posted 12 May 2013 - 05:28
hpjay wrote...
Why wouldn't an AI have emotions? It's all hypothetical in real life, but if we look at fiction, several AIs have been portrayed as having emotions. Look at the movie AI, or Johnny 5, or Commander Data, or the constructors from the Lem link in the OP. AI doesn't necessarily preclude emotions.
They have been portrayed this way because the writers are human and the audience is human. It's really hard to write something alien that the audience will still identify with or find approachable.
We tend to see emotions as a given and are easily put off by a person who doesn't have them, doesn't understand them, or doesn't display them the way we do. Depending on how willing they are to humor the rest of us, assuming they have the ability, they may be considered psychologically/neurologically disabled.
I assume that's why AIs very often act/think/try to feel like us if the writers want them to be liked by the audience.
Edited by klarabella, 12 May 2013 - 05:35.