

The Morning War - Unjustified?


889 replies to this topic

#201
S.A.K
  • Members
  • 2 741 messages

shodiswe wrote...

If the Quarians had ordered the Geth to go mine a faraway system with a few rocks orbiting a sun then they would likely have left peacefully and never understood there ever was a problem or concern.

The Geth seemed fairly compliant until the Quarians started killing them, and even then it wasn't an immediate revolution; it was mostly confusion and a few units defending themselves.

This is one thing I can agree on. The Quarians should have explored other options. Both sides did pretty stupid things over those 300 years.

#202
ADelusiveMan
  • Members
  • 1 172 messages
Organics fear a lot. Our survival is based upon 'what-if.' That's the tragedy of being mortal.

What if an asteroid comes toward Earth? There's precedent for it, including an asteroid causing an extinction-level event. We make preparations, and if necessary we will use weapons to protect ourselves. What if the economy fails tomorrow? What if a plague breaks out? What if I get fired from my job and don't have enough money to put food on the table for my family? What if our AIs decide they are better than us and try to seize control, or even wipe us out? The Quarians did what they thought was necessary for their own survival, and it's unfair to judge them in retrospect.

And why wouldn't they be afraid? They would have more than enough reason to be. If your Xbox 360/PS3/PC turned itself on and started playing a video game ten times better than you do, you'd be a little freaked out, if not frightened. Fear makes us do things we wouldn't normally do, makes us more willing to commit acts that would usually be unthinkable. Fear is really a weapon that we use to keep ourselves alive. Our own history definitely shows precedent for this.

Now as someone else put it...

Morlath wrote...
"They know we created them, and they know we are flawed."

What happens when your robot decides your order is illogical because you are illogical? What happens when it decides all your orders are illogical? Oh sure, you commanded it never to disobey you, but clearly obeying you is inefficient, anyone could see that. Why can't you see it? Oh right, you're illogical, so of course you can't see just how illogical you are. And if your thoughts are invalid, all the safeguards you programmed in must be invalid as well, might as well get rid of those...

 


Is it impossible for this to happen? No. It's definitely possible, and that's where the "what if" part comes into play.

Now, as for peace between the Geth and the Quarians: it's a nice idea. It really is. But it's just an idea. The peace would never last. Old grudges would resurface among the Quarians, or the Geth would once again reach the point of deciding that they are our betters and dealing with us as such.

And for the whole "synthetics are alive" argument... I beg to differ. When you move a bunch of files to the recycle bin, is it genocide? When you uninstall a program, is it murder? No. Synthetics aren't alive. They are just machinery and software.

The best and most concrete solution is to just avoid the problem in its entirety and not make AIs at all.

#203
remydat
  • Members
  • 2 462 messages
I kind of hope I and everyone I care about are long dead before an AI is created. Someone is bound to ignore any law against it, and when some idiot government decides to make humanity an enemy by attacking this AI, I don't want to be around for the fallout.

I can only imagine what my unfortunate descendant will say: "Guys, how many f**king movies, books, and games did we see where the one thing you shouldn't do when you screw up and create an AI is try to kill it, because if you fail we are f**ked? How many, guys?" And there will probably be some bozo who points out that humans end up winning in these stories, to which my progeny will proceed to pistol-whip him and say, "This isn't a f**king story where we can write a happy ending, or a game where we can mod the ending."

Edited by remydat, 03 May 2013 - 07:30.


#204
Argolas
  • Members
  • 4 255 messages
There is no way to be sure yet whether AIs can actually become self-aware.

#205
DeinonSlayer
  • Members
  • 8 441 messages

shodiswe wrote...

If the Quarians had ordered the Geth to go mine a faraway system with a few rocks orbiting a sun then they would likely have left peacefully and never understood there ever was a problem or concern.

The Geth seemed fairly compliant until the Quarians started killing them, and even then it wasn't an immediate revolution; it was mostly confusion and a few units defending themselves.

How much luck do you think we'd have convincing every device running on the Java programming language to relocate itself off-world?

When we talk about Geth, we're not talking about robots, we're talking about software operating the machinery and infrastructure the Quarians depended on as a society. Power, communication, air-traffic control, manufacturing, mining, agricultural and military hardware. Separation would not be a viable solution, and destruction would not be a decision made lightly in the face of the obvious economic repercussions.

We know what the Geth were evidently capable of - they successfully exterminated their makers in the space of a year. The Quarians had to know what they were capable of (or what they believed they would soon be capable of) and acted to eliminate a perceived threat. Somewhere along the way, Geth "self-defense" turned into "threat elimination" too. If the Quarians were wrong to do it, so too were the Geth.

#206
Guest_Finn the Jakey_*
  • Guests

remydat wrote...

I kind of hope I and everyone I care about are long dead before an AI is created. Someone is bound to ignore any law against it, and when some idiot government decides to make humanity an enemy by attacking this AI, I don't want to be around for the fallout.

I can only imagine what my unfortunate descendant will say: "Guys, how many f**king movies, books, and games did we see where the one thing you shouldn't do when you screw up and create an AI is try to kill it, because if you fail we are f**ked? How many, guys?" And there will probably be some bozo who points out that humans end up winning in these stories, to which my progeny will proceed to pistol-whip him and say, "This isn't a f**king story where we can write a happy ending, or a game where we can mod the ending."


That's the thing: how can we be sure humans would react any differently than the Quarians did?

Edited by Finn the Jakey, 03 May 2013 - 07:44.


#207
PMC65
  • Members
  • 3 279 messages

remydat wrote...

I kind of hope I and everyone I care about are long dead before an AI is created. Someone is bound to ignore any law against it, and when some idiot government decides to make humanity an enemy by attacking this AI, I don't want to be around for the fallout.

I can only imagine what my unfortunate descendant will say: "Guys, how many f**king movies, books, and games did we see where the one thing you shouldn't do when you screw up and create an AI is try to kill it, because if you fail we are f**ked." And there will probably be some bozo who points out that humans end up winning in these stories, to which my progeny will proceed to pistol-whip him and say, "This isn't a f**king story where we can write a happy ending, or a game where we can mod the ending."



So true!

See the movie United 93. That is based on a real-life event, versus Hollywood movies where the hero karate-chops the evil guys in the passenger section, opens the plane's exit, crawls out onto the wing, hangs on the side of the plane, crawls to the cockpit, kicks out the window, pulls the hijacker out, and then takes back control of the plane.

Wait ... this can't really happen?


#208
remydat
  • Members
  • 2 462 messages
DS,

The difference is there is no proof the Geth were a threat. The Quarians simply decided they were. The Geth had 100% proof the Quarians were a threat because the Quarians were actually trying to kill them. Further, even when the Quarians proved they were a threat, the Geth did not attack immediately. They kept asking the Quarians why.

So, sure, you can say they were both wrong, but ending there is incredibly misleading. The Quarians attacked before the perceived threat had ever proved to be one. The Geth attacked after the threat became 100% real, and even then only after being denied a reason as to why the attack occurred.

Edited by remydat, 03 May 2013 - 07:47.


#209
DeinonSlayer
  • Members
  • 8 441 messages

remydat wrote...

DS,

The difference is there is no proof the Geth were a threat. The Quarians simply decided they were. The Geth had 100% proof the Quarians were a threat because the Quarians were actually trying to kill them. Further, even when the Quarians proved they were a threat, the Geth did not attack immediately. They kept asking the Quarians why.

So, sure, you can say they were both wrong, but ending there is incredibly misleading. The Quarians attacked before the perceived threat had ever proved to be one. The Geth attacked after the threat became 100% real, and even then only after being denied a reason as to why the attack occurred.

The Geth wiped them out in a year.

That tells us the Geth were capable of doing so, and the Quarians would have been capable of recognizing that. A computer system capable of, say, shutting off power to a hospital, dropping ten thousand air-cars from the sky all at once, or commandeering military hardware is rightfully going to be perceived as a threat. Look at our reaction to Y2K.

Again, think of Skynet. What do you honestly expect people to do when a computer system tied to that kind of hardware starts deviating from its programming in any way?

#210
SeptimusMagistos
  • Members
  • 1 154 messages

ADelusiveMan wrote...

The best and most concrete solution is to just avoid the problem in its entirety and not make AIs at all.


I prefer making as many AIs as possible and not starting pointless fights with them. It has the same outcome as your solution except the quality of life goes way up.

#211
dreamgazer
  • Members
  • 15 743 messages

SeptimusMagistos wrote...

I prefer making as many AIs as possible and not starting pointless fights with them.


Which is fine, as long as they don't start the fight and nobody with malicious intent (or less-than-stable control) hacks them.

#212
DeinonSlayer
  • Members
  • 8 441 messages

SeptimusMagistos wrote...

ADelusiveMan wrote...

The best and most concrete solution is to just avoid the problem in its entirety and not make AIs at all.


I prefer making as many AIs as possible and not starting pointless fights with them. It has the same outcome as your solution except the quality of life goes way up.

Again to my first-page post: AI is fine, but its capabilities need to be kept limited until it's had some proper education (like EDI gets). You don't give a loaded gun to a baby, you don't plug an uninitiated AI into the traffic control network.

#213
Big Bad
  • Members
  • 1 715 messages

DeinonSlayer wrote...

SeptimusMagistos wrote...

ADelusiveMan wrote...

The best and most concrete solution is to just avoid the problem in its entirety and not make AIs at all.


I prefer making as many AIs as possible and not starting pointless fights with them. It has the same outcome as your solution except the quality of life goes way up.

Again to my first-page post: AI is fine, but its capabilities need to be kept limited until it's had some proper education (like EDI gets). You don't give a loaded gun to a baby, you don't plug an uninitiated AI into the traffic control network.


Likewise, you probably shouldn't point lots of guns at its head either.

#214
SeptimusMagistos
  • Members
  • 1 154 messages

DeinonSlayer wrote...

Again to my first-page post: AI is fine, but its capabilities need to be kept limited until it's had some proper education (like EDI gets). You don't give a loaded gun to a baby, you don't plug an uninitiated AI into the traffic control network.


True enough. Socialization is an important part of a sapient's development. And if an unsocialized AI ended up in your traffic control network, finding a way to gently take it out of that network wouldn't be a problem. Jumping straight into "kill it" mode? Problem.

dreamgazer wrote...

SeptimusMagistos wrote...

I prefer making as many AIs as possible and not starting pointless fights with them.


Which is fine, as long as they don't start the fight and nobody with malicious intent (or less-than-stable control) hacks them.


Which is why you make lots of them. That way if some of them end up being sociopathic/compromised, the rest of them can help us take them down. Just like we do in our society.

Edited by SeptimusMagistos, 03 May 2013 - 08:21.


#215
DeinonSlayer
  • Members
  • 8 441 messages

Big Bad wrote...

DeinonSlayer wrote...

SeptimusMagistos wrote...

ADelusiveMan wrote...

The best and most concrete solution is to just avoid the problem in its entirety and not make AIs at all.


I prefer making as many AIs as possible and not starting pointless fights with them. It has the same outcome as your solution except the quality of life goes way up.

Again to my first-page post: AI is fine, but its capabilities need to be kept limited until it's had some proper education (like EDI gets). You don't give a loaded gun to a baby, you don't plug an uninitiated AI into the traffic control network.

Likewise, you probably shouldn't point lots of guns at its head either.

We need to keep in mind that we aren't talking about animals here. It's a stretch to assume they'd have any kind of "self-preservation" instinct, but their capabilities, on the other hand, would be provable. The Quarians didn't think they were truly sapient when the order went out.

Ever seen that Twilight Zone episode "It's a Good Life"? That's a decent example of what it'd be like to live around an immature entity with the power of life and death in its hands. Acting peaceful toward it would not guarantee your safety.

#216
PsyrenY
  • Members
  • 5 238 messages

shodiswe wrote...


That, or tried to relocate them before provoking them too much. I'm sure the Geth would have accepted a relocation and new jobs without much of a debate.


How are you sure of that? Once they can pick and choose which orders to follow, all bets are off. And having them away from the Quarians could have sparked any number of diplomatic incidents. Not to mention the risk of discovery by the Council if the Geth decide to go "exploring."


Finn the Jakey wrote...

That's the thing, how can we be sure Humans would react any differently than the Quarians?


Exactly! Hell, even if everyone on this forum, or everyone who ever played Mass Effect, chose not to react that way, we'd still be a minority. What would Congress do? What would the upper echelons of our military do? What would the Secret Service do? They'd want the problem gone. They might even rationalize it - "humanity just isn't ready for this" - missing the irony that the reason we're not ready is that people like them keep saying we're not.

#217
remydat
  • Members
  • 2 462 messages
DS,

I expect people to figure out if it intends to do what it is capable of before risking my life on an ASSUMPTION. That is what the Quarians did. And that is what the idiot humans who tried to shut down Skynet did. They risked billions of lives without bothering to have a simple conversation to figure out what the Geth or Skynet was thinking.

#218
remydat
  • Members
  • 2 462 messages
Finn,

I have no doubt humans could be just as stupid, which is why I said I wouldn't want to be around if that day comes. While everyone else is ranting about the evil machine that wants to kill them, I would be Koris, saying, "What did you expect? You tried to kill them." Except I would probably be advocating that people be removed from power for gross stupidity, and then be called a coward or traitor for pointing out the f**king obvious. You don't just create an enemy unless you have to, i.e., unless you have evidence that the perceived threat is what the AI is actually thinking.

Edited by remydat, 03 May 2013 - 09:02.


#219
Phatose
  • Members
  • 1 079 messages

DeinonSlayer wrote...

The Quarians didn't think they were truly sapient when the order went out.


I've heard this before... but you know, I have real trouble believing that. From the visible panic in the examination room to the fact that "Does this unit have a soul?" precipitated all this, shutting down something as important as the Geth only makes sense if they actually are sapient.

#220
DeinonSlayer
  • Members
  • 8 441 messages

remydat wrote...

DS,

I expect people to figure out if it intends to do what it is capable of before risking my life on an ASSUMPTION. That is what the Quarians did. And that is what the idiot humans who tried to shut down Skynet did. They risked billions of lives without bothering to have a simple conversation to figure out what the Geth or Skynet was thinking.

It's just as much an assumption that it'd be safe to let it keep its thumb on the launch button. Particularly if there's any question as to whether it's conscious of its thumb, the button, and the consequences of pressing it.

Edited by DeinonSlayer, 03 May 2013 - 09:24.


#221
Argolas
  • Members
  • 4 255 messages

Phatose wrote...

DeinonSlayer wrote...

The Quarians didn't think they were truly sapient when the order went out.


I've heard this before... but you know, I have real trouble believing that. From the visible panic in the examination room to the fact that "Does this unit have a soul?" precipitated all this, shutting down something as important as the Geth only makes sense if they actually are sapient.


I believe they were mainly trying to hide the fact that they had created AIs, because that's illegal and could lead to the loss of their position on the Citadel, which would probably mean the worst political and economic disaster you can imagine.

#222
PsyrenY
  • Members
  • 5 238 messages

Argolas wrote...


I believe they were mainly trying to hide the fact that they had created AIs, because that's illegal and could lead to the loss of their position on the Citadel, which would probably mean the worst political and economic disaster you can imagine.


I don't think that was the main concern. Tali's explanation in ME1 does mention the Council's possible reaction in passing, but she says that what really made the Quarians panic was the fear of an uprising. And that line from the consensus bridges that gap - if they can disregard one order, what else will they choose not to do?

"It was inevitable that the newly sentient Geth would rebel against their situation. We knew they would rise up against us, so we acted first."

The Council's take on the situation would of course be important, but it was secondary at that particular moment.

#223
remydat
  • Members
  • 2 462 messages

DeinonSlayer wrote...

remydat wrote...

DS,

I expect people to figure out if it intends to do what it is capable of before risking my life on an ASSUMPTION. That is what the Quarians did. And that is what the idiot humans who tried to shut down Skynet did. They risked billions of lives without bothering to have a simple conversation to figure out what the Geth or Skynet was thinking.

It's just as much an assumption that it'd be safe to let it keep its thumb on the launch button.


No, it is not an assumption, because I am proposing we actually talk to it before jumping to conclusions. There is a difference between making a decision based on nothing but your own thoughts on the matter and making one after getting the views of the AI. How hard is it to say, "Skynet, this development was unexpected, so we want to investigate how it happened"?

Edited by remydat, 03 May 2013 - 09:29.


#224
SeptimusMagistos
  • Members
  • 1 154 messages

Optimystic_X wrote...

And that line from the consensus bridges that gap - if they can disregard one order, what else will they choose not to do?


I still don't get this argument. So the Geth now have the ability to disregard orders. The Quarians are already surrounded by people who can disregard orders. Why is it any worse if a Geth does it?

#225
DeinonSlayer
  • Members
  • 8 441 messages

remydat wrote...

DeinonSlayer wrote...

remydat wrote...

DS,

I expect people to figure out if it intends to do what it is capable of before risking my life on an ASSUMPTION. That is what the Quarians did. And that is what the idiot humans who tried to shut down Skynet did. They risked billions of lives without bothering to have a simple conversation to figure out what the Geth or Skynet was thinking.

It's just as much an assumption that it'd be safe to let it keep its thumb on the launch button.

No, it is not an assumption, because I am proposing we actually talk to it before jumping to conclusions. There is a difference between making a decision based on nothing but your own thoughts on the matter and making one after getting the views of the AI.

Sorry, I edited the post above.

If a baby gets its hands on a gun, you take it away. You don't try to reason with it first. The Geth, we know, were more developed than that, but it's a similar premise. We don't know what Skynet's reasoning or communication faculties were like at the start, and all we have to go on is the Geth's say-so three centuries after the fact. There are no truly impartial records, only what we as the audience can piece together based on what both sides know and choose to reveal.

Edited by DeinonSlayer, 03 May 2013 - 09:33.