
Why do so many people want to lie to the star child?


200 replies to this topic

#126
CaptainZaysh
  • Members
  • 2,603 posts

Sisterofshane wrote...

The problem being that "Casper's" logic is CIRCULAR logic. In order to prove what the starkid says is an ABSOLUTE truth, the premise of an AI destroying ALL organic life would need to have occurred. Apparently it didn't, because we are all still standing here. So, can you justify the gruesome murder of countless organic beings to satiate the desire of a godlike figure to "save" us from an inevitable ending that has never occurred?


Yeah, precisely because, as you say, we are all still standing here. The ending has never occurred because the Catalyst has prevented it.

#127
ecarden
  • Members
  • 132 posts

CaptainZaysh wrote...

ecarden wrote...

Unless, of course, we keep bettering ourselves by, for instance, unifying the galaxy, learning from past mistakes, trading with the Geth, and salvaging the Reapers' tech (though not, hopefully, anything indoctrinating).


We're not talking about bettering ourselves in the sense of personal development and good deeds.  Left alone, the geth will evolve a higher form of intelligence than us.  If they're smarter than us, they'll be able to destroy us if they choose to do so.  Not acceptable.


So you say, without evidence. They've had three hundred years and they've...gotten to exactly the same place as the rest of the galaxy. 

For that matter, we know exactly how strong they are at this point (since BioWare helpfully gives it to us as a number). Hint: it's not equal to the rest of the galaxy put together.

By the way, how is this not also an argument for genocide against the Turians? They already have the ability to destroy Humanity! OMG WE MUST KILL THEM!

#128
ecarden
  • Members
  • 132 posts

CaptainZaysh wrote...

Sisterofshane wrote...

The problem being that "Casper's" logic is CIRCULAR logic. In order to prove what the starkid says is an ABSOLUTE truth, the premise of an AI destroying ALL organic life would need to have occurred. Apparently it didn't, because we are all still standing here. So, can you justify the gruesome murder of countless organic beings to satiate the desire of a godlike figure to "save" us from an inevitable ending that has never occurred?


Yeah, precisely because, as you say, we are all still standing here. The ending has never occurred because the Catalyst has prevented it.


Or because it's wrong. 

#129
CaptainZaysh
  • Members
  • 2,603 posts

Jayce F wrote...

Engineer an air-transmissible virus that kills everyone it infects. Technically, any human with the correct knowledge is capable of such. By your logic, we should exterminate all virologists.


No, although there's certainly a case for exterminating hostile virologists.

#130
mauro2222
  • Members
  • 4,236 posts

CaptainZaysh wrote...

ecarden wrote...

Unless, of course, we keep bettering ourselves by, for instance, unifying the galaxy, learning from past mistakes, trading with the Geth, and salvaging the Reapers' tech (though not, hopefully, anything indoctrinating).


We're not talking about bettering ourselves in the sense of personal development and good deeds.  Left alone, the geth will evolve a higher form of intelligence than us.  If they're smarter than us, they'll be able to destroy us if they choose to do so.  Not acceptable.


You're sick if you think like this in real life. Killing with "what if..." as a reason. Yet, if someone does the same to you, it's wrong. Selfish much?

#131
SidNitzerglobin
  • Members
  • 661 posts

Sisterofshane wrote...


The problem being that "Casper's" logic is CIRCULAR logic. In order to prove what the starkid says is an ABSOLUTE truth, the premise of an AI destroying ALL organic life would need to have occurred. Apparently it didn't, because we are all still standing here. So, can you justify the gruesome murder of countless organic beings to satiate the desire of a godlike figure to "save" us from an inevitable ending that has never occurred?


The circular logic you speak of is exactly why it's pointless to try to discuss the ending as it stands today. It's built on a logical non sequitur.

The other big problem is the idea that synthetic life = preservers of order. The nature of matter in our galaxy, and the reason time doesn't move backwards or sideways or all at once, is entropy. Synthetic life stands just as good a chance of degenerating into chaos over time as organic life does.

Edited by SidNitzerglobin, 20 March 2012 - 03:30.


#132
mauro2222
  • Members
  • 4,236 posts

CaptainZaysh wrote...

mauro2222 wrote...

It's the same, honey. The magnitude is insignificant.

If the geth kill us, we cease to exist.
If I (geth) kill you, you (humanity) cease to exist.


Maybe that's the way you see it, but personally I think the death of the species would be worse than my own death.  I believe there are bigger things than whether I live or die.

(PS you don't get to call me honey until I've put my d**k in your mouth.)


Your ability to miss the point is amazing.

#133
ashdrake1
  • Members
  • 152 posts

Sisterofshane wrote...

ashdrake1 wrote...

One of the most common complaints I see about the conversation with the star child is that we can't tell him AIs and organics can live together peacefully. That's because we can't.

Hold on, you say. I just brokered peace between the Geth and the Quarians. Also, EDI and I go way back; I hooked her up with my buddy. You are way off base here, Mr. deus ex god kid thing. I want to paragon/renegade my first-hand knowledge right in your stupid see-through face. Shepard can't do this, because Shepard knows that it's not the case.

Even if you want to use the Geth and Quarians as an example, that peace has been in effect for, like, a week tops. That really isn't any sort of time to get conclusive data. EDI is cool, I guess, now that she is not murdering people on the moon. She has shown she can be a great teammate, a friend, and perhaps even a lover. Very compelling reasons for a vote for harmony. It is also an isolated case, one that is ongoing. EDI is also unique in that she is the first AI you have encountered that you can truly reason with.

Hold on, you say. The Geth are good guys; Legion is a stand-up dude. True, but this really has very little to do with Shepard. The Geth you can currently get to help you have pretty much always been stand-up dudes. You really have no impact on their outlook. Long before Shepard came onto the scene, they decided they did not want to wipe out the Quarians even though they had the option. They did not like the Reapers before it became an issue with the rest of the galaxy. Your goals match theirs, so they help you. Shepard did nothing to affect this mindset.

Since the first game there has been no example of being able to reason with a hostile AI, from the AI designed to steal from Quasar machines to the Reapers. AIs cannot be reasoned with. Shepard has had little to no success in trying to make peace with a hostile AI. The heretic Geth refuse to even try to engage in any sort of communication. The thief AI on the Citadel commits suicide and attempts to take Shepard with it. The Reapers don't seem real open to suggestion. Also, the most telling case is the rogue AI on the moon: EDI.

It makes no effort to establish communication; it tries to kill you, like everyone else on the base. The only reason EDI is the way she is now is that her software was modified. We rewired her brain. EDI is no longer who it was, because it was re-written; no peace was made. It was remade.

As per the lore in the games, the star child is right. Eventually an AI will wipe everything out, because regardless of the lesson of the Geth, organics keep messing around with AI. See the moon AI, the Citadel AI, EDI, and Project Overlord. At some point we will make one that figures out how to self-replicate, and it can and will wipe the universe clean.

This post is not a defense of the ending. I have problems with the ending, mostly the Joker/Normandy bit. This is just what I believe is a flaw in the logic of the arguments to correct the ending.


Every argument you have mentioned is just proof in the other direction. In this cycle, we have managed to avoid or defeat AIs several times over before they were able to obliterate us. Even Javik says that in his cycle, they were WINNING the war against synthetics until the Reapers showed up.

The problem being that "Casper's" logic is CIRCULAR logic. In order to prove what the starkid says is an ABSOLUTE truth, the premise of an AI destroying ALL organic life would need to have occurred. Apparently it didn't, because we are all still standing here. So, can you justify the gruesome murder of countless organic beings to satiate the desire of a godlike figure to "save" us from an inevitable ending that has never occurred?



Man, I see circular logic stated a lot on these forums. We have zero evidence that the starkid is incorrect, because in all of our experience with AI, violence is the only way to change their views. We keep making AIs, and with the data we have, he is correct.

#134
CaptainZaysh
  • Members
  • 2,603 posts

ecarden wrote...

So you say, without evidence. They've had three hundred years and they've...gotten to exactly the same place as the rest of the galaxy. 


Because synthetic intelligence is limited only by processing power, not by biological limits, their intelligence will necessarily evolve beyond our own as they add processing power. The geth are actively pursuing this outcome; it's the reason they're building that Dyson Sphere.

#135
Kakita Tatsumaru
  • Members
  • 958 posts

CaptainZaysh wrote...
That's a hell of a gamble.  You'll understand why I felt it would be irresponsible to take it.

On a more realistic note: would you call for nuclear fire on Iran before they get the bomb themselves, since the probability of that is not zero and they may become dangerous in the future?
Personally, I would give them a chance.

#136
CaptainZaysh
  • Members
  • 2,603 posts

mauro2222 wrote...

You're sick if you think like this in real life. Killing with "what if..." as a reason. Yet, if someone does the same to you, it's wrong. Selfish much?


Maybe it's because I've had a small amount of military experience, but yeah, I'm perfectly okay with killing someone I suspect is a threat. Especially when, as with the geth, waiting for them to prove they're a threat means it will be too late to do anything about it except roll over and die for them like a good ape.

#137
CaptainZaysh
  • Members
  • 2,603 posts

Kakita Tatsumaru wrote...

On a more realistic note: would you call for nuclear fire on Iran before they get the bomb themselves, since the probability of that is not zero and they may become dangerous in the future?
Personally, I would give them a chance.


Let's not get too deep into real world politics in here.

#138
ecarden
  • Members
  • 132 posts

CaptainZaysh wrote...

ecarden wrote...

So you say, without evidence. They've had three hundred years and they've...gotten to exactly the same place as the rest of the galaxy. 


Because synthetic intelligence is limited only by processing power, not by biological limits, their intelligence will necessarily evolve beyond our own as they add processing power. The geth are actively pursuing this outcome; it's the reason they're building that Dyson Sphere.


And organic life is limited by... what? All Alliance soldiers get basic enhancements; then there are biotic implants, always increasing in power and utility, new cybernetics, new gene therapies and improvements.

What makes you think that the Geth are any more successful on that front than anyone else?

They split off three centuries ago, but their current tech level is the same (barring that heat-sink retcon nonsense) as the rest of the galaxy. The notion that they're racing ahead, or are inherently faster, is just not supported.

ETA: Typo correction. 

Edited by ecarden, 20 March 2012 - 03:37.


#139
mauro2222
  • Members
  • 4,236 posts

CaptainZaysh wrote...

mauro2222 wrote...

You're sick if you think like this in real life. Killing with "what if..." as a reason. Yet, if someone does the same to you, it's wrong. Selfish much?


Maybe it's because I've had a small amount of military experience, but yeah, I'm perfectly okay with killing someone I suspect is a threat. Especially when, as with the geth, waiting for them to prove they're a threat means it will be too late to do anything about it except roll over and die for them like a good ape.


Ahhh... military indoctrination, sorry, training.

You still think that you are allowed and others are not. This reminds me of that Batman movie, with the people aboard the ferries.

#140
ecarden
  • Members
  • 132 posts

CaptainZaysh wrote...

mauro2222 wrote...

You're sick if you think like this in real life. Killing with "what if..." as a reason. Yet, if someone does the same to you, it's wrong. Selfish much?


Maybe it's because I've had a small amount of military experience, but yeah, I'm perfectly okay with killing someone I suspect is a threat. Especially when, as with the geth, waiting for them to prove they're a threat means it will be too late to do anything about it except roll over and die for them like a good ape.


By this thinking wouldn't, well, everyone who knows you be justified in killing you?

#141
Jayce
  • Members
  • 972 posts

CaptainZaysh wrote...

Jayce F wrote...

Engineer an air-transmissible virus that kills everyone it infects. Technically, any human with the correct knowledge is capable of such. By your logic, we should exterminate all virologists.


No, although there's certainly a case for exterminating hostile virologists.


Weren't you saying something about acting before they have a change of intent?

#142
CaptainZaysh
  • Members
  • 2,603 posts

ecarden wrote...

And organic life is limited by... what? All Alliance soldiers get basic enhancements; then there are biotic implants, always increasing in power and utility, new cybernetics, new gene therapies and improvements.


"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind."

Organic intelligence is limited by brain capacity.  Artificial Intelligence is limited by software and hardware.  As the AI gets smarter, it develops better software and hardware, until it rivals our own intelligence (current geth).  As they continue to improve (by building their Dyson Sphere), they can make better software and hardware than we could, building their own replacements.  An intelligence explosion occurs, and us poor dumb apes had better hope they like having us around.

#143
ecarden
  • Members
  • 132 posts

CaptainZaysh wrote...

ecarden wrote...

And organic life is limited by... what? All Alliance soldiers get basic enhancements; then there are biotic implants, always increasing in power and utility, new cybernetics, new gene therapies and improvements.


"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind."

Organic intelligence is limited by brain capacity.  Artificial Intelligence is limited by software and hardware.  As the AI gets smarter, it develops better software and hardware, until it rivals our own intelligence (current geth).  As they continue to improve (by building their Dyson Sphere), they can make better software and hardware than we could, building their own replacements.  An intelligence explosion occurs, and us poor dumb apes had better hope they like having us around.


Did you not even read the bit about enhancing ourselves? And, again, this is all extrapolation. What we know is that, within the Mass Effect universe, they've had 300 years of solid work and they're essentially as strong as the Quarians, who've had 300 years of wandering the galaxy (OH AND ARE ORGANIC).

#144
CaptainZaysh
  • Members
  • 2,603 posts

ecarden wrote...

By this thinking wouldn't, well, everyone who knows you be justified in killing you?


Certainly every enemy soldier would be perfectly justified.  If you were arguing that all my friends ought to kill me out of self-preservation then I'd tell you to go look up the slippery slope fallacy.

#145
Jayce
  • Members
  • 972 posts

CaptainZaysh wrote...

ecarden wrote...

By this thinking wouldn't, well, everyone who knows you be justified in killing you?


Certainly every enemy soldier would be perfectly justified.  If you were arguing that all my friends ought to kill me out of self-preservation then I'd tell you to go look up the slippery slope fallacy.


Quoted for irony.

#146
CaptainZaysh
  • Members
  • 2,603 posts

ecarden wrote...

Did you not even read the bit about enhancing ourselves?


I did.  Do you not see the difference between tweaking the brain and the unlimited potential of EDI's intellect?

ecarden wrote...
And, again, this is all extrapolation. What we know is that, within the Mass Effect universe, they've had 300 years of solid work and they're essentially as strong as the Quarians, who've had 300 years of wandering the galaxy (OH AND ARE ORGANIC).


D'you think that Dyson Sphere they're building is just a place they can all hang out in?

#147
CaptainZaysh
  • Members
  • 2,603 posts

Jayce F wrote...

CaptainZaysh wrote...

ecarden wrote...

By this thinking wouldn't, well, everyone who knows you be justified in killing you?


Certainly every enemy soldier would be perfectly justified.  If you were arguing that all my friends ought to kill me out of self-preservation then I'd tell you to go look up the slippery slope fallacy.


Quoted for irony.


That's a helpful contribution, Jayce.

#148
ecarden
  • Members
  • 132 posts

CaptainZaysh wrote...

ecarden wrote...

By this thinking wouldn't, well, everyone who knows you be justified in killing you?


Certainly every enemy soldier would be perfectly justified.  If you were arguing that all my friends ought to kill me out of self-preservation then I'd tell you to go look up the slippery slope fallacy.


Well, most murders are committed by people who know the victim, and they know you, and with your military training, if you struck first, they'd have no chance. THEY MUST KILL YOU TO BE SAFE. Oh, and everyone who would take revenge (or arrest them) for killing you. And everyone who would try to stop them. Or take revenge on them for killing all those people...

KILL EVERYONE! IT'S THE ONLY WAY TO BE SAFE!

ETA: If it's not clear, this is sarcasm. Don't kill everyone. Genocide isn't cool, um-kay?

Edited by ecarden, 20 March 2012 - 03:56.


#149
AntAras11
  • Members
  • 94 posts

ashdrake1 wrote...

 
As per the lore in the games, the star child is right. Eventually an AI will wipe everything out, because regardless of the lesson of the Geth, organics keep messing around with AI. See the moon AI, the Citadel AI, EDI, and Project Overlord. At some point we will make one that figures out how to self-replicate, and it can and will wipe the universe clean.



That's a huge leap of logic. Every AI-related incident you described only shows that AIs are capable of harm. How do you jump from that to "Eventually an AI will wipe everything out"?
Every time we encounter a hostile AI, it is operating under some logical fallacy or on problematic data. If anything, the message we get following the geth-quarian storyline through all 3 games is the exact opposite of the godchild's assumption.

The other big problem is motive. Why would a super-intelligent AI decide to wipe out all organic life (including snails)? The only answer I can come up with:
-Organic life HAS to be destroyed; it is in some way better for the universe.
-AI inevitably advances to a level of intelligence where they realize that fact.
-It can't be wrong AND inevitable at the same time.

If so, why bother? Let the synthetics do their thing. This also negates the Reapers' motivation.

#150
CaptainZaysh
  • Members
  • 2,603 posts

ecarden wrote...

Well, most murders are committed by people who know the victim and they know you and with your military training, if you struck first, they'd have no chance. THEY MUST KILL YOU TO BE SAFE. Oh, and everyone who would take revenge (or arrest them) for killing you. And everyone who would try to stop them. Or take revenge on them for killing all those people...

KILL EVERYONE! IT'S THE ONLY WAY TO BE SAFE!

ETA: If it's not clear, this is sarcasm. Don't kill everyone. Genocide isn't cool, um-kay?


That's called the slippery slope fallacy, ecarden. If you can't tell the difference in threat profiles between an average Westerner who's served in his country's military and a race of billions of heavily armed robots actively working to eclipse our military and industrial capabilities, then I'm not capable of explaining it to you.