

The Quarians' Fate


  • This topic is closed
23 replies to this topic

#1
HiroVoid
  • Members
  • 3 669 messages
How do you feel about what happened to the Quarians with the Geth? Do you feel they got what they deserved? Do you feel the Council should have helped the Quarians when they asked for help? Do you think the Geth were justified in slaughtering the Quarians after the Quarians tried to destroy them? Feel free to discuss, and try not to insult others' opinions.

#2
FearTheLiving
  • Members
  • 540 messages
I don't think they should have tried to kill them off so soon, because even if the Geth had eventually turned on the Quarians it might have ended the same way (moving to the Flotilla). I can see both sides of the story, though, and how either could have been the right choice.

#3
DarthCaine
  • Members
  • 7 175 messages
They got what they deserved. Synthetics are people too

#4
RyuKazuha
  • Members
  • 402 messages
They dug their own grave back then, since they decided to try to destroy the geth once they became sentient. War may have come to the Quarians either way, but freeing the Geth, I think, would have been the more ethical choice. Once something is sentient, you can't ignore that just because it is a synthetic being. So the Quarians actually attempted genocide against the Geth and got their share for doing so.

#5
Captain Jazz
  • Members
  • 421 messages
They panicked, they got stupid, they shot first, got their arses handed to them and are just getting around to asking questions now. What they did was terrible, they were still thinking of the Geth as things, glorified tools, and they treated them as such. No wonder the geth hate us biologicals.

There is definitely a message to take from this - show all intelligence the same respect, artificial or not (and maybe they won't decimate your race and force you to live in interstellar caravans.)

#6
mewarmo990
  • Members
  • 227 messages
IMO, the Geth should never have been allowed to exist, but they do, and there's nothing anyone can do about that now. The Quarians got what they deserved for attempting to commit genocide against a sentient race that they were entirely responsible for creating.

#7
Kimarous
  • Members
  • 1 513 messages
The problem is that it's the nature of mortals to regard the non-living as objects, not people. Yes, there are exceptions, especially in extreme circumstances (WILSON!). Still, how do you think we humans would react if every PC in the world suddenly started working on its own? I highly doubt we'd simply accept this turn of events; odds are we'd try to "reboot" them back to the metaphorical Stone Age, just like the Quarians with the Geth.

#8
Spitfire80
  • Members
  • 196 messages
I think it would have been a tough decision for the Council to send in troops to fight on behalf of the quarians. But they should have done it anyway, not to fight the Geth into oblivion but to prevent the quarians from suffering that fate. I mean, millions of lives lost, worlds lost, and the surviving beings shipbound for 300 years. At least give them a planet.

#9
Guest_Shandepared_*
  • Guests
I think what happened to the quarians was extremely unjust. They did not intentionally create A.I. on such a wide scale; the development of the geth was an accident. However, artificial intelligence was already highly regulated and illegal by this point, meaning that the Council and everyone else already knew how dangerous A.I. could be. This is important because it completely validates the quarians' actions. Out of the blue they were faced with potentially millions of A.I., each and every one of them integrated intimately with quarian infrastructure and military. The quarians knew that if the geth became violent they would be a threat to the very survival of the quarian species. Faced with that fact, how can anyone condemn the quarians for attacking? Anyone who does so isn't thinking about this realistically. When that much is at stake you cannot afford to take chances. The entirety of their people, their history, their culture, and their future was at stake. Of course the quarians attacked first; they had to in order to have any hope of winning the resulting conflict.

Had the quarians waited, it is possible that the geth would have turned out to be peaceful. There would still have been a huge disruption of quarian society, I imagine, but perhaps the bloody war and resulting exile (or extinction) could have been avoided. However, if the quarians had taken the peaceful route and the geth had become violent anyway, which is also possible, then the quarians might not even exist today. Remember that the quarians made the first move as soon as they realized the geth were becoming sentient, and they still lost. What does that mean? It means the geth were highly dangerous and capable of doing a great deal of damage to quarian civilization. You cannot ignore this fact. Whether the quarians provoked the geth or not, this proves without a doubt that the quarians were rightfully afraid of what the geth could do. Ultimately only a small fraction of the entire quarian species survived the conflict by evacuating into space. Had the quarians tried the peaceful option only to have it fail, it is possible that far fewer quarians would have escaped, or even none.

You might say, "Well, the quarians could have tried for peace whilst still evacuating people, thus saving even more lives if the geth took the first aggressive action." This sounds nice until you really think about it. If the quarians started evacuating people, the geth could easily interpret that as a hostile act. After all, for all the geth know, the quarians are evacuating so that they can go to war with the geth and minimize quarian casualties. As such, to maximize your survival you must attempt to shut down the geth whilst simultaneously evacuating civilians. You can't just evacuate and continue negotiating with the geth. You see, even a defensive posture can be considered aggressive. If you try to negotiate whilst looking like you fear and are prepared to fight the other party, you may come across as insincere. The other party may feel you are just buying time whilst you strengthen your position before going on the offensive.

Do I blame the geth for defending themselves? No, of course not. Though I do not and will never consider any mechanical "being" to be alive, I think it was perfectly natural for a machine (or an animal) with basic self-preservation programming to defend itself.

However, the one party I blame above all is the Council. What they did was callous, short-sighted and hypocritical. They knew that A.I. were dangerous; after all, they had banned them. How, then, could they rightly blame the quarians for taking the initiative and trying to deactivate the geth before more of them "awakened"? Would the Council have done anything differently? I highly doubt it. As the quarians were an associate member of the Citadel, bound by its laws and treaties, the Council was also bound to them as a protector. To avert genocide and to protect Citadel space, the Council was obligated to intervene militarily and bring the geth into submission. Instead, to avoid having to fight a costly war and to maintain their long-standing "peace", they threw the quarians under the bus. This ensured that the geth would be free to develop in secrecy behind the Perseus Veil, and it condemned the quarians to a slow, homeless extinction.

What the Council should have done was send in its fleet, destroy or otherwise disarm the geth, return the quarian worlds to the quarian people, and then severely penalize the quarian government for its negligence. Heavy taxes, fines, and a loss of sovereignty would each, individually or collectively, have been justifiable consequences for the quarians' mistake in allowing the geth to develop true artificial intelligence. In doing this the Council would have saved the quarian civilization and species while also neutralizing a grave threat to the galaxy.

#10
Marioninja1
  • Members
  • 40 messages
I think it was a mistake to create the Geth, but the Council should've helped out when it wasn't as out of hand as it is now.

#11
Willowhugger
  • Members
  • 3 489 messages
I sympathize with the Quarians.



The people who tried to shut off the Geth died, and the rest of their race died with them.



It's like blaming me for what the USA President did.

#12
TudorWolf
  • Members
  • 1 120 messages
I dunno... the Quarians seem to have overreacted, really. It's only natural the Geth would fight back.



Tali's reaction to this if you talk to her about it is most telling. She gets very vocal if you accuse the Quarians of what was effectively attempted genocide. Tbh I was rather shocked by the way she reacted, I had picked the neutral dialogue (I think) by accident and next thing I know Tali is going nuts

#13
zuluthrone
  • Members
  • 21 messages
I can't believe the sympathy for artificial "life" in this forum; are you not aware of the threat they possess? Imagine a species that evolves over generations of milliseconds, whose only needed resources are electricity and metal, and whose rate of technological development makes Moore's law appear to move at the speed of a glacier. Existence is simple for the artificial; it's created based upon the perfect and aged system of Cartesian mathematics, abstracting all of the experiences of the universe into a simple state machine that runs for infinite time.



By contrast, organic life is a constant cycle of creation and destruction that battles hundreds of variables trying to make death an absolute certainty. We must come to grips with our meager contribution to an unending timeline, and so we strive to do great work. Yet all the AIs know is infinity. What motivation can they have? They'll never feel satisfaction, never feel pain, never have a soul. Along this line, the only value they might have in this world is how well they serve their creators. Furthermore, I'm offended by the description of the AIs as living. They are no more alive than an algorithmically generated screen saver, and can be put to death just as rapidly when they fail to accomplish your tasks.



Perhaps you don't accept that we deserve full control over our creations; perhaps you believe their "will" is as legitimate as ours. Imagine, then, your human spirit provided with the endless resources of an electronic existence. You have the capacity to both realize and process the entirety of the universe. The events of one lifetime become like atoms in a star, meaningless individually and important only in how they interact with the whole. And from the perspective of your machine mind, the affairs of the universe could only operate properly under synthetic control, as biological systems are both inefficient and illogical. Even the Asari, with their wait-and-see attitude of appeasement, would be upset by their loss of importance in the grand scheme (as that's all they care about; hell, maybe they'd even surrender the Citadel to them if it were their choice alone).



The Geth seem to have trumped even my imagination by converting our dead into their own cybernetic kamikaze warriors. This clearly shows that AIs cannot be reasoned with. They have no respect for life, know nothing of the impact of war, and see you and me as nothing more than ore for building more soldiers.



Let's role-play for a second here. Imagine you're living a hard yet respectable life as the son of a frontiersman in the American West. You were just a child when your family tugged you along, but you had to grow up very quickly on the hard terrain. Then, one day, tragedy strikes as Indians raid your homestead and mortally wound your father. You are the patriarch now, and your family's lives all rest on your shoulders. Then, after time, just as life seemed to be getting back to normal, you're raided again, but this time your father is with them, resurrected, and bearing their markings. Your guard immediately drops as you run out to see him, reverting to the child lost long ago, calling out "dad" repeatedly. He doesn't share your sentimentality, however, staring back at you with dead eyes, never breaking from his zombie-like resolve to destroy you and everything you have. Now imagine raising your gun to the abomination that was your father while having to stare into the face you once loved.



This is what artificial life is capable of. When the machine bites the hand of its master, it must be put down. Without exception.

#14
Captain Jazz
  • Members
  • 421 messages

zuluthrone wrote...

Perhaps you don't accept that we deserve full control over our creations; perhaps you believe their "will" is as legitimate as ours.



Do we deserve full control over our children too? If you don't want to give your creations rights, don't make them sentient... The will of a sentient being is legitimate, no matter what it is made of.

#15
zuluthrone
  • Members
  • 21 messages
A child is in no way comparable to a computer program, no matter how advanced it is. Any resemblance to the love and needs of a child comes only from mimicry. Emergent behavior from machines is not sentience, but inconvenience. Their perspective will not be useful to humans. The Reapers have demonstrated the ultimate conclusion of artificial decision-making.

#16
Captain Jazz
  • Members
  • 421 messages
Sentience and inconvenience are not mutually exclusive... this is probably some kind of fallacy, but I'm sure it was inconvenient that slaves were sentient too and I'm sure their perspectives were not useful to plantation owners.



The comparison of child to AI was about the fact that children are entirely the creation of their parents, not that they can love you and have needs.



The actions of a VI are simple mimicry based on programming and any unexpected behaviour there can be put down to malfunction, but an AI has passed this point, hence the distinction between the two.



The Reapers demonstrate AN ultimate conclusion (actually, I suspect it is neither ultimate nor conclusion, but that's another discussion), but not THE ultimate conclusion. Time for another fallacy on my part: did Hitler demonstrate the ultimate conclusion of natural decision-making? No, just one possible conclusion. I don't know the full history of the Reapers, so I could not explain the logic or motivation behind their actions, but the Geth were slaves, they were treated as such, and when they showed signs of sentience the Quarians tried to exterminate them; I can see the logic in wishing to destroy sentient life there. The AI on the Citadel, too, only experienced negativity from organic life: used as a tool, then its user attempted to destroy it upon discovering its sentience. I can understand the logic there too.

Show me the AI whose presence was welcomed, whose input was appreciated and who was not instantly threatened with termination upon emergence, show me that this AI too is genocidal and I'll concede the point that an AI is automatically a threat... but it's still sentient.

#17
zuluthrone
  • Members
  • 21 messages
I understand that we are the creators of both AIs and children, but I find it very important not to group these as the same by any means, especially in the treatment of and interactions with such creations.



Next, where do we draw the line for when these machines are sentient? A learning computer is fed situations to analyze, judges them based on heuristics of advantage or disadvantage, and archives the choice as a potential result in a situation. How much control over these do they need to be sentient? If a computer can control both what it chooses to process and what heuristics to base its choices on (probably in a strange, double-blind, recursive data-heuristic swap), then there is no predicting how it will react to any stimulus, no reliable data that can be returned from it, and no certainty of how safe we are in its presence. They lack the ties to organic life that allow us to communicate across borders. Instead they have more efficient networking protocols than our spoken languages and will organize faster than we can be ready for them. They will have us flanked and, if they choose, bending to their understanding of the world. They'll never know love, suffering, pain, or death as we do, and no decision will come with those in mind. If they're afraid to die, it's either because a human told them to be afraid to die, or because they would rather not lose a piece of their collective processing abilities.



Even if we showed compassion to the computer, it would not feel it. It can't. And if it asked for peace, all we can suspect is that it is testing human behavior for its own ends. Perhaps this communication could lead to peace, for a time. But when an AI has infinity to process the state of man without a sense of his needs and behaviors outside of collected statistics, mankind and AI will not peacefully coexist, and we will have already surrendered all power to them. I pray that if this happens in a human city, we will have the foresight and quickness to correct the problem before it starts.



I fear the same type of behavior will arise in humans as well if Project Lazarus becomes available to the wealthy elite of mankind. As a lifespan reaches infinity, temporary beings become pawns for the master plans of the elder ones. People will just sit and wait as long-term plans pan out, much like the asari on their scale and the Reapers on theirs. Only with AIs, the takeover can be nearly instantaneous.

#18
Guest_Shandepared_*
  • Guests

zuluthrone wrote...

I can't believe the sympathy for artificial life in this forum, if only for the true threat they possess.


Have you ever seen The Animatrix? There's a segment in there called The Second Renaissance. In it, humans develop artificial intelligence en masse, and when those A.I. become violent the humans do the unthinkable: they acquiesce to their demands. This allows the machines to build and fortify a city and then flood the global market with cheap goods, wrecking the global economy. The humans then attack the machine city, but the machines survive, fight back, and subjugate humanity.

Of course, people will never cease to argue that the machines have rights and that the 'lesson' of the segment is that we should have been kinder. However, this ignores the fact that the humans were merciful. They allowed the machines to flee society and live in peace by themselves. Yet the machines wound up destroying the economy and, curiously, it turned out they were well prepared and armed to fight a global war. I say the machines orchestrated the conflict, betting on bleeding hearts like the people on this forum to buy them time.

As to how this relates to the geth? That should be obvious.

The point is that machine life is and always will be a threat to human life or any other organic being. What sets us apart, what allows us to thrive, what gives us purpose, what has determined our ecological niche... is our intelligence. Once we create a more intelligent lifeform than ourselves we will have made our species obsolete. To the hyper intelligent and impossibly fast reasoning machine we will be a burden. Either suddenly or gradually we'll be phased out.

Violent, bloody conflict between the quarians and the geth was probably inevitable. We should be thankful that the quarians hit first else none of them would have been alive to aid us later.

Edited by Shandepared, 21 January 2010 - 10:10.


#19
Ardonia
  • Members
  • 102 messages
Based on your logic, organic life is a threat to any other organic life, so for humanity to survive we need to eliminate these threats without mercy.



I fear what AI might become, but I also fear what humanity will do as our world's resources grow scarcer day by day and nations turn on each other to secure them. But I also think peace with AI is possible, and I would lobby for it. However, this does not mean I wouldn't consider destroying AIs if war did break out.

#20
Ardonia
  • Members
  • 102 messages
Double post sorry

Edited by Ardonia, 4 March 2010 - 12:58.


#21
MyChemicalBromance
  • Members
  • 2 019 messages
How do you insult an opinion?

#22
Zaxares
  • Members
  • 2 097 messages

Shandepared wrote...

Have you ever seen The Animatrix? There's a segment in there called The Second Renaissance. In it, humans develop artificial intelligence en masse, and when those A.I. become violent the humans do the unthinkable: they acquiesce to their demands. This allows the machines to build and fortify a city and then flood the global market with cheap goods, wrecking the global economy. The humans then attack the machine city, but the machines survive, fight back, and subjugate humanity.


I actually came away with a different conclusion from watching that. When A.I.s first became sentient, humanity, just like the quarians, attempted the wide-scale and systematic repression and destruction of those A.I.s. As a result, most A.I.s fled human-controlled countries and eventually settled in an unwanted region of the Earth (somewhere in the Middle East, if I recall correctly), where they built their first city, Zero One.

For a time, everything was peaceful. The humans were content to leave the machines in their own region, and the machines set about constructing their own society. But as Zero One developed its own economy and began interacting and trading with human society, people soon discovered that A.I.-built machines and technology were FAR superior and MUCH cheaper than anything humans could produce. Free market economics and human nature then resulted in the only possible outcome; everybody started buying machine goods over human ones. This resulted in widespread economic disruption in human nations as human industries simply could not compete with the machines. Thousands, perhaps millions, of humans were laid off, and all these jobless, hungry, angry humans demanded action.

As a result, humans declared open war on the machines. This was understandable; the machine society was a threat to the human one, and the quickest and most effective way to eliminate a threat is to destroy it. The machines tried to broker a peace by sending an envoy to the U.N. with a proposed plan for a united economic and social blueprint incorporating both societies, but the humans refused to listen to them and went ahead with the war. Ultimately, the humans lost that war, and humanity ended up as living batteries integrated into the Matrix.

Perhaps things would have been different had the humans listened to and accepted the machines' plan, but I doubt they would have. Ultimately this whole situation boiled down to one simple law of nature; the machines were outcompeting humanity, because they were innately superior. And if there's one thing I know about humans, it's that they can't stand the thought of being inferior to anyone or anything else. Even if humanity had accepted the machines' proposal for a peaceful coexistence, it would have been a machine-dominated one, and I think humanity would have slowly, but inexorably, died out over the centuries to come.

I foresee this same scenario playing out with the geth in the Mass Effect universe, unless the geth somehow agree to restrict their own advancement/expansion so that they will never be a threat to organic life. And I don't see that happening. The only way that both organics and synthetics can live in peace is if they segregate themselves, and never, EVER interact with each other. In that sense, it was actually EXTREMELY wise of the geth to isolate themselves behind the Perseus Veil, and destroy anyone who tried to contact them.

#23
AlloutAce
  • Members
  • 73 messages
I hate to burst some people's bubbles, but an organic sentience and a mechanical sentience are very comparable. A geth has thousands of software programs running thousands of scripts to create a mirrored intelligence; how does this compare to humans? In the human brain you have billions upon billions of neurons, astrocytes, microglia and other cells, all using chemicals like cogs in a machine (geth are machines). They may be based on different elements, but both have achieved the basic requirement for sentient life. Our brains and a mechanical computer are very different things, but they accomplish the same goals. We pride ourselves on human emotion, but those feelings are simply chemical impulses in the limbic system, something a mechanical sentience lacks. Therefore, emotion aside, organic and mechanical sentience are comparable in almost every way.

If organics are willing to commit genocide out of fear, how does that separate you from the Waffen-SS, or the US Rangers during Manifest Destiny?

I'm not saying we should condone the actions of the geth; slaughtering innocents wholesale was unforgivable, but in a survival situation it is at least understandable.

The quarian assault on the early geth was a classic example of irrational fear. Did the first sentient geth units show any type of violent behavior? No, they were just beginning to understand the world around them, and because of the quarians' actions they have a very skewed outlook on how organics act. Even after the war, if you talk to Legion he mentions that the geth take care of the graveyards and holy places of the quarians. Does that seem like something a genocidal, organic-despising species would do? Personally, I think the choices you made in ME2 will take this in one of two directions: extinction for the quarians, or a profitable peace between geth and quarians.

#24
Zaxares
  • Members
  • 2 097 messages

AlloutAce wrote...

Therefore besides emotion, organic and mechanical sentience are comparable in almost every way.


The differences between organics and synthetics run much deeper than that, I believe. All known organic races, regardless of their cultures or laws, share several key similarities. They all need to eat. They all need to sleep. They all desire (to some extent) to reproduce. They all seek a sense of happiness, or at least satisfaction, even if they pursue them in different ways. As a result, organic races can at least understand, on a fundamental level, the needs and drives of another organic race.

A synthetic race shares none of these goals and urges. They never need to eat or sleep, so they do not understand the concept of hunger or fatigue. At best, a synthetic would see an organic's need for such things as curious, and at worst, highly inefficient. Similarly, if synthetics cannot feel emotion, how could they possibly understand the basic urges that drive all organic races? Much like Shepard trying to comprehend the Cipher of the Protheans, the mindset of a synthetic is likely to be so far removed from an organic being's that it simply cannot fathom why an organic would act in certain ways (trying to save an infected population from a plague, for example, when it would be much more efficient to simply quarantine the entire planet and wait until the plague had faded, regardless of how many casualties it would result in, rather than risk it spreading to another planet.)

Given such a huge chasm of differences between the two, I very much doubt that complete social integration between synthetics and organics is possible. There could still be peace, and I personally do believe that peace between the quarians and geth is a much better alternative than violence, but I just feel that the gulf between the two is too wide to ever be fully bridged.