
Do the ends justify the means? *Discussion*


529 replies to this topic

#151
Guest_Saphra Deden_*
  • Guests

Swampthing500 wrote...

Sovereign will have control in a few moments because Saren will transfer control. It hasn't actually been done yet.


Or he'll have control in a few minutes because Saren already did what he needed to do and Sovereign is already in the system. After all, might that not be why he had to dock with the Citadel tower itself?

What if I'm right? What if you're wrong?

That's the problem here. You assume you're right and because of that you're prepared to take a horrible gamble that will result in the Reapers returning if you're wrong.

I'll play it safe and make damn sure the Reapers aren't getting through the relay before I worry about anything. As long as we stop the Reapers now, we may have months or years or centuries before the Reapers return.

The point is, we'll have time.

#152
Swampthing500
  • Members
  • 220 messages


Watch from 9:50 onwards and Vigil tells you why Saren is needed.

If Sovereign had already taken control of the station, he does not need to possess Saren's body to kill you. All he needs to do is to wait and bypass the program given by Vigil and bring the Reapers in.

Since he takes control of Saren, it means that control has not been manually transferred. He needs to kill Shepard and use Saren's body to transfer control.

Edited by Swampthing500, 25 December 2011 - 05:11.


#153
Guest_Saphra Deden_*
  • Guests
Yes, and I'm telling you that by the time you reach Saren his job may already be done.

#154
Swampthing500
  • Members
  • 220 messages

Saphra Deden wrote...

Yes, and I'm telling you that by the time you reach Saren his job may already be done.


Then why possess Saren?

#155
wizardryforever
  • Members
  • 2,826 messages

Saphra Deden wrote...

wizardryforever wrote...

The point is that people see you let their heads of state die when you could have saved them.


...and I'm saying I don't ****ing care what they see because at least they'll be alive. You'd rather risk everyone dying just to save face.

Let me reiterate: the future matters. Yep, what happens in the next few minutes is no more important than what happens ten years from now. If I knew I could stop myself from dying in ten years (by not starting to do drugs now, for example), I'd do it, regardless of how it might affect my immediate future. If that "saving face" you sneer at happens to save the galaxy down the line when the whole Reaper fleet arrives, then yes, I'll risk a few ships to see that happen. The whole "risking the galaxy" argument holds just as much water if you let the Council die; it's just as risky as saving them. Being alive for a few more months, only to be doomed to failure because you were too pessimistic to save what could be a crucial resource, is not exactly the best outcome.

You aren't worth writing out a long rebuttal. I've posted countless such rebuttals to delusional and desperate arguments like yours in the past.

Aw, I love you too!  :kissing:

Seriously though, delusional and desperate?  I'm willing to write long rebuttals with easy-to-follow logic (and you're not), but I'm the desperate one?  :lol:

#156
RowanCF
  • Members
  • 145 messages
It all depends on the net outcome. In other words, was more harm done than good, or the other way around? A simple example to illustrate this: I wouldn't kill two people to save one person, but I would kill one person to save two people.

#157
Biotic Sage
  • Members
  • 2,842 messages

Swampthing500 wrote...

Saphra Deden wrote...

Yes, and I'm telling you that by the time you reach Saren his job may already be done.


Then why possess Saren?


He possesses Saren after you make the decision.  Without metagaming, Shepard has no knowledge of this.  Therefore you cannot use it to justify the real-time decision.

#158
Biotic Sage
  • Members
  • 2,842 messages

RowanCF wrote...

It all depends on the net outcome. In other words, was more harm done than good, or the other way around? A simple example to illustrate this: I wouldn't kill two people to save one person, but I would kill one person to save two people.


That is unwaveringly utilitarian and the ultimate problem with utilitarianism.  You would actually murder someone to save two other people who were destined to die otherwise?  The person you are murdering, I'm assuming by your argument, is innocent by default...so why does he/she deserve that destiny rather than the two people you are saving?  Also, the consequentialism here is a problem.  So you are basically saying that as long as the net outcome is good, then the action was good as well?  So does that mean that if I am a malicious scientist who is trying to develop a super-virus, but I accidentally develop the cure for AIDS instead, that my action of attempting to develop a super-virus was good?  Intent is important; if I intend the outcome to be good and the outcome is good, then it is undoubtedly a good action.  However, if I intend the outcome to be good but the actual outcome is bad, that is where things go into the grey area.

Also, I'm not really sure what this has to do with the endgame ME1 decision discussion that has been going on...you would have to elaborate on which choice you thought was the "best net outcome" and why.

Edited by Biotic Sage, 25 December 2011 - 07:00.


#159
RowanCF
  • Members
  • 145 messages

Biotic Sage wrote...

RowanCF wrote...

It all depends on the net outcome. In other words, was more harm done than good, or the other way around? A simple example to illustrate this: I wouldn't kill two people to save one person, but I would kill one person to save two people.


That is unwaveringly utilitarian and the ultimate problem with utilitarianism.  You would actually murder someone to save two other people who were destined to die otherwise?  The person you are murdering, I'm assuming by your argument, is innocent by default...so why does he/she deserve that destiny rather than the two people you are saving?  Also, the consequentialism here is a problem.  So you are basically saying that as long as the net outcome is good, then the action was good as well?  So does that mean that if I am a malicious scientist who is trying to develop a super-virus, but I accidentally develop the cure for AIDS instead, that my action of attempting to develop a super-virus was good?  Intent is important; if I intend the outcome to be good and the outcome is good, then it is undoubtedly a good action.  However, if I intend the outcome to be good but the actual outcome is bad, that is where things go into the grey area.

Also, I'm not really sure what this has to do with the endgame ME1 decision discussion that has been going on...you would have to elaborate on which choice you thought was the "best net outcome" and why.

You just made an extremely simple concept way more complicated than was necessary. First of all, we're all destined to die; that doesn't mean it isn't preferable to hold it off as long as possible, so I don't understand your logic about saving two people not being a good thing. Also, the whole concept of deserving or not deserving is a human emotional construct that has no real meaning. The only thing that matters really is happiness or sadness. We want as many people as possible to be happy and as few as possible to be sad, period.

No, I NEVER said that so long as the final outcome is good, the means in and of themselves are good. I said that if the good of the end OUTWEIGHS the evil of the means, then overall it is justified. If I have to kill someone to save two people, that doesn't mean I think it would be a good thing to kill that person if it wouldn't save two people. But it's a justified evil if it does save two people.

Again, of course not. If developing the super virus somehow was necessary to cure AIDS, then it might be a good decision if the cure for AIDS ended up doing more good than the super virus did evil. That doesn't mean developing the super virus in a situation that would not cure AIDS would be good, of course not, that's a completely different situation.

So no, utilitarianism is clearly correct; you just don't understand it for some reason, even though it's a really simple concept.

Edited by RowanCF, 25 December 2011 - 07:11.


#160
Biotic Sage
  • Members
  • 2,842 messages

RowanCF wrote...

Biotic Sage wrote...

RowanCF wrote...

It all depends on the net outcome. In other words, was more harm done than good, or the other way around? A simple example to illustrate this: I wouldn't kill two people to save one person, but I would kill one person to save two people.


That is unwaveringly utilitarian and the ultimate problem with utilitarianism.  You would actually murder someone to save two other people who were destined to die otherwise?  The person you are murdering, I'm assuming by your argument, is innocent by default...so why does he/she deserve that destiny rather than the two people you are saving?  Also, the consequentialism here is a problem.  So you are basically saying that as long as the net outcome is good, then the action was good as well?  So does that mean that if I am a malicious scientist who is trying to develop a super-virus, but I accidentally develop the cure for AIDS instead, that my action of attempting to develop a super-virus was good?  Intent is important; if I intend the outcome to be good and the outcome is good, then it is undoubtedly a good action.  However, if I intend the outcome to be good but the actual outcome is bad, that is where things go into the grey area.

Also, I'm not really sure what this has to do with the endgame ME1 decision discussion that has been going on...you would have to elaborate on which choice you thought was the "best net outcome" and why.

You just made an extremely simple concept way more complicated than was necessary. First of all, we're all destined to die; that doesn't mean it isn't preferable to hold it off as long as possible, so I don't understand your logic about saving two people not being a good thing. Also, the whole concept of deserving or not deserving is a human emotional construct that has no real meaning. The only thing that matters really is happiness or sadness. We want as many people as possible to be happy and as few as possible to be sad, period.

No, I NEVER said that so long as the final outcome is good, the means in and of themselves are good. I said that if the good of the end OUTWEIGHS the evil of the means, then overall it is justified. If I have to kill someone to save two people, that doesn't mean I think it would be a good thing to kill that person if it wouldn't save two people. But it's a justified evil if it does save two people.

Again, of course not. If developing the super virus somehow was necessary to cure AIDS, then it might be a good decision if the cure for AIDS ended up doing more good than the super virus did evil. That doesn't mean developing the super virus in a situation that would not cure AIDS would be good, of course not, that's a completely different situation.

So no, utilitarianism is clearly correct; you just don't understand it for some reason, even though it's a really simple concept.


Haha I'm sorry I have to laugh at your "you don't understand utilitarianism."  I clearly do, but apparently because I pointed out legitimate criticisms of the philosophy that have been addressed by many of the world's leading philosophers (no, I do not take credit for thinking of them; I merely took philosophy of ethics courses), your conclusion is that I do not understand.

And yes, utilitarianism is a form of consequentialism.  So you are implicitly asserting that the means themselves are good if the ends are good.  If I may be so bold, it is actually you who does not understand the concept if you think otherwise.

So yes, I admit, I did take an extremely simple concept (utilitarianism) and brought up complications that go along with attempting to apply said simple concept to our complex, real world universe.  I did not personally complicate the issue, I merely pointed out the complications.  You can continue to try to quantify lives and happiness in a nice neat little addition/subtraction based formula; I'm not saying that the basic idea of maximizing happiness is a bad one, I'm just saying that such black and white cold, hard mathematics have no place in a human-based ethical system.  We love, we form relationships, we act illogically even when we know it is illogical.  Is a mother supposed to kill her son in order to save 10 complete strangers?  Is she supposed to kill her son even if the act would save 20 complete strangers?  Can we ask anyone to go through life making decisions like that?  It is a fundamental flaw with utilitarianism, unless you yourself believe that is in fact acceptable.  Utilitarianism isn't "wrong," it's just too simple and incomplete; you need to build on it and not just have such a black and white outlook on things.

Edited by Biotic Sage, 25 December 2011 - 07:39.


#161
RowanCF
  • Members
  • 145 messages
I'm not implicitly asserting jack sh_t. You made that connection, but confused it with what I'm actually saying. No, I clearly stated the simple concept that an action is a good action if it causes overall good, even if the most direct consequences are not positive. What you completely fail to understand is that the direct consequences of an event are not the only consequences of that event. If I kill a person to save two people, you immediately jump to the conclusion that killing him was a bad thing because it caused him to die. No ****. But it also caused two people to be saved so overall the action should be judged positively. You are not looking at all the consequences of the action, only the immediate ones.

Faulting utilitarianism just because it may be difficult in certain situations to tell which choice will lead to the most happiness is idiotic. Just because we are unsure of which choice will cause the most happiness doesn't mean utilitarianism is flawed; it just means the philosophy may be difficult to apply in many situations. Making more complicated judgements that are not black and white is still part of utilitarianism, because it is still trying to find the option that will cause the most happiness.

Edited by RowanCF, 25 December 2011 - 07:45.


#162
Biotic Sage
  • Members
  • 2,842 messages

RowanCF wrote...

I'm not implicitly asserting jack sh_t. You made that connection, but confused it with what I'm actually saying. No, I clearly stated the simple concept that an action is a good action if it causes overall good, even if the most direct consequences are not positive. What you completely fail to understand is that the direct consequences of an event are not the only consequences of that event. If I kill a person to save two people, you immediately jump to the conclusion that killing him was a bad thing because it caused him to die. No ****. But it also caused two people to be saved so overall the action should be judged positively. You are not looking at all the consequences of the action, only the immediate ones.

Faulting utilitarianism just because it may be difficult in certain situations to tell which choice will lead to the most happiness is idiotic. Just because we are unsure of which choice will cause the most happiness doesn't mean utilitarianism is flawed; it just means the philosophy may be difficult to apply in many situations. Making more complicated judgements that are not black and white is still part of utilitarianism, because it is still trying to find the option that will cause the most happiness.


I'm sorry dude.  Even if I am completely, 100% sure that murdering someone will maximize happiness, I am not going to murder that person.  You can go ahead and do so, because obviously that's what you believe to be right.  If you can't see the fundamental flaw in that then good luck to you.

You keep going over things that I already understand by the way.  Thanks for the recap, but I still don't believe in murdering an innocent person even if it maximizes the net happiness for two other people.

Edited by Biotic Sage, 25 December 2011 - 07:50.


#163
RowanCF
  • Members
  • 145 messages
Yeah, I can't see it, dude, because it isn't there. If you were 100 percent sure that it would maximize happiness but you held back because it seems bad (based, I'm assuming, on your instincts or intuition), then you are being illogical. You fail to explain otherwise.

It's not about maximizing the net happiness of the survivors; that's why we use the term net. It's about maximizing the net happiness of the people being saved AND being murdered together, which means the good is greater than the bad, meaning there is an overall net positive. If you disagree, it's because you don't want happiness for people (or you just don't understand).
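RowanCF's weighing rule reduces to simple arithmetic: sum the utility change for everyone the act touches, the person killed included, and call the act justified only if the total comes out positive. Here is a minimal sketch of that bookkeeping in Python; the names and the equal ±1.0 weights are invented for illustration, not taken from anyone's post:

```python
# A sketch of RowanCF's "net outcome" rule: an action is justified only if
# the summed utility change across EVERYONE affected -- including the person
# harmed -- is positive. All names and utility values are illustrative.

def net_utility(effects):
    """Sum the utility change for every affected person."""
    return sum(delta for _, delta in effects)

def justified(effects):
    """The good must outweigh the evil overall (net positive)."""
    return net_utility(effects) > 0

# Kill one person to save two: one large loss, two large gains.
kill_one_save_two = [("victim", -1.0), ("saved_a", +1.0), ("saved_b", +1.0)]

# Kill two people to save one: two large losses, one large gain.
kill_two_save_one = [("victim_a", -1.0), ("victim_b", -1.0), ("saved", +1.0)]

print(justified(kill_one_save_two))  # True:  net +1.0, so "justified"
print(justified(kill_two_save_one))  # False: net -1.0, so not justified

# Biotic Sage's objection lives in the inputs: real weights are neither
# knowable nor equal, so the arithmetic is only as sound as its numbers.
```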

Edited by RowanCF, 25 December 2011 - 07:56.


#164
Biotic Sage
  • Members
  • 2,842 messages

RowanCF wrote...

Yeah, I can't see it, dude, because it isn't there. If you were 100 percent sure that it would maximize happiness but you held back because it seems bad (based, I'm assuming, on your instincts or intuition), then you are being illogical. You fail to explain otherwise.


Alright, so here's your dilemma: you know for a fact, with 100% certainty, that killing your wife will save 3 other people who were going to die if you otherwise did nothing. Killing your wife would be a bigger "net happiness" gain overall. You gonna kill her?

#165
RowanCF
  • Members
  • 145 messages
Barring any possible selfishness on my part, based purely on logic, yes.

#166
Mr. Gogeta34
  • Members
  • 4,033 messages

vvDRUCILLAvv wrote...

Mr. Gogeta34 wrote...

vvDRUCILLAvv wrote...

So what do you think, do the ends truly justify the means? Would you be willing to do the unthinkable if the outcome was favorable, and if so, why? Let's discuss this over tea and strumpets, shall we?



In Mass Effect?  No... doing the "morally ideal" thing always yields the best results (no matter what the odds are).


In Mass Effect & real life to be clear.


No, in real life there's also Morally "Acceptable," Morally "Grey," and then shades of the Morally "Unacceptable."

Putting all life in the galaxy at risk to save a few top brass, for instance... sure, it's "morally ideal" (don't leave anyone behind), but also ethically reckless... as that could've cost everyone everything (including the lives of the recently saved top brass)... and real life isn't always that lax with its timetable.

Edited by Mr. Gogeta34, 25 December 2011 - 08:01.


#167
Biotic Sage
  • Members
  • 2,842 messages

RowanCF wrote...

Barring any possible selfishness on my part, based purely on logic, yes.


And thus, the problem with utilitarianism.  I understand its logic, even though you keep saying that I don't (?).  I'm just saying that such a strict adherence to the philosophy is limited in scope.  The basic, underlying concept is fine, but it is an incomplete ethical code.

#168
RowanCF
  • Members
  • 145 messages
How is that a problem with it? We should strive not to be selfish; you're telling me otherwise. What..?

#169
Biotic Sage
  • Members
  • 2,842 messages

RowanCF wrote...

How is that a problem with it? We should strive not to be selfish; you're telling me otherwise. What..?


What's the point of making other people happy if you don't value your own personal happiness at all?  That makes no logical sense.

#170
RowanCF
  • Members
  • 145 messages
Lol. Of course I value my own personal happiness. But I don't value it more than others, when I can help it. That's why I would kill my wife to save three people. There are more of them than me, so overall I have to go with them, even if I still value my own happiness.

#171
Biotic Sage
  • Members
  • 2,842 messages

RowanCF wrote...

Lol. Of course I value my own personal happiness. But I don't value it more than others, when I can help it. That's why I would kill my wife to save three people. There are more of them than me, so overall I have to go with them, even if I still value my own happiness.


There's a place between selfishness and selflessness; you don't have to be just one or the other. Quantifying the value of people's lives and counting happiness is just ridiculous in practical application. We can aspire to it, but when we get down to real-life situations the philosophy is rarely useful, because there are always extenuating circumstances, exceptions, and considerations. I would never expect a mother to save me over her own son. I would never expect a husband to kill his wife to save me and 3 complete strangers. I'm invoking the common sense rule here.

Edited by Biotic Sage, 25 December 2011 - 08:08.


#172
Mr. Gogeta34
  • Members
  • 4,033 messages

Saphra Deden wrote...

Swampthing500 wrote...

Sovereign will have control in a few moments because Saren will transfer control. It hasn't actually been done yet.


Or he'll have control in a few minutes because Saren already did what he needed to do and Sovereign is already in the system. After all, might that not be why he had to dock with the Citadel tower itself?

What if I'm right? What if you're wrong?

That's the problem here. You assume you're right and because of that you're prepared to take a horrible gamble that will result in the Reapers returning if you're wrong.

I'll play it safe and make damn sure the Reapers aren't getting through the relay before I worry about anything. As long as we stop the Reapers now, we may have months or years or centuries before the Reapers return.

The point is, we'll have time.



Whoah whoah whoah... that point is not up for debate... According to the game, Sovereign would regain control of the station in an urgent but unspecified amount of time.

That's why (once you gain control of all systems) your squadmate instantly yells at you to open the station's arms so that you can take Sovereign down before then. Vigil's software was only temporary and bought them an unspecified amount of time. "It might give you a chance against Sovereign." -Vigil

Edited by Mr. Gogeta34, 25 December 2011 - 08:10.


#173
Swampthing500
  • Members
  • 220 messages

Biotic Sage wrote...

Swampthing500 wrote...

Saphra Deden wrote...

Yes, and I'm telling you that by the time you reach Saren his job may already be done.


Then why possess Saren?


He possesses Saren after you make the decision.  Without metagaming, Shepard has no knowledge of this.  Therefore you cannot use it to justify the real-time decision.


The claim was that Sovereign may already have had control of the Citadel. I said that if Sovereign already had control transferred to him, why bother possessing Saren's body?

We were not discussing the decision to save the council in that context.

#174
Biotic Sage
  • Members
  • 2,842 messages

Swampthing500 wrote...

Biotic Sage wrote...

Swampthing500 wrote...

Saphra Deden wrote...

Yes, and I'm telling you that by the time you reach Saren his job may already be done.


Then why possess Saren?


He possesses Saren after you make the decision.  Without metagaming, Shepard has no knowledge of this.  Therefore you cannot use it to justify the real-time decision.


The claim was that Sovereign may already have had control of the Citadel. I said that if Sovereign already had control transferred to him, why bother possessing Saren's body?

We were not discussing the decision to save the council in that context.


Well in that case I'm not disagreeing with you.  Sovereign would not have bothered if indeed he already had control transferred to him.  However, isn't it a moot point unless you are using it to shed light on the decision to save the council?

#175
RowanCF
  • Members
  • 145 messages
I value my happiness exactly the same as someone else's. Obviously you can't always be sure of things, but you can aspire to be, and it's fairly obvious in your example about my wife which situation is most likely to cause the most happiness. Any moral person therefore should kill their wife. In a situation that's less clear, we can't blame the person either way. That doesn't mean utilitarianism is to blame. But we should still strive to apply it as accurately as possible. Common sense IS utilitarianism.