
Why Cerberus cannot be defended


1381 replies to this topic

#251
Arkitekt
  • Members
  • 2,360 posts

Lotion Soronnar wrote...

And the books establish TIM quite nicely. It's damn clear he's all about humanity,

If he wanted personal power, he could have got it in a million better ways.


He seems to be on a very precise path towards the ultimate power in the galaxy, something no other individual has ever been so close to having, so I don't know if you are trolling or just distracted here.

Either way, conceding that the books "establish" TIM, we also see him in ME2 countering Shepard's snide remark about whether they were doing good for humanity or for Cerberus: "Shepard, Cerberus is humanity!" And here we have your answer. TIM is all for humanity, but for TIM, humanity *is* Cerberus, and Cerberus is built entirely to work with TIM set right at the center, with key personnel even chosen for their total loyalty and almost godlike faith in the boss.

IOW, it's damned clear he's all about himself, he just thinks he *is* humanity himself!

Edited by Arkitekt, 12 November 2011 - 11:12.


#252
Dean_the_Young
  • Members
  • 20,676 posts

Arkitekt wrote...

Exactly. And this failure to reach any precision makes utilitarianism a utopian dream for those more statistically and scientifically inclined. I am scientifically inclined, but I stopped considering this methodology useful a long time ago...

Strange. The scientifically inclined should be the first to realize that utilitarianism isn't a utopian philosophy. It has no 'end-game.' Nor does the philosophy need an absolute unit: it only needs relative amounts, which is an understandable relationship between things that can't be precisely measured. If you lack a thermometer, you can still tell 'hot' from 'cold.'

The scientifically inclined should also recognize that imperfection always exists. The pinnacle of scientific explanation, the model, is nearly always an over-simplification that fails in various cases.

Actually, you said the same thing in different wording when you said there is "no constant in dealing with people". That's the problem. And the soft sciences base their thinking on one- or two-sigma papers with dubious methodologies; once you try to connect two or three papers to reach a conclusion, the probabilities degrade and you won't reach any significant result.
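The degradation being described here is just multiplication of probabilities. A minimal sketch, assuming for illustration that each paper's finding is independent and holds with probability p (both numbers are illustrative, not from any real study):

```python
# If each finding holds with probability p, and the findings are
# independent, a conclusion that needs all n of them to be true
# holds with probability p ** n.
def chained_confidence(p: float, n: int) -> float:
    return p ** n

# Three chained two-sigma (95%) results:
print(chained_confidence(0.95, 3))  # 0.857375
# Three chained one-sigma (68%) results:
print(chained_confidence(0.68, 3))  # 0.314432
```

So a conclusion resting on three one-sigma papers is, under these assumptions, more likely false than true.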

'Soft' sciences defy quantization because they are too variable to be captured with our current understanding. That's been true of pretty much every science for most of its history. Electricity, chemistry, and especially biology: well before we understood how they worked in detail, we could only understand them in general.

Did that make them any less meaningful or real sciences? 


Soft sciences exist, and so do teleological ethics, and public good and individual good, and so on. They exist either as human activities or as ideas. I don't subscribe to the idea that something like an ethic or a moral is "valid" or not. In relation to what? To the universe? To God? To myself? For me, I have the ethics that I have built over the years I've been living here, but they are neither valid nor invalid, since there's no exam I must pass in order to get an official seal of approval.

If they work for you in a state acceptable to society, they are valid. All definitions require a certain amount of community consensus. The fact that you understand my counting '8, 9, 10, 11' as a base 10 number system just shows that. I could count '1, 10,' or '1, 2, 3, 10,' and be completely correct by another definition. Definitions are a matter of culture as much as of non-cultural objectivity.

But, since we accept that the un-definable can still be maximized, we can agree that a philosophy of maximizing it works.


Cerberus can be defended if you think that they can pull off what they set out to do and if you agree with the end result.

That's it. No teleological or metaphysical shenanigans are required.

Oddly, that's a distinctly teleological shenanigan of a justification, and it undermines what you wrote right after.

Soft sciences exist, they work with "measured units", and they are mostly bunk. Not to say that I would do better: the problem is the subject, which is far too complex. Many bright people have gathered some intuitive insights and correlated them with some interesting numbers. Thing is, with such low demands on accuracy, a lot of rubbish and shenanigans get through. This is why one cannot base one's morals upon these sciences, although one can base one's morals on some sufficiently simple insights that these sciences seem to hint at. Because that's the best we have.

If you can't do better, why are they 'bunk?' Because they lack numerical simplicity? Because there can be such a thing as intuition? Or because relative amounts are simply too imprecise for your liking? That a science is still basic and has plenty of charlatans doesn't mean the whole science is nothing but charlatans. You can't simply disregard a soft science because it is soft.

This seems less about a philosophy and more about your displeasure that something can be maximized without quantifying it.

#253
Josh123914
  • Members
  • 245 posts

GodWood wrote...

sponge56 wrote...

GodWood wrote...
Again, not really. Racism against Reapers seems pretty justified. 

Umm, no it doesn't. Hating them because of their race (organic/machine hybrid) would be a racist view. Hating them because they want to kill everything is different.

No, hating Reapers is racist, regardless of the reasoning behind it.

You filthy racist.

Oh yeah? Well, by your logic everybody could be a racist. I'm a racist, you're a racist... what about cats, are they racist?

No, but if you hold a sign up next to one it could LOOK racist- BOO, DEATH TO THE RACIST!

#254
Arkitekt
  • Members
  • 2,360 posts

Dean_the_Young wrote...

Arkitekt wrote...

Exactly. And this failure to reach any precision makes utilitarianism a utopian dream for those more statistically and scientifically inclined. I am scientifically inclined, but I stopped considering this methodology useful a long time ago...

Strange. The scientifically inclined should be the first to realize that utilitarianism isn't a utopian philosophy. It has no 'end-game.' Nor does the philosophy need an absolute unit: it only needs relative amounts, which is an understandable relationship between things that can't be precisely measured. If you lack a thermometer, you can still tell 'hot' from 'cold.'

The scientifically inclined should also recognize that imperfection always exists. The pinnacle of scientific explanation, the model, is nearly always an over-simplification that fails in various cases.


Sure, thing is that in these cases, "relative" is a euphemism. One sigma is 68%. Two sigma is 95%. Using dubious methodologies, all biased to hit the golden p value so one gets one's paper published in the journals, is a recipe for disaster in academia. Even in the less "soft" sciences this has become a systemic problem (in medicine, for instance, but there are others). When it comes to those where "soft" is also a euphemism (like the social or political sciences), it would probably be better if they left the word "science" out completely, so as not to fool a bystander who still has some standards for the quality a "science" should produce.
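The sigma figures quoted in this exchange can be checked directly: for a normal distribution, the probability of landing within k standard deviations of the mean is erf(k/√2). A quick sketch using only the standard library:

```python
import math

def within_k_sigma(k: float) -> float:
    # P(|X - mu| < k * sigma) for a normally distributed X.
    return math.erf(k / math.sqrt(2))

print(round(within_k_sigma(1), 4))  # 0.6827 -- "one sigma is 68%"
print(round(within_k_sigma(2), 4))  # 0.9545 -- "two sigma is 95%"
```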

'Soft' sciences defy quantization because they are too variable to be captured with our current understanding. That's been true of pretty much every science for most of its history. Electricity, chemistry, and especially biology: well before we understood how they worked in detail, we could only understand them in general.

Did that make them any less meaningful or real sciences?


Are you trying to make a teleological argument about the sciences themselves? Sure, all knowledge is of the same "type", and when you have really bad results in one type of science, I am not saying you shouldn't try harder. Yeah, you should try harder. Perhaps that will increase the quality of the end product. What I am saying is that just as you shouldn't really take seriously the scientific opinions on chemistry in Newton's time, you also shouldn't take seriously current "findings" in these soft sciences.

If they work for you in a state acceptable to society, they are valid.


Exactly. If you define the ultimate reference point as *me*, then we can work something out. Notice that this reference point is never established per se. It is not "self-evident".

All definitions require a certain amount of community consensus. The fact that you understand my counting '8, 9, 10, 11' as a base 10 number system just shows that. I could count '1, 10,' or '1, 2, 3, 10,' and be completely correct by another definition. Definitions are a matter of culture as much as of non-cultural objectivity.


2 plus 2 is 4.

But, since we accept that the un-definable can still be maximized, we can agree that a philosophy of maximizing it works.


Only if we agree that the invisible unicorn may have a point to show us and that we are just incapable of seeing it.

If we cannot define something, then that something is irrelevant to our discussion. Either we try to define it, or we don't. We cannot maximize something we haven't even defined other than by sheer accident or luck.

Oddly, that's a distinctly teleological shenanigan of a justification, and it undermines what you wrote right after.


Come on, don't troll my arse.

If you can't do better, why are they 'bunk?' Because they lack numerical simplicity? Because there can be such a thing as intuition? Or because relative amounts are simply too imprecise for your liking? That a science is still basic and has plenty of charlatans doesn't mean the whole science is nothing but charlatans. You can't simply disregard a soft science because it is soft.


I don't "dismiss" any of it, since it's all we've got. I just don't take it too seriously, just as I wouldn't take Newton's grasp of chemistry or Kepler's astrological abilities without a good grain of salt.

This seems less about a philosophy and more about your displeasure that something can be maximized without quantifying it.


How can something be "maximized" (hint: a numerical term) without "quantification"? You, sir, are talking nonsense. I don't mind it, I love Monty Python, and so I love this ****, but you should be aware of the shenanigans that even our own brains sometimes produce: we think they make sense when they are contradictory in the very few words used.

What usually happens is that a gut feeling of "goodness" is reached about some conclusion, some kind of justice, some system of sharing stuff, etc., and some people will say that the good has been "maximized". In this case, however, the only thing that was "maximized" was the number of people not-unhappy about a certain situation. Well, that seems quantitative to me. And also quite unscalable.

#255
GodWood
  • Members
  • 7,954 posts

Josh123914 wrote...
Oh yeah? Well, by your logic everybody could be a racist.

Why yes. Unless of course they choose not to hate all members of a race.

It's not that hard a concept to grasp.

#256
RiouHotaru
  • Members
  • 4,059 posts
It's difficult to argue that Cerberus has anything resembling a good idea, for the reasons Cheez stated. It's not Cerberus' intentions that cause the problems so much as the methods. SURELY there have to be less extreme solutions than kidnapping children to force them to become biotics through what amounts to torture.

Or doing experiments on Husks (the infamous and much-maligned UNC: Colony of the Dead mission in ME1), Thorian Creepers (the Exogeni side-missions), and Rachni.

Or killing Alliance Admirals for wanting the truth about the men who died due to Cerberus experimenting with Thresher Maws.

Or the various other Thresher Maw related experiments Cerberus is responsible for (Akuze, Toombs)

Or Project Overlord, easily their most heinous experiment yet.

Or the experiments they did as detailed in the CDN about kidnapping Asari and humans and injecting them with drugs to try and BLOCK biotics.

It's incredibly difficult if not nigh impossible to get behind a group like this or handwave their crimes when they're so ghastly. If it was JUST kidnapping or JUST controlled experimentation, it might be easier to be lenient.

But when they go to all these extremes and then claim "But we're doing it for humanity so it's all good!", it starts becoming ridiculous.

And you can't chalk it up to "BioWare's writing/characterization sucks" either, because at minimum half the examples I used? Were in ME1.

#257
Dean_the_Young
  • Members
  • 20,676 posts
Arkitekt wrote...

Sure, thing is that in these cases, "relative" is a euphemism. One sigma is 68%. Two sigma is 95%. Using dubious methodologies, all biased to hit the golden p value so one gets one's paper published in the journals, is a recipe for disaster in academia. Even in the less "soft" sciences this has become a systemic problem (in medicine, for instance, but there are others). When it comes to those where "soft" is also a euphemism (like the social or political sciences), it would probably be better if they left the word "science" out completely, so as not to fool a bystander who still has some standards for the quality a "science" should produce.

Now you're only fooling yourself. Hard sciences are much the same in many respects.

Arkitekt wrote...

Are you trying to make a teleological argument about the sciences themselves?

Nope. Only pointing out that pretty much every science, even math, had long periods in which it fit your 'not real science' categorization.

Arkitekt wrote...

Sure, all knowledge is of the same "type", and when you have really bad results in one type of science, I am not saying you shouldn't try harder. Yeah, you should try harder. Perhaps that will increase the quality of the end product. What I am saying is that just as you shouldn't really take seriously the scientific opinions on chemistry in Newton's time, you also shouldn't take seriously current "findings" in these soft sciences.

But if we hadn't taken the science of Newton's time seriously, we never would have developed the science of today. And if we don't take the science of today seriously, we won't build the science of tomorrow.

Science is a constant attempt to grope in the dark to understand the questions we haven't learned to ask yet. That we've gotten better makes our over-simplifications and assumptions no less imprecise. When it gets down to it, we still don't understand something as prevalent as gravity... and we've gone to the moon!

Arkitekt wrote...

2 plus 2 is 4.

It's 100 (base 2).

Or it's 10 (base 4).
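The base point being made here is mechanical: the quantity four never changes, only its written representation does. A small sketch (the `to_base` helper is mine, since Python only has built-in formatting for bases 2, 8, and 16):

```python
def to_base(n: int, base: int) -> str:
    # Render a non-negative integer in the given base (digits 0-9 only).
    if n == 0:
        return "0"
    digits = []
    while n:
        n, r = divmod(n, base)
        digits.append(str(r))
    return "".join(reversed(digits))

print(to_base(2 + 2, 10))  # '4'
print(to_base(2 + 2, 2))   # '100'
print(to_base(2 + 2, 4))   # '10'
```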


Arkitekt wrote...

Only if we agree that the invisible unicorn may have a point to show us and that we are just incapable of seeing it.

Unicorns don't exist. Happiness does. That we have no unit for it doesn't change that it exists.

Arkitekt wrote...

If we cannot define something, then that something is irrelevant to our discussion.

Define pi.

If that's too hard, define the speed of light.


Arkitekt wrote...

Come on, don't troll my arse.

You just made the most basic results-oriented justification of all time: 'If you agree with the results, it's justified.'

Arkitekt wrote...

I don't "dismiss" any of it, since it's all we've got. I just don't take it too seriously, just as I wouldn't take Newton's grasp of chemistry or Kepler's astrological abilities without a good grain of salt.

Fortunately, we listen to Newton about physics instead... even though he too was horribly misguided about the nitty-gritty.

Arkitekt wrote...

How can something be "maximized" (hint: a numerical term) without "quantification"? You, sir, are talking nonsense. I don't mind it, I love Monty Python, and so I love this ****, but you should be aware of the shenanigans that even our own brains sometimes produce: we think they make sense when they are contradictory in the very few words used.

Ever build sand castles? Split a cake? How do you know which one is bigger, when you can't count it at a precise level?

Relative analysis without precise quantification is fundamental math. The most basic math of all isn't 1 + 1 = 10 (base 2). It's 10 > 1.


Arkitekt wrote...

What usually happens is that a gut feeling of "goodness" is reached about some conclusion, some kind of justice, some system of sharing stuff, etc., and some people will say that the good has been "maximized". In this case, however, the only thing that was "maximized" was the number of people not-unhappy about a certain situation. Well, that seems quantitative to me. And also quite unscalable.

How is it unscalable?

We recognize that unhappy people are not happy (zero), while not-unhappy people can be partly happy (a partial sum: call it a ratio between 1 and 0). The more not-unhappy people you have at a certain ratio, the greater happiness you have.
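This scoring scheme can be written down directly: unhappy people count as 0, not-unhappy people as a ratio in (0, 1], and comparing two situations needs only the relative ordering of the sums, never an absolute unit. A minimal sketch with invented per-person scores:

```python
def total_happiness(scores):
    # Each score: 0 for an unhappy person, otherwise a ratio in (0, 1].
    return sum(scores)

# Two hypothetical situations; the numbers are purely illustrative.
before = [0.0, 0.2, 0.9]
after = [0.4, 0.5, 0.6]

# No unit of happiness is needed to compare them, only '>'.
print(total_happiness(after) > total_happiness(before))  # True
```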




Time's up for me and internet today. Laters.

Edited by Dean_the_Young, 12 November 2011 - 11:53.


#258
Arkitekt
  • Members
  • 2,360 posts
Dean_the_Young wrote...

Now you're only fooling yourself. Hard sciences are much the same in many respects.

They have the same problems. But not at the same rate. Not by a long shot. And that rate makes all the difference in the world.

Dean_the_Young wrote...

But if we hadn't taken the science of Newton's time seriously, we never would have developed the science of today. And if we don't take the science of today seriously, we won't build the science of tomorrow.

We take "science" seriously, just not its current results. You are not understanding the issues here. The methodology has been known for ages, and when someone tries to say that their field is not "actually very numerical", or that they don't need to predict things correctly, etc., we know they are deviating from the only path that has ever produced quality work.

Dean_the_Young wrote...

2 plus 2 is 4.

It's 100 (base 2).

Or it's 10 (base 4).

I was trying to say that your speech on the relativity of truth is basic 101 to me. Now you try to teach me bases. lol


Dean_the_Young wrote...

Unicorns don't exist. Happiness does. That we have no unit for it doesn't change that it exists.

Who says we don't? We can manipulate it quite well. Sometimes scarily:

http://www.popsci.com/node/4089

Dean_the_Young wrote...

Define pi.

The ratio between the circumference of a circle and its diameter.

Didn't you learn this? 

Dean_the_Young wrote...

If that's too hard, define the speed of light.

The speed of light? You mean the measurement of how fast a photon travels, or how much space it travels as measured in rulers per unit of time as measured in clocks?

What is wrong with you?

Dean_the_Young wrote...

You just made the most basic results-oriented justification of all time: 'If you agree with the results, it's justified.'

Yeah, when you cut my justification in half it sounds silly. Because that was only half of what I said. But the rest didn't fit your nice picture of chasing my arse as a teleological hypocrite, did it? So you conveniently forgot it.

Dean_the_Young wrote...

Fortunately, we listen to Newton about physics instead... even though he too was horribly misguided about the nitty-gritty.

Nice detour. I don't believe the point went over your head; you just pretend it did, by writing a completely irrelevant sentence there! ;)

Dean_the_Young wrote...

Ever build sand castles? Split a cake? How do you know which one is bigger, when you can't count it at a precise level?

Are you saying that sand castles aren't quantifiable? And easily so? I can use my hands. I can guesstimate their size with my eyes, sure. But our senses fail us all the time. That's the point.

Dean_the_Young wrote...

Relative analysis without precise quantification is fundamental math. The most basic math of all isn't 1 + 1 = 10 (base 2). It's 10 > 1.

Please try to write sensible non-poetic stuff when we are dealing with rigorous ****, will ya? Can you do such a thing? Are you able to? You are almost sounding like Chopra.

(what? You want a rebuttal to that? OK. Who is to define what is "the most basic"? You? What does it even mean? How can we measure it? Is it even measurable? None of these things actually, it's just woo-talk to sound wooish)

Dean_the_Young wrote...

How is it unscalable?

We recognize that unhappy people are not happy (zero), while not-unhappy people can be partly happy (a partial sum: call it a ratio between 1 and 0). The more not-unhappy people you have at a certain ratio, the greater happiness you have.

Yeah, but since you and I have reached the obvious conclusion that human beings are frakkin' complex and these things are always changing, you won't be able to do much relevant science here. That's what I mean by "unscalable", which was meant as an entrepreneurial term.

#259
SandTrout
  • Members
  • 4,171 posts

Dean_the_Young wrote...

didymos1120 wrote...

SandTrout wrote...

In ME, the xeno-national organizations operate on the principle that assimilation between the different species is intrinsically not possible. This is a true premise because interbreeding is not possible.


It's an unjustified premise.  Assimilation is not the same thing as interbreeding, nor have you demonstrated that the latter is required for cultural assimilation between any two given species.

Case in point, the Drell-Hanar relationship.

Fair enough, however, even in the Drell-Hanar relationship, one species is dominant and the other subservient. The Drell-Hanar relationship is a de facto caste system, even if it has not been imposed by force.

Also, interbreeding is necessary for true cultural assimilation because, without the ability to create a society in which it is feasible for the two populations to freely intermix, the logical result is that each group will cluster with its own for the largely benign purpose of reproduction.

Even if a member of Species A moves into an area that is almost completely Species B in order to run a business or some such thing, that member will eventually need to return to a Species A-populated area in order to have access to a reasonably sized pool of potential mates. If the said member of Species A does not return to its "homeland", it runs the very real risk of dying childless, resulting in the death of any assimilated cultural values.

On the off chance of finding a compatible mate within the primarily Species B area, the chances of their offspring finding a viable mate are still drastically reduced, arguably more so than for the first-generation immigrant, because the likelihood that any other member of Species A they meet is closely related is increased.

#260
Yezdigerd
  • Members
  • 585 posts

Dean_the_Young wrote...

Yezdigerd wrote...

I'm kinda curious: those people who see the galaxy as a dog-eat-dog place, how do you explain that the Council didn't simply subjugate Earth? The Turians alone could have done it, but the Council mediated, and not only that but allowed humanity to colonize other star systems.

Because the Council doesn't want the Turian Hierarchy running amok, and the trouble the Turians caused the Humans would rebound through other species (and the Terminus) who don't like it.

As for the lack of equality on the Council, is that really strange and unfair? Contribution, size, and commitment naturally influence power in an organization. I doubt many find it unfair that the US has more of a say in the UN than Andorra.

But the UN doesn't go 'you're black, you don't have a vote, you're white, so you do.'

I think everyone can agree that even where hierarchies exist, racist caste systems do not need to.


The common solution to the balance of power in our history has been to divide the spoils into spheres of interest. Or possibly to take the opportunity to make war on your rival once they engaged the new power. Not to take a Star Trek Prime Directive approach.

Earth is one solar system among billions with no ties to any galactic power; why should anyone bother to care? The Turians took hundreds of worlds from the Krogan empire and the Council was happy with that.

Fear of Turian expansionism could be a reason, sure, but I didn't note any signs of concern over this in the game. No doubt the Salarians have some gas, virus, mind-control device, or Yahg army in their contingencies if the Turians get out of line, but the galactic community as a whole seems satisfied with the fair and honest Turian galaxy police. Other minor species might lack political acknowledgement, but they still seem to be protected by Council law. I just mean that the fact the Council actually exists in such a benevolent form seems to be a great blessing for humanity.

As for racism/speciesism: if notable differences actually exist, I don't find it strange. What if the average Salarian had an IQ of 400, and we basically seemed to have the thought processes of dogs to them? Would they be prejudiced to try to keep humans away from decision-making functions? Or if the Asari were strong empaths and knew more about our thoughts and needs than we do, would they be wrong to act upon it?

#261
Xilizhra
  • Members
  • 30,873 posts

Dean_the_Young wrote...

Xilizhra wrote...
Excellent, so consequentialism is Paragon.

Nope*.


*Not without metagaming, that is. All philosophies can only work on what they know at the time going forward, not retroactive justification fallacies.

I'm not a deontologist.

Paragon has a very strong deontological slant.

I'm not using retroactive justification. I have reasons for everything I do before doing it.

And I, personally, am not a deontologist.

#262
Zatwu
  • Members
  • 138 posts

And now you're repeating your first argument and ignoring your second.

Actually, it's closer to abandoning your second argument because now you're even conceding that Cerberus has been changed from all prior depictions... and that doing so is bad writing.


Ha, no. I've done nothing of the sort; don't put words in my mouth. BioWare says they had the trilogy basically planned out, and I believe them. Cerberus was always gonna go bad, and their depiction has not changed. They've always been crazy.

I'm saying you can think that's bad writing if you want (I don't), but these things are subjective. What you can't argue with is that right now Cerberus is basically indefensible.

Edited by Zatwu, 13 November 2011 - 01:35.


#263
Aumata
  • Members
  • 417 posts
Because BioWare can't write a grey-area, competent organization. Beyond that, they suffer from bad-guy syndrome, so Cerberus can't be defended at all, despite the fact that the Council is just as bad as they are, just competent. As some have pointed out, the Council exists for the three higher species: turians, salarians, and asari. The only reason they manage to put up with humans is that the human military is extremely small relative to humanity's total population, which makes them a sleeping giant.

#264
Zatwu
  • Members
  • 138 posts
I disagree, I think there are lots of grey areas that are very well done, like what to do with the Geth or the Collector base.

I also disagree that the Council is just as bad as Cerberus. The Council isn't currently trying to commit genocide against anyone. The Councilors who did commit genocide against the Rachni and the Krogan are long dead. And the reason Humanity is on the Council is its accomplishments against Sovereign.

#265
Aumata
  • Members
  • 417 posts
I said both: Cerberus is supposed to be grey, but they got the bad-guy syndrome, and Cerberus hasn't committed genocide either. BioWare's writers really hammer that Cerberus is bad, and for me that gets in the way of what TIM and Cerberus could have been. Then again, ME2's main story was horrible anyway, but I find it hard to defend a group that is shown as incompetent despite the evidence shown otherwise. It really irks me, but all BioWare shows is their screw-ups and Shepard having to clean up. I am hoping that in ME3 they are written better than in ME2 and actually shown succeeding. Till then I really can't defend them, despite the fact that my main Shepard gave them the base.

#266
sponge56
  • Members
  • 481 posts

Aumata wrote...

Because BioWare can't write a grey-area, competent organization.


The rachni queen's survival/death and the fate of the geth were pretty morally grey. So was the Collector base, in my opinion. I was always under the impression that BioWare excelled at providing morally grey choices.

#267
Guest_Cthulhu42_*
  • Guests

sponge56 wrote...

Aumata wrote...

Because BioWare can't write a grey-area, competent organization.


The rachni queen's survival/death and the fate of the geth were pretty morally grey. So was the Collector base, in my opinion. I was always under the impression that BioWare excelled at providing morally grey choices.

What may seem to be gray to one person may be black and white to another.

#268
sponge56
  • Members
  • 481 posts

Cthulhu42 wrote...

What may seem to be gray to one person may be black and white to another.


True, but then grey, black, and white all become meaningless in a discussion, as nobody is speaking the same language.

#269
didymos1120
  • Members
  • 14,580 posts

sponge56 wrote...

Cthulhu42 wrote...

What may seem to be gray to one person may be black and white to another.


True, but then grey, black, and white all become meaningless in a discussion, as nobody is speaking the same language.


Um, no, what C42's saying is that some people perceive a given moral issue as simple and straightforward, while others look at the same issue and see no clear answers. It's nothing to do with whether or not they agree on the definition of "gray" in the moral sense.

Edited by didymos1120, 13 November 2011 - 03:11.


#270
XEternalXDreamsX
  • Members
  • 499 posts
I was wondering if anyone has spoken on this. The (Renegade) options to pretty much agree with TIM during ME2 don't improve relations between Shep and Cerberus in ME3. So I say: why not let Rene-Sheps work with Cerberus to find an alternate way to defeat the Reapers in ME3? Off-the-wall question... sorry!

#271
Schneidend
  • Members
  • 5,768 posts

sponge56 wrote...

There is no real plausible explanation as to how some people on these forums can constantly defend Cerberus.

1) Cerberus is an inherently RACIST/SPECIESIST organisation.


Do you honestly think the other races don't work toward giving themselves the edge and keeping it? STG, asari commandos, whatever turian special ops is called: they all serve the same function as Cerberus.

2) Humanity doesn't really need to ascend. A war with the other alien species is highly unlikely, or even impossible, due to the size of Humanity's fleet. While not as large as the Turians', it poses a significant enough threat that war would be very costly for both sides. Also, Humanity is constantly given more powers and freedoms by the Council in a very short space of time, so why do we need such an organisation?


We need such an organization because all nations have needed such organizations since the dawn of civilization, and because the other races won't be disbanding their spec/black ops groups any time soon, either.

3) Many pro-Cerberus supporters go on about how Cerberus represents and acts for humanity. It does nothing of the sort. Cerberus acts on behalf of the Illusive Man, an individual figure who calls all the shots in his organisation. The Alliance is run by elected officials who are voted into their respective positions; who elected the Illusive Man? His funders trust he will act in humanity's best interests, but I doubt they know or have a say in anything that he actually does.


The Illusive Man doesn't need to be elected. He's the leader of a black ops cabal, not a senator.

The investors have a great deal of say in what Cerberus does. They control the Cerberus wallet. Without their contributions, Cerberus would have to rely solely on its front companies for funding, which are expendable assets meant to protect the organization from scrutiny more so than to necessarily provide income.

#272
Phaedon

Phaedon
  • Members
  • 8 617 messages
[quote]Dean_the_Young wrote...

On the other hand, if you quoted or made clear who you were talking to or what exactly you were referencing, you'd make far more sense.

As it is, you come off as someone who, tired, is jumping from A to D without passing through C.[/quote]
That's a very illogical assumption when I refer to something very specific which has never been mentioned in your posts. Failing Ctrl+F, there's always Google and Wikipedia.


[quote]In units? No. In relative terms? Sure.[/quote]
-Utils=Units, mate.
-Relative terms? How? Utilitarianism is about achieving total utility for all members of humanity. It's anthropocentric anyway, therefore flawed, and bases itself on a system of utility. If you put any kind of relativity into this, all moral hell breaks loose.

If I do Action A, you will bring over your morality calculator and measure its consequences. While realizing that an absolute sum of utility can NEVER be measured, you'll just try to "eyeball it". So, you'll realize that my action has a negative short-term effect for this generation of humans, but a positive long-term effect for the next. Because you have accepted that total utilitarianism is a pretty terrible and utopian idea, you'll say, "I am almost sure that the long-term effects outweigh the short-term ones, so you are a good person, Phaedon, even after you murdered that guy in cold blood." Here's the catch: what about the third generation of humans that will have to face the consequences of my action? Maybe it will be negative for them. And what about the fourth, fifth and sixth ones? Maybe it ends up being positive for them.

Thing is, if you claim that there is a measurable greater good:

a) You should go ahead and solve all moral dilemmas by yourself,

b) Realize that in fact there isn't one, because what you are calculating is essentially only part of the greater good. You are literally returning to the general idea of consequentialism or even pragmatic ethics.



[quote]But we can predict the consequences closer, and more relevant, to us. We can also hold people to reasonable standards (which can be held), as opposed to unreasonable standards (which can not).

There's nothing in consequence-based morality that requires you to have perfect knowledge of the entire world.[/quote]
That's not utilitarian or consequentialist at all, and that's the darn problem.

I don't care only about the effects that my action will have on you and today's humanity; it can affect all of history from now on.

"There's nothing in consequence-based morality that requires you to have perfect knowledge of the entire world." Yes, there is. I ask you to tell me if the action is good or bad for the greater good, and you are telling me, "Hey, it's probably a good thing for today's or tomorrow's society." If you want to adapt to reasonable standards, you'll stop telling me that what I do is good or bad based on its consequences. Why? Because you tell me that "the consequences are probably bad", when I can say, no, they are not. Prove me wrong. Or, you know, just prove yourself right.

Our actions change history permanently, and if you are going to calculate only their short-term effects (despite the inherent flaws of doing so), you are being a hypocrite, since you don't care about the consequences, overall, at all.



[quote]That doesn't show that considering yourself as part of a group you belong to would be wrong. Imperfect, yes, but imperfections are a fact.[/quote]
This is a debate, not a poetry contest.

[quote]A large part of the 'hard' sciences is modeling, and modeling always accepts imprecisions and errors inherent in the simplifications. Take the number 'pi', for example. No one has ever defined what it is... nor do we care. The concept of pi is useful, practical, and real. With it we can do what mathematicians for thousands of years could not.


(Same with the number zero. Wonderful example of a non-existent reality.)[/quote]
Here's the problem. No mathematician is trying to find an exact result, and every physicist accepts that almost everything in this universe is 100% relative.

You say that you support a specific philosophy, when in fact you are just debating against it. You are openly admitting that you are not trying to maximize overall happiness, just local happiness. By doing so, you are accepting that the happiness of one person can be measured, even relatively, throughout time, and that's just another major flaw. What you actually support is not even a thing. It's more like a logical flaw, approaching pragmatic ethics more than anything else.



[quote]Except they don't have to 'knowingly' be doing it. We also punish people who enable harmful actions without knowing, when we believe it was well within their power to realize.[/quote] 
That's exactly what "knowing" and "knowingly" mean.

[quote]'Reckless endangerment', or the concept of regulation. Regulation[/quote]
Okay. Do you believe that anyone who just got issued a DUI isn't guilty of it? As long as they were in a position to think about what they could do, they are perfectly guilty of endangering the lives of others. Driving under the influence is not too different from aiming a gun at someone. Hyperbole? Yes. Invalid? No.

[quote]
But you know, manslaughter is the same thing as homicide, right? The consequences are the same, no?[/quote]No, actually. A number of countries differentiate murder by degrees: first-degree murder, second-degree murder, third-degree murder. Then there's negligent manslaughter versus constructive manslaughter.


[quote]
And underage criminals are treated just like adult ones.[/quote]No they aren't. Not in many, many countries at least.
[quote]
Mentally ill criminals are treated just like sane ones.[/quote]No they aren't. This is why we have insanity pleas, or the concept of Mens Rea. The concept of hate crimes also comes into relevance with the motivation.
I can't help but ask, how did you miss the evident sarcasm? And why are you defending my position?

Manslaughter - Homicide
Same effects, different intents, different punishment.

Crime while sane - Crime under the influence of pretty much anything - Crime while insane
Same effects, different or not fully realized intents, different punishment.

Underage crime - Adult crime
Same effects, not fully realized intents in one of the cases, different punishment.

The τέλος is definitely the same. The way the criminal is treated? Hmm. Something to think about.


[quote]But no court can investigate any crime to an infinite degree, which was the extreme you were approaching. Nor can any court investigate all crimes.[/quote]
No. A court MUST investigate all crimes. That's the point of a trial.
Infinite degree? Virtue ethics doesn't demand full execution, and recognizes that every human is flawed to a degree. Courts try to be objective but still realize that they are subjective to some extent.

[quote]Courts do things such as class-action lawsuits: a number of crimes are collectively gathered for an investigation. Then we have justice systems which cannot handle all appeals for investigation, and so resort to plea bargains in lieu of intense investigations. We have statutes of limitations and jurisdiction limitations as well.[/quote]
Have you ever seen someone convicted on all charges unless every single one is proved? I haven't. Also, read the above.

[quote]The results are what drive their impulses, so how could you agree with one and deny its consequence?[/quote]
How can I? Oh, it's very simple. When someone throws a basketball at me accidentally, I don't start chasing them around. Why? Because they "didn't mean it". If you treat that action the same way as someone throwing a basketball in your face with the intent to hurt you, then you must have big issues with forgiving people.



[quote]Says who? Besides you, I mean.[/quote]
[b]Utilitarianism[/b] is an ethical theory holding that the proper course of action is the one that maximizes overall "happiness", by whatever means necessary.

Greatest happiness principle by Bentham: http://www.econlib.o...am/bnthPML.html


Happiness is not only something that can't be measured throughout time, but it is something that can't be measured in general, end of story.

[quote]A system which appeals to the best objectivity possible is distinctly different from one which promises the impossible. [/quote]
Exactly.

[quote]To make the perfect the enemy of the good is a ridiculous exaggeration.[/quote]
Except that that is not what I am saying. At all. It's not good at all if you don't give a cr*p about whatever happens after a few years. That's downright terrible and insensitive.


[quote]I might as well claim that, since you are a morally imperfect being, your moral arguments are thus disproven and invalid.[/quote]
Yeah, you do that. Follow a standard logical pattern in order to do so, rather than stuffing impressive statements into a couple of sentences, while you are at it. Humans aren't perfect beings, but that doesn't mean that the ideas they hold but don't always execute are flawed or invalid.


[quote]The nature and emphasis of subjectivity is shifted, however. In general it is shifted to what is more commonly agreed upon: that something hurts someone, or something helps someone.[/quote]Goodie. Now we can finally agree that what you have been defending all along was pragmatic normative ethics, rather than consequentialism.


[quote]And yet I'm not applying it everywhere. I'm applying it here, where it belongs.[/quote]
No, you are not. An ethical code is supposed to be followed as much as possible in all of your actions.

[quote]It would be the economists who decided whether a war ends a recession or not. Not the moralists. You're appealing to the wrong authority... and making a poor showing at it, since your implication that if the war did end the recession, utilitarians would have to reward the instigators ignores the non-monetary costs of the war.[/quote]
Economists would do little more than agree or disagree on whether the war was overall good for the economy or not.
Psychologists would argue about how much the economy affects one's well-being, and philosophers would jump in there and claim that "the monetary and psychological costs of the war are less important than the monetary and psychological gains of the war".

Edited by Phaedon, 13 November 2011 - 10:57.


#273
Phaedon

Phaedon
  • Members
  • 8 617 messages

Arkitekt wrote...

And I agree that Paragon is extremely "Deontological" ... although all ethics are "consequentialist", since they all try to build the best possible world... (obvious innit?)

That's debatable. It's a very bad idea to attach moralities to roleplayable characters. My Full Paragon Shepard is not a deontologist.

Arkitekt wrote...
Because you are building a giant strawman the size of Jupiter.

"Hey, let's imagine that this philosophy that I don't like reaches a really barbaric conclusion.

Therefore, it's a stupid philosophy! QED"

Except that it didn't, it hasn't and never will.

So you are arguing straw men. Learn to argue, and don't pretend that the arguments you are fighting are caricatures of some lame idea going around inside your own mind. Because they aren't.


Arkitekt wrote...

And I agree that Paragon is extremely "Deontological" ... although all ethics are "consequentialist", since they all try to build the best possible world... (obvious innit?)


Yeah, it seems to me that the problem here is that you misunderstood the definition of consequentialism, and got offended by any attack on it.
Do you know what consequentialism actually means?

Here's what Wikipedia has to say:

[b]Consequentialism[/b] is the class of normative ethical theories holding that the consequences of one's conduct are the ultimate basis for any judgment about the rightness of that conduct. Thus, from a consequentialist standpoint, a morally right act (or omission) is one that will produce a good outcome, or consequence.

So clearly, if one action, no matter how terrible it is, can have an overall good effect, the action is a-okay. The people you are defending are either not consequentialists, or, if you want to consider yourself a somewhat decent person, you shouldn't be defending them and their philosophy as a whole.

And no, not all philosophies are consequentialist. The term that you are looking for is teleological, and even then you are wrong. Yes, everyone wants to build a better world (τέλος), but not everyone will do whatever it takes to get there. There is, after all, such a thing as self-imposed restrictions. Some people actually care less about getting to the τέλος than about getting there in the right way.

#274
Josh123914

Josh123914
  • Members
  • 245 messages

GodWood wrote...

Josh123914 wrote...
Oh yeah? Well, by your logic everybody could be a racist.

Why yes. Unless of course they choose not to hate all members of a race.

It's not that hard a concept to grasp.

I wasn't serious or anything, I was just quoting a skit done by Charlie Brooker <_<

#275
sponge56

sponge56
  • Members
  • 481 messages

didymos1120 wrote...


Um, no, what C42's saying is that some people perceive a given moral issue as simple and straightforward, while others look at the same issue and see no clear answers. It's nothing to do with whether or not they agree on the definition of "gray" in the moral sense.


Yep, you're right, my mistake, didn't read it properly.