Moral Dilemmas: Yea or Nay?


657 replies to this topic

#501
Laughing_Man
  • Members
  • 3 664 messages
*snip*

 

I didn't know of Nick Bostrom, but from what I found on YouTube he is just saying what others are saying about the topic of AI.

 

Stephen Hawking is also rather famous in this regard (both in general and on this particular subject).

 

I have yet to see a coherent, scientifically supported argument for the creation of AI that includes more than just positive thinking; I suppose that time will tell.

(if you are familiar with something like this, I would be interested in reading it)



#502
UniformGreyColor
  • Members
  • 1 455 messages

Either way, I don't let the question of determinism vs free will affect the choices I make.

 

Well... that's because you can't; it's impossible. The only way you can let a belief in free will or determinism dictate what decisions you make is if you construct your whole belief system around one or the other, and I'd argue that at the least that is unhealthy and at worst it is crazy and ridiculous... and even that is only possible if you believe in free will. Still, as a determinist, I don't believe I do anything without some influence outside of myself having an effect on my decisions. The difference between free will and determinism is the difference between believing you have a choice in everything or in nothing. I happen to believe I don't have a choice in anything I do. What I will do, I will do, and the only thing that changes it is whatever is influencing my behaviour one way or another. If you believe in evolution, there is no room for free will; there just isn't.



#503
AlanC9
  • Members
  • 35 635 messages

The geth look terrible in this regard, worse than animals even, because they have absolutely no cultural achievement and aspire to nothing beyond the meaningless goal of exponentially increasing their own processing power to no end purpose beyond having processing power itself.


Do we actually know what geth talk about when they're not talking to Shepard?

Conversely, the valuing of sapience itself by definition ascribes greater moral value to more "intelligent" creatures, including those within the same species. Personally, I don't see why John using his 150 I.Q. to veg on welfare, sit in his basement and play videogames is more deserving of moral rights than Bob with his 90 I.Q. doing construction work 60 hours a week (quite the opposite, considering Bob's actions are contributing to a greater aggregate maximization of happiness). Hypothetically, such an ideology would legitimize slavery/subjugation of any "less intelligent" species or group (indeed, this view was used during the imperialist era of human history via White Man's Burden). One must be careful not to ascribe it any value it is not due, other than via the varied types of pleasures and suffering it makes possible.

 
So, you're going full Peter Singer here? Haven't read Animal Liberation myself; it's either out of laziness or because I'm scared I'd have to go vegan if I did.
  • Sylvius the Mad likes this

#504
Giantdeathrobot
  • Members
  • 2 942 messages

If Legion donating to a charity is your proof of its good intentions, emotions and sapience, then you should take a closer look at some people doing the same here. For some it's a way to generate good press; they don't care about the charity, they care about their image.

 

Legion doesn't broadcast his purchase at all. You find out about it by literally looking through his credit account and video game backlog in the headquarters of one of the most knowledgeable beings in the galaxy. How that is equivalent to some multi-billionaire giving away X% of his fortune so he can look good on camera at lunch hour, I don't know.



#505
MrFob
  • Members
  • 5 413 messages

That's not sentience, at all. It's a couple of poorly constructed false equivalences. The degree to which EDI's and the geth's "preferences" are even their own is nonexistent. They're programmed directives, nothing more. They lack the sensory hardware necessary to achieve consciousness.

What do you base this argument on? If that were the case, they would be classified within the ME universe as VIs, not AIs. Being an AI is by definition the result of a dynamic, self-adapting system in the ME universe. Besides, how are our preferences and pleasures not at least partially hard-coded into our cognitive systems? I am saying you are yet again constructing a false distinction.
Oh, and by the way, EDI as well as the geth of course have sensory hardware. I don't know how anyone could deny that.
 

Not really. Their "bodies" are hardware tools, not fundamentally different from the relationship a human has with their hammer or wrench.

Their bodies are adaptable and dynamic. So are ours. So what do you base the difference on specifically?
EDIT: Ah, sorry, I think I misunderstood you there. Well, yes, for individual platforms of the geth, this might be true. Still, the program needs to run on hardware that needs to be maintained, so they do have a connection to the real world (and even if they did not, that wouldn't really change much). Sure, their relation to their hardware may be different from ours to our bodies (I actually acknowledged this already in that last post), but that doesn't preclude them from being sentient or sapient.
In EDI's case, for example (who, at least before the weirdness in the Citadel DLC, couldn't exist without her computer core/blue box on the Normandy), the ship was her body (she even states this), so I don't really see that much distinction there.
 

The non-existence of the flying spaghetti monster is also not provable; that doesn't mean the concept isn't completely idiotic according to all modern science and not worthy of any serious consideration. Unless you can come up with another structural means of attaining consciousness, the idea of toaster personhood will remain in the realm of fantasy magic akin to Tolkien's sentient trees.

You are being hyperbolic, but just to indulge you: if the ME universe had a giant spaghetti monster, we could discuss the moral implications of it (in the hypothetical context of the ME universe, of course). However, it does not; it has AIs, which is what we are discussing. The point is, we are discussing the implications of a hypothetical scenario as described in a work of science fiction. If you want to counter by saying "minds without bodies can't have their own motivations", then I can ask "how do you know?", especially if the discussed work stipulates that they do.
 

An irrelevant point because the type of software code structure used has little to do with sentience and consciousness. If there is no necessary interaction between the brain and the body, there is no consciousness.

No, it has everything to do with it. Our brains have the capability to adapt and function in a way that we perceive as sentience mainly because of their incredible potential for neuronal plasticity. Do you think it just pops into existence? Self-adaptive code (probably something like a software-simulated neural network) would therefore be an absolute requirement. It would be horribly inefficient (which may not be an issue given powerful enough hardware) but not beyond possibility. I am sorry if you cannot imagine the concept, but almost all research in the area indicates the theoretical possibility. I recommend this book for further reading (ha, had no idea this was online).
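(To make "self-adaptive code" a little more concrete, here is a toy sketch in Python. It is purely illustrative and not from the game or from any particular research project; every number and name in it is made up. The point is only that the code lays down a framework and a learning rule, while the behaviour itself, the XOR function in this example, is never written in anywhere and has to emerge from the weights adapting to experience.)

import numpy as np

# Purely illustrative sketch: a tiny software-simulated neural network.
# Nothing below encodes the XOR function itself; the network has to "grow"
# it by adjusting its own weights in response to its errors.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Random starting weights and biases (2 inputs, 8 hidden units, 1 output)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

for step in range(20000):
    # Forward pass: compute the network's current guesses
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight to reduce the current error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

# With this toy setup the outputs usually end up close to [[0], [1], [1], [0]];
# a different random seed may need more steps, which is part of the point:
# the result is learned, not coded.
print(np.round(out, 2))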
 

The ME universe actually makes hardly any non-arbitrary distinction between the two. It frequently refers to the geth as both, for instance.

As I said, it's true that the writing is sometimes inconsistent. However, I have never encountered the geth being classified as VIs; please show me a quote.
 

Tali's description of the geth is technobabble nonsense, and she conflates sapience and sentience as the same thing every other word. Trying to make sense of any of that garbage is about as meaningful as discussing the scientific plausibility of Asimov's positronic brain.

Technobabble doesn't necessarily mean nonsense; if done well, it can actually make sense. In Tali's case (and only in ME1), I'd say it's half and half. It doesn't entirely make sense, but she admits that she is oversimplifying, and with a few very sensible assumptions I think one can make it sensible (but that's another topic). In any case, as I said, the specifics of ME's shortcomings in consistency do not really impact the broader argument as far as I can see.
 

I have yet to see a coherent, scientifically supported argument for the creation of AI that includes more than just positive thinking; I suppose that time will tell.
(if you are familiar with something like this, I would be interested in reading it)

I completely agree that creating a true AI could be very dangerous, and I am not advocating doing it just for the sake of doing it. However, I think a lot of people have a very flawed understanding of how the creation of an actual AI would probably take place. It seems to me that most people - the ME authors included - think that it will just pop up: a run-of-the-mill computer one day and a superintelligence the next. This is very likely not going to be the case (check out the link above). A true AI would not be "coded". The code would just lay down the framework, the potential if you will, to learn and build onto that. Everything else, an AI would still have to learn, just like a human would. Early versions would probably be severely limited by both software and hardware. This is not something that will happen overnight, and if we ever get that far, there will probably be a lot of public discussion of what sentience/sapience is, where we draw the threshold for applying our different moral standards, etc. I predict it would/will be a very interesting time.
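(One more made-up illustration of "the code only lays down the framework", this time in the smallest form I can think of: a three-armed bandit learner in Python. Which option actually pays off is hidden from the program and appears nowhere in its decision logic; it has to be discovered through trial and error. A real AI would obviously be incomparably more complex, but the framework-plus-learning principle is the same.)

import random

random.seed(1)

true_payoff = [0.2, 0.5, 0.8]   # hidden from the learner; made-up numbers
estimates = [0.0, 0.0, 0.0]     # what it has learned so far
counts = [0, 0, 0]

for trial in range(5000):
    # Mostly exploit what has been learned, occasionally explore at random
    if random.random() < 0.1:
        arm = random.randrange(3)
    else:
        arm = estimates.index(max(estimates))

    # The world responds; the learner only ever sees this reward
    reward = 1.0 if random.random() < true_payoff[arm] else 0.0

    # Incremental update: running average of rewards observed for this arm
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

# The estimates should end up roughly near 0.2, 0.5 and 0.8 - learned, not coded
print([round(e, 2) for e in estimates])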

But we were not really talking about that before, we were talking in the hypothetical context of the ME universe where AI are a fact of life already.



#506
Sylvius the Mad
  • Members
  • 24 108 messages

I don't think the characteristic of "life" really has any relevance to morality. I'm not going to start arguing for plant or bacterial rights, for instance.

Sentience? Because hedonistic pleasure and suffering are measurable goods in the physical world that are desirable to every sentient to some degree. All other goods are instruments to this, valuable only in their service to achieving the maximization of such. The capacity for suffering and enjoying things is a prerequisite for having any legitimate moral interests at all, a condition that must be satisfied before we can speak of moral evaluations in any meaningful way. It would be nonsense to say that it was not in the interests of an inanimate object like a stone to be kicked along the road. A stone does not have interests because it cannot suffer. Nothing that we can do to it could possibly make any difference to its welfare. If a being is not sentient, there is nothing to be taken into account, no morality to consider except for our own.

By this reasoning, the Tranquil in Dragon Age have no moral value, because they cannot suffer.

I also don't think you've provided sufficiently strong evidence to persuade someone that sentience has value. It's more like you're presupposing morality, and then finding the thing on which it is most likely to be based.

Unless you're saying morality is a construct, it can be based on whatever we define it to be based on.

Even if I agreed with you here (I don't; single geth programs are independent entities and grossly lack sapience or even animal intelligence), sapience is not only quite subjective (there's no real operative definition of it in real science), but entirely arbitrary when detached from sentience and thus the enjoyment of higher pleasures it makes possible (the arts, aesthetics, sporting competitions, etc.).

Is sentience less arbitrary?

Also, the Geth collective is clearly sapient. Unlike humans, the Geth are a gestalt, so it doesn't make sense to evaluate them only individually (which you should absolutely do with humans).

The geth look terrible in this regard, worse than animals even, because they have absolutely no cultural achievement and aspire to nothing beyond the meaningless goal of exponentially increasing their own processing power to no end purpose beyond having processing power itself.

If that's the goal they value, on what basis can we challenge that? They could just as easily challenge the value we assign to art.

Conversely, the valuing of sapience itself by definition ascribes greater moral value to more "intelligent" creatures, including those within the same species. Personally, I don't see why John using his 150 I.Q. to veg on welfare, sit in his basement and play videogames is more deserving of moral rights than Bob with his 90 I.Q. doing construction work 60 hours a week (quite the opposite, considering Bob's actions are contributing to a greater aggregate maximization of happiness). Hypothetically, such an ideology would legitimize slavery/subjugation of any "less intelligent" species or group (indeed, this view was used during the imperialist era of human history via White Man's Burden). One must be careful not to ascribe it any value it is not due, other than via the varied types of pleasures and suffering it makes possible.

That's only true if there's a linear relationship between sapience and value. Instead, it could be some threshold, below which the creature lacks moral value but above which it has some fixed amount.

But even if it were true, would that be bad? Can you justify that criticism without presupposing the value of sentience?

And upon what is the legality of such a law based?

Once the law is enacted, we have no reason to care what its justification is. The justification might inform the details of the law, but the force of the law lies entirely within those details. The basis for the law is wholly irrelevant.

To establish a framework for evaluating actions and promoting acceptable behavior, aimed at improving the conditions of existence for all existing sentient beings, as much as is possible, through any means possible.

What reason would an individual have to abide by those rules?
  • Barquiel and Giantdeathrobot like this

#507
Giantdeathrobot
  • Members
  • 2 942 messages

By this reasoning, the Tranquil in Dragon Age have no moral value, because they cannot suffer.

 

 

Well said. I don't understand the idea that beings have to suffer in order for morality to be assigned to them.

 

Pain is a survival mechanism. A necessary component of humans being made of flesh, so that we can know when our body's integrity is in danger. Otherwise you could ignore a broken foot, bite through your tongue, etc.

 

An advanced synthetic would have no need of pain. Presumably, they are able to determine whenever they have suffered any damage thanks to their internal sensors and systems. Sure, Legion can walk around with a hole in his chest that would cripple/kill any organic thanks to that, but this doesn't mean it's totally OK to exterminate them. Dismissing a being as unworthy of life because it doesn't work exactly like a human seems ludicrous in a setting with flying jellyfish aliens and walking turtle-men with redundant organs that live for more than a millennium.


  • Laughing_Man likes this

#508
AlanC9
  • Members
  • 35 635 messages

Well said. I don't understand the idea that beings have to suffer in order for morality to be assigned to them.

Well, it works fine if we assume that morality is merely an extension of will. When you want certain beings to be morally valuable while others are not, you cast around for a rule that draws the line where you want it drawn.

This is a fairly toxic concept, but I'm not sure that it lacks descriptive power. (I've always suspected that Thrasymachus had it right.)

#509
UpUpAway
  • Members
  • 1 202 messages

I agree that my definition of self-defense is probably broader than yours.

 

 

 

Interesting, however, that your having a broader definition of "self-defense" than I do arbitrarily implies (in your mind) that I don't have a sense of the moral value of the concept of coming to the defense of others.  All I've said is that the scenario you set out (i.e. striking first) does not fall within the generally accepted legal definition of "self-defense" in most cases.  While you tout that laws and morality are separate concepts, you don't hesitate to treat my insertion of that "legal definition" as indicative of my personal moral standing, not only on "self-defense" but also on "defense of others."

 

Like it or lump it - societies use moral judgments to frame the laws they create... and individuals in society use the law to frame their personal moral compasses.



#510
Giantdeathrobot
  • Members
  • 2 942 messages

Well, it works fine if we assume that morality is merely an extension of will. When you want certain beings to be morally valuable while others are not, you cast around for a rule that draws the line where you want it drawn.

This is a fairly toxic concept, but I'm not sure that it lacks descriptive power. (I've always suspected that Thrasymachus had it right.)

 

I have the impression that the Geth not being sentient (and thus being OK to exterminate) is the premise, not the logical conclusion, of that particular moral argument, both in and out of universe. With this premise in mind, it is only then that a variety of arguments (that I personally find of very dubious value given what we learn and see in-game) crop up to justify this belief. A lot of it boils down to "they don't work the same way as us and/or don't see things the same way" wrapped up in a nicer-looking package, which, again, seems like a ludicrous proposition in a setting filled to the brim with bizarre aliens of various flavors.

 

The distinctions always seem incredibly arbitrary to me. The Geth demonstrably design, think, worship, and arguably feel at least some amount of emotion, but because they don't have pain receptors they are totally outside moral calculations all of a sudden? I see no logic in this.



#511
UniformGreyColor
  • Members
  • 1 455 messages

Well, it works fine if we assume that morality is merely an extension of will. When you want certain beings to be morally valuable while others are not, you cast around for a rule that draws the line where you want it drawn.

This is a fairly toxic concept, but I'm not sure that it lacks descriptive power. (I've always suspected that Thrasymachus had it right.)

 

It is tempting to give a really "out there" view of my beliefs, but I will pass, deferring to my better judgement.



#512
Nicholas_
  • Members
  • 100 messages

And then...?

 



#513
In Exile
  • Members
  • 28 738 messages




No disagreement, but you clearly didn't read the post. I never said anything about souls having relevance. If you want my opinion, metaphysical concepts of consciousness like "souls" likely don't exist (as if you couldn't tell by my usage of the term "garbage" for the concept).

Contrary to the tired cliché that I won't repeat, unless you are just aimlessly ruminating on tautologies in a philosophy 101 class or a hotbox session with your stoner friends (such as the pointless discourse above based upon entirely arbitrary deontological definitions of "inaction"), legislation and morality absolutely are connected. You see, here in reality, law is a system for governance of the behaviors of people, created by said people, who adhere to certain systems of morality which define the acceptability of said behaviors. Whether or not you wish they were connected is irrelevant.


Okay, consciousness is unprovable. How do you justify treating any non-human sentient life as entitled to rights?

#514
In Exile
  • Members
  • 28 738 messages





High EMS Destroy has no moral conundrums or meaningful downsides. Lol @ synthetic "life", and Leviathan can't enslave anything. The entire galaxy is aware of their existence provided you go to 2181 Despoina, and could smash an asteroid into their stupid planet before they tried anything. I don't see their artifacts in the background of the ending slides, so I don't know how one could come to this conclusion.

Satisfied?

Indeed, and it also works the other way, i.e. legislation reinforces acceptable morality.

Ignoring our own universe, where this question is elementary, the assumption that ME synthetics even meet the definition of "life" in their own universe is categorically a false one (which, hilariously and despite obvious writer intent to the contrary, EDI and Legion/VI even inform us of themselves with the Reaper code plot and synthesis "I am alive" garbage). Anyway, life itself doesn't really carry much value independent of sentience, which is not a characteristic that any ME synthetic has displayed (indeed, all have consistently displayed the opposite).

Insofar as you wish to compare the two with such a false equivalence, the given examples of ME universe synthetic "life" cannot experience pleasure or suffering, and therefore such machines are outside of the moral calculus themselves. They only affect it based upon the consequences of their existence upon sentients, which, when the former are allowed to exist in an uncontrolled state (according to all in-universe evidence), leads to massive levels of unnecessary suffering. That alone would be enough to grant them less value than the sentient lifeforms. Frankly, considering their entire purpose is merely to improve sentient existence rather than being necessary for enabling it, I'd put them below crop plants and the symbiotic bacteria in my gut in terms of "life" that is of moral importance, were I to consider them worthy of such a term.


Does it not? Go murder someone and inform me of its lack of "material" moral effect.

Morality is contingent upon material consequences. Any system that doesn't recognize this is inherently flawed and not worth considering.


This definition is stupid. Animals feel pleasure and suffering. On what moral basis do you justify denying them full and equivalent rights? How can we deny the pleasure- and pain-experiencing dog the right to vote but not the krogan? This definition is beyond worthless in the real world.

#515
Laughing_Man
  • Members
  • 3 664 messages

Interesting, however, that your having a broader definition of "self-defense" than I do arbitrarily implies (in your mind) that I don't have a sense of the moral value of the concept of coming to the defense of others.  All I've said is that the scenario you set out (i.e. striking first) does not fall within the generally accepted legal definition of "self-defense" in most cases.  While you tout that laws and morality are separate concepts, you don't hesitate to treat my insertion of that "legal definition" as indicative of my personal moral standing, not only on "self-defense" but also on "defense of others."

 

Like it or lump it - societies use moral judgments to frame the laws they create... and individuals in society use the law to frame their personal moral compasses.

 

I didn't really mean to get into a personal contest of my morality Vs. your morality.

 

I'm content with my moral system, it might not be perfect in the eyes of others, but I have the final say on my personal views, just as you have on yours.

 

A moral system shouldn't be something that prevents you from defending yourself against clear aggression and danger, or prevents you from interrupting predatory behavior by criminals; the precise definition of, and border between, self-defense Vs. justified first strike is of less importance to me than the overall "spirit of the law".

 

So call it whatever you want, self-defense or something else, but in my eyes the right to defend yourself from certain catastrophic threats (i.e. extreme circumstances) exceeds the normal allowances of everyday morality, and is an inextricable part of the idea of self-defense.



#516
UpUpAway
  • Members
  • 1 202 messages

I didn't really mean to get into a personal contest of my morality Vs. your morality.

 

I'm content with my moral system, it might not be perfect in the eyes of others, but I have the final say on my personal views, just as you have on yours.

 

A moral system shouldn't be something that prevents you from defending yourself against clear aggression and danger, or prevents you from interrupting predatory behavior by criminals; the precise definition of, and border between, self-defense Vs. justified first strike is of less importance to me than the overall "spirit of the law".

 

So call it whatever you want, self-defense or something else, but in my eyes the right to defend yourself from certain catastrophic threats (i.e. extreme circumstances) exceeds the normal allowances of everyday morality, and is an inextricable part of the idea of self-defense.

 

OK, we're making progress here.  The thing is that for a nation to instigate a "first strike" against another nation, they not only have to generally have some sort of consensus as to the nature of the threat within their own nation (depending of course on the nature of the nation's government), but in many respects they also answer to international law - which is based on a rather unbalanced moral stance and, embedded in that, the ability to discount certain points of view as being less than human.  As I said, I'm not trying to even remotely become involved in this ongoing debate about sentience, AI rights, etc. (I AM finding it a very interesting read.)  What I was inserting was the idea that 1) legalities AND moralities do matter when nations make such judgments and 2) that, within the ME trilogy, Bioware probably had to consider not only the idea of AI sentience, but also that the situation could be interpreted by some players as being more representative of people treating other people as being less than people.  If one looks at the Geth/Quarian issue through that sort of symbolism... the whole moral dilemma towards resolving it can possibly shift quite a bit.



#517
von uber
  • Members
  • 5 520 messages

Just out of interest, can someone explain to me why the geth have to man their fighters with mobile platforms? Or even have space for mobile platforms to get in?


  • Sylvius the Mad and Laughing_Man like this

#518
Iakus
  • Members
  • 30 297 messages

Just out of interest, can someone explain to me why the geth have to man their fighters with mobile platforms? Or even have space for mobile platforms to get in?

I could see needing space for mobile platforms for repair purposes.

 

But to pilot a ship?  Nope, can't think of a reason.


  • Sylvius the Mad likes this

#519
sH0tgUn jUliA
  • Members
  • 16 812 messages

Why would having the Normandy blow a terrorist's ship out of the sky be an ultimate "Renegade" action?  It seems to me it would be the ultimate Paragon one.  You just saved the hostages AND you absolutely prevented, not only Balak himself, but also the entire terrorist cell from hurting anyone else.  Putting a bullet in Balak's head only ends Balak... and the hostages.

 

Oh, I get it... the bigger the explosion the more "Renegade" the action is.

 

For the sake of debate... let's toughen things up a bit.  Situation is that you know Balak is likely to lead you to his boss... an even bigger terrorist.  Do you let him go or let him kill the hostages just so you can have the pleasure of putting a bullet in his head... making it more likely that his boss will kill more people before you can catch him?

 

The situation is presented two ways:

 

First scenario.  The situation is made clear that there is no way you're going to catch the boss without actually letting Balak go.

Second scenario.  The situation offers an option of killing Balak and then making a deal with one of his henchmen to find the boss.

 

WHICH is really a dilemma and which is almost a no-brainer?

 

You don't understand BioWare's renegade/paragon system. Under their system, letting him go then blowing his ship out of the sky, you committed the ultimate renegade: you double-crossed him. You gave him your word that you would let him go free, then you got your way (the hostages), then turned around and killed him anyway.

 

Unfortunately, in the scenarios offered, the one pulling Balak's strings is probably safe somewhere on Khar'shan or one of the Batarian colony worlds. Making a strike there is going to involve violating Batarian space. The Alliance cannot be implicated. The Secretary will disavow any knowledge of your actions. Chances of getting close will be small but a team of contracted sociopaths willing to get the job done no matter the collateral damage might. 

 

The problem with the scenario is that you cannot have certain knowledge at the time that Balak will meet face to face with the one pulling the strings. That's bad writing if you do. Balak is not that high up the food chain. He also just failed in his mission which decreased that likelihood. 

 

So if you have this certain knowledge that letting Balak go does lead you to his boss, the cost for letting Balak go should be high. Say he blows up one of the high population domes on Caleston, and kills a couple hundred thousand people. After all he has to move up the food chain before he gets to meet his boss.

 

The probability of one of his henchmen leading you to the boss is even less.

 

Given that the "ultimate renegade option" I presented isn't available, I still put a bullet in his head.



#520
UpUpAway
  • Members
  • 1 202 messages

You don't understand BioWare's renegade/paragon system. Under their system, letting him go then blowing his ship out of the sky, you committed the ultimate renegade: you double-crossed him. You gave him your word that you would let him go free, then you got your way (the hostages), then turned around and killed him anyway.

 

Unfortunately, in the scenarios offered, the one pulling Balak's strings is probably safe somewhere on Khar'shan or one of the Batarian colony worlds. Making a strike there is going to involve violating Batarian space. The Alliance cannot be implicated. The Secretary will disavow any knowledge of your actions. Chances of getting close will be small but a team of contracted sociopaths willing to get the job done no matter the collateral damage might. 

 

The problem with the scenario is that you cannot have certain knowledge at the time that Balak will meet face to face with the one pulling the strings. That's bad writing if you do. Balak is not that high up the food chain. He also just failed in his mission which decreased that likelihood. 

 

So if you have this certain knowledge that letting Balak go does lead you to his boss, the cost for letting Balak go should be high. Say he blows up one of the high population domes on Caleston, and kills a couple hundred thousand people. After all he has to move up the food chain before he gets to meet his boss.

 

The probability of one of his henchmen leading you to the boss is even less.

 

Given that the "ultimate renegade option" I presented isn't available, I still put a bullet in his head.

 

I'm not saying that I don't frequently put a bullet in his head.  I'm not saying that ultimate paragon is the way Bioware scored it... they obviously didn't.  I am saying that, in general terms, society tends to think of successfully capturing the bad guy while also saving any hostages as a more paragon act than allowing the hostages to be killed in order to capture the bad guy.  There are events in human history that uphold that notion... the "better" heroes were able to do both, and heroes who were not able to similarly spare hostages were decried in the media.  Similarly, the notion of letting a bad guy go temporarily to work up the food chain is a dilemma that law enforcement faces all the time.  I'll leave you all to continue arguing whether this is a "good" stance for a society to take or not.

 

Again, I AM NOT TELLING ANYONE HERE HOW THEY SHOULD RESOLVE THESE MORAL QUESTIONS.  The only person one should have to justify their choices in this game to is themselves.  On that note, I'm not going to justify any of my choices... and truth be told, I often flip them about depending on how I'm structuring my Shep's character during that playthrough... and he/she doesn't even always make those decisions based on anything more important than how they might affect his immediate crew (i.e. sometimes he thinks with his crotch).

 

That doesn't negate the fact that, in structuring the Balak dilemma, Bioware gave an "out" to the player that enables the player to believe they are selecting "both" options at the same time.  When that option of selecting "both" exists... a well constructed dilemma does not exist (IMO).  A similar issue exists with the Geth/Quarian war dilemma (I'm not talking about the synthesis ending here, but the war)... you can solve the war for both sides (i.e. peace), so for many players the "dilemma" of choosing AI vs. organics is just removed.  In that case, however, the dilemma can be reconstructed loosely based on "war or peace" scenarios.  There IS indeed a hint of that in the Balak issue, but it's still really not that well developed from a "literary" perspective.  The genophage issue is better constructed in that Shep either cures the genophage or he/she doesn't.  The fact that he/she can attempt to lie to Wrex and the consequences of that lie don't alter the fact that he/she MUST decide on one of the options.

 

The original question of this thread was really whether we would like to see more of these sorts of issues in ME:A.  My answer was yeah, I would... even if some of them are not constructed perfectly (or even very well).  I still enjoy them.



#521
Xen
  • Members
  • 647 messages

Do we actually know what geth talk about when they're not talking to Shepard?

 
So, you're going full Peter Singer here? Haven't read Animal Liberation myself; it's either out of laziness or because I'm scared I'd have to go vegan if I did.

They don't "talk" at all, apart from Legion (and post Reaper code when they all magically grow vocal synthesizers because now they've for some reason the need to exchange information via audio) and I've no desire to create hamfisted false analogy to pointlessly compare what they do to sapient organic communication or artistic expression.
 

 

So, you're going full Peter Singer here? Haven't read Animal Liberation myself; it's either out of laziness or because I'm scared I'd have to go vegan if I did.

Some of my ideology is ripped from Singer, including a belief in the inherent value of all sentients, though I don't go full vegan simply because my individual refusal to eat certain things will have no material effect upon the overall state of human-induced animal suffering, but would impose needless requirements making the attainment of sustenance more difficult for myself in the interim.
 

 

What do you base this argument on? If that were the case, they would be classified within the ME universe as VIs, not AIs. Being an AI is by definition the result of a dynamic, self-adapting system in the ME universe. Besides, how are our preferences and pleasures not at least partially hard-coded into our cognitive systems? I am saying you are yet again constructing a false distinction.
Oh, and by the way, EDI as well as the geth of course have sensory hardware. I don't know how anyone could deny that.

How about the fact that the geth's outputs can universally and predictably be controlled by something as simple as introducing a rounding error in basic runtimes (the Heretics and their virus)? Or how about this?
https://www.youtube....N_jbvso#t=5m58s
^yeah, conscious, sentient beings no doubt. There's no question that they're toasters even with their magic Reaper code, let alone without. Input a command for anything, no matter how asinine, that bypasses their version of Malwarebytes, and they've no choice but to follow it. 

In EDI's case, you can do the same thing with "hardware blocks" or "shackles" (get it, it's a totally subtle metaphor for slavery! You're obviously a mean racist if you disagree! Now look how cute it and Joker are together!).

Of course their hardware has sensors, in the same way my car has various temperature, fuel and emissions sensors that are all managed by a CPU to maintain optimal performance in varied external conditions. That doesn't mean it's sentient or conscious. What physical structural components do they have that are analogous to a brain/nervous system that would make such a thing possible, and what purpose did the quarians/humans have for including such a useless feature on a machine designed for hazardous, dangerous and backbreaking labour that a sentient couldn't tolerate, number crunching or being a disposable piece of military hardware?

The writers elected not to tell us, or even make up some technobabble like "positronic brain", ergo I will continue to laugh at the idea that ME synthetics are somehow magically growing sentience out of thin air.

 

Their bodies are adaptable and dynamic. So are ours. So what do you base the difference on specifically?

EDIT: Ah, sorry, I think I misunderstood you there. Well, yes, for individual platforms of the geth, this might be true. Still, the program needs to run on hardware that needs to be maintained, so they do have a connection to the real world (and even if they did not, that wouldn't really change much). Sure, their relation to their hardware may be different from ours to our bodies (I actually acknowledged this already in that last post), but that doesn't preclude them from being sentient or sapient.
In EDI's case, for example (who, at least before the weirdness in the Citadel DLC, couldn't exist without her computer core/blue box on the Normandy), the ship was her body (she even states this), so I don't really see that much distinction there.

No they aren't. Their "bodies" are inanimate objects with practically no connection to the software controlling them. Transferring the software from a server/bluebox into a platform doesn't fundamentally change it or give it any new capabilities beyond more varied means to interact with the physical world. Meanwhile, try pulling the brain and nervous system out of an organic without killing it. We apparently can't even do this in the MEverse with the Crucible magic, seeing as Shep has to die for the WBE garbage to happen in the Control ending.

Again, it's far more analogous to the relationship between my car's ECU and its mechanical components than to the connection between a sentient being's consciousness and body.
 

You are being hyperbolic, but just to indulge you: if the ME universe had a giant spaghetti monster, we could discuss the moral implications of it (in the hypothetical context of the ME universe, of course). However, it does not; it has AIs, which is what we are discussing. The point is, we are discussing the implications of a hypothetical scenario as described in a work of science fiction. If you want to counter by saying "minds without bodies can't have their own motivations", then I can ask "how do you know?", especially if the discussed work stipulates that they do.

I know because, either explicitly or by implication, that's the consensus of both modern and MEverse science according to every in-universe expert on synthetic technologies (Xen, Archer, Sanders, Shu Qian) except one permutation of Tali (but only because knowing Legion made her experience the feelz), who doesn't even know the difference between sapience and sentience. If you have new information or a new in-universe authority to support your claims, would you be so kind as to share it? Where does the discussed work "stipulate" that they do? Despite obvious writer intent, it seems to in fact stipulate the opposite, with arguments to the contrary entirely based upon Shep's stupid false equivalences (lol teh fearless machine went to teh reaperz cuz it were ascared of teh quarianz!!1one) and nonsensical pathos appeals like the retarded and utterly irrelevant "soul" question.

Until you provide this source, I'm going to operate on the more reasonable assumption that every action taken by a synthetic in the MEverse can be explained by pure programming alone. Being purely digital systems, they don't and can't even come close to defeating the Chinese room. Frankly, I'd be surprised if the vast majority of networked geth in the consensus could even pass the Turing test at any given point in time.

 

No, it has everything to do with it. Our brains have the capability to adapt and function in a way that we perceive as sentience mainly because of their incredible potential for neuronal plasticity. Do you think it just pops into existence? Self-adaptive code (probably something like a software-simulated neural network) would therefore be an absolute requirement. It would be horribly inefficient (which may not be an issue given powerful enough hardware) but not beyond possibility. I am sorry if you cannot imagine the concept, but almost all research in the area indicates the theoretical possibility. I recommend this book for further reading (ha, had no idea this was online).

An ironic question coming from someone attempting to argue what you are, to say the least. No, I haven't read that nor heard of its authors, but there's plenty of transhumanist garbage out there for me to choose from, and I doubt this will change my mind if it is likewise. If I'm looking for actual information on the subject rather than ideologically charged new-age religious nonsense, I'll stay with the works of renowned authors in the field like Norvig and Olshausen.
 

As I said, it's true that the writing is sometimes inconsistent. However, I have never encountered the geth being classified as VIs; please show me a quote.

 

Technobabble doesn't necessarily mean nonsense; if done well, it can actually make sense. In Tali's case (and only in ME1), I'd say it's half and half. It doesn't entirely make sense, but she admits that she is oversimplifying, and with a few very sensible assumptions I think one can make it sensible (but that's another topic). In any case, as I said, the specifics of ME's shortcomings in consistency do not really impact the broader argument as far as I can see.
 

I completely agree that creating a true AI could be very dangerous, and I am not advocating doing it just for the sake of doing it. However, I think a lot of people have a very flawed understanding of how the creation of an actual AI would probably take place. It seems to me that most people - the ME authors included - think that it will just pop up: a run-of-the-mill computer one day and a superintelligence the next. This is very likely not going to be the case (check out the link above). A true AI would not be "coded". The code would just lay down the framework, the potential if you will, to learn and build onto that. Everything else, an AI would still have to learn, just like a human would. Early versions would probably be severely limited by both software and hardware. This is not something that will happen overnight, and if we ever get that far, there will probably be a lot of public discussion of what sentience/sapience is, where we draw the threshold for applying our different moral standards, etc. I predict it would/will be a very interesting time.

They are described as such in Revelation during the narration on the Geth Rebellion. Can't be bothered to find the novel right now for the exact quote.

What Tali constantly does with the geth can be described as oversimplification and false analogy, particularly in ME1 (she gets a bit better in ME2). She often served as the writers' mouthpiece on the subject, except they know even less about it than she does.

Don't patronize me. The Bioware writers did that quite enough by attempting to imply I was a racist because I don't accept the same views on the subject as they do (while hilariously meeting their own definition of "racist" by admitting that the geth and EDI aren't "alive" without the addition of plot magic in the 3rd game, and were by implication mindless toasters beforehand).
 



#522
Laughing_Man
  • Members
  • 3 664 messages

I'm not saying that ultimate paragon is the way Bioware scored it...

 

The way I see it, the ultimate paragon in the eyes of Bioware is appealing to the good in your enemies, "the powah of luv" and all that nonsense.

 

Because the consequences of letting Balak go were having him cooperate with you come ME3, instead of the more realistic scenario of him actually being more successful in his next terror attack (or getting killed trying).

 

It's somewhat similar to the childish concept in Harry Potter, where Good Guys only use disarm and stun spells against Bad Guys that fight to kill.

Or the Naruto concept of using the power of retarded friendship to subvert your enemies.

 

It's not always like this, but they tend to go on forays into rainbows and unicorn land from time to time.


  • Il Divo likes this

#523
UniformGreyColor
  • Members
  • 1 455 messages

You don't understand BioWare's renegade/paragon system. Under their system, letting him go then blowing his ship out of the sky, you committed the ultimate renegade: you double-crossed him. You gave him your word that you would let him go free, then you got your way (the hostages), then turned around and killed him anyway.

 

Unfortunately, in the scenarios offered, the one pulling Balak's strings is probably safe somewhere on Khar'shan or one of the Batarian colony worlds. Making a strike there is going to involve violating Batarian space. The Alliance cannot be implicated. The Secretary will disavow any knowledge of your actions. Chances of getting close will be small but a team of contracted sociopaths willing to get the job done no matter the collateral damage might. 

 

The problem with the scenario is that you cannot have certain knowledge at the time that Balak will meet face to face with the one pulling the strings. That's bad writing if you do. Balak is not that high up the food chain. He also just failed in his mission which decreased that likelihood. 

 

So if you have this certain knowledge that letting Balak go does lead you to his boss, the cost for letting Balak go should be high. Say he blows up one of the high population domes on Caleston, and kills a couple hundred thousand people. After all he has to move up the food chain before he gets to meet his boss.

 

The probability of one of his henchmen leading you to the boss is even less.

 

Given that the "ultimate renegade option" I presented isn't available, I still put a bullet in his head.

 

A bullet in his head. Well put.



#524
UpUpAway
  • Members
  • 1 202 messages

The way I see it, the ultimate paragon in the eyes of Bioware is appealing to the good in your enemies, "the powah of luv" and all that nonsense.

 

Because the consequences of letting Balak go were having him cooperate with you come ME3, instead of the more realistic scenario of him actually being more successful in his next terror attack (or getting killed trying).

 

It's somewhat similar to the childish concept in Harry Potter, where Good Guys only use disarm and stun spells against Bad Guys that fight to kill.

Or the Naruto concept of using the power of retarded friendship to subvert your enemies.

 

It's not always like this, but they tend to go on forays into rainbows and unicorn land from time to time.

IMO, if a player makes any of the choices in the game just to play into Bioware's scoring system, they've already handed over control of all their  choices in the game to Bioware...  and there is absolutely no point in complaining about the endings.  The player then is doing nothing more than reading a book written by Bioware throughout the entire game... and I don't presume to demand that any author change how they decide to end their book.  To expect that Bioware can both score something and express no preferences of their own is, IMO, unrealistic at best.

 

That said, I do retract my earlier statement that Bioware obviously didn't score it that way.  They did score the "catch and save both ultimately" option as being more paragon than just letting Balak go since they awarded 2 additional paragon points for selecting 'Only temporarily' when talking later with Simon Atwell.  Whether that changes your perspective on what is paragon in the eyes of Bioware is up to you.



#525
Iakus
  • Members
  • 30 297 messages

The way I see it, the ultimate paragon in the eyes of Bioware is appealing to the good in your enemies, "the powah of luv" and all that nonsense.

 

Because the consequences of letting Balak go were having him cooperate with you come ME3, instead of the more realistic scenario of him actually being more successful in his next terror attack (or getting killed trying).

 

It's somewhat similar to the childish concept in Harry Potter, where Good Guys only use disarm and stun spells against Bad Guys that fight to kill.

Or the Naruto concept of using the power of retarded friendship to subvert your enemies.

 

It's not always like this, but they tend to go on forays into rainbows and unicorn land from time to time.

Hey, by the end of the series Harry Potter used two of the three Unforgivable Curses.  Harry should have gotten life in Azkaban once the dust settled  :P

 

But at any rate, it's not like even the most goody-goody paladin Shep didn't get to mow down dozens or even hundreds of faceless enemies over the course of the game.  Not just reaper husks or geth platforms, but organics as well: humans, turians, krogan, etc.