No, I have not actually attended a formal Philosophy class
I have a degree in it. It was a really fun experience.
Moral Dilemmas: Yea or Nay?
#451
Posted 03 March 2016 - 08:08
#452
Posted 03 March 2016 - 08:11
I have a degree in it. It was a really fun experience.
I read elsewhere on the forum that you believe in free will. I don't, I'm a determinist. Could you explain your stance?
#453
Posted 03 March 2016 - 08:13
I read elsewhere on the forum that you believe in free will. I don't, I'm a determinist. Could you explain your stance?
I don't believe in free will. I don't think I have any basis to favour one model over the other.
However, if determinism is true, I have no direct control over my behaviour that wasn't caused elsewhere, and then I would find my life uninteresting.
So, I choose to act as if I believe in free will, because that's more fun.
#454
Posted 03 March 2016 - 08:16
I don't believe in free will. I don't think I have any basis to favour one model over the other.
However, if determinism is true, I have no direct control over my behaviour that wasn't caused elsewhere, and then I would find my life uninteresting.
So, I choose to act as if I believe in free will, because that's more fun.
Yes, I can see that point. Living as though you have free will, whether that is true or not, helps morale, I guess.
#455
Posted 03 March 2016 - 08:16
Remember that time when you wrote a long post relevant to the topic and no one commented on it because arguing semantics was more important?
High EMS Destroy has no moral conundrums or meaningful downsides. Lol@ synthetic "life", and Leviathan can't enslave anything. The entire galaxy is aware of their existence provided you go to 2181 Despoina, and could smash an asteroid into their stupid planet before they tried anything. I don't see their artifacts in the background of the ending slides, so I don't know how one could come to this conclusion.
Satisfied?
So they're connected in the sense that morality causes legislation?
Indeed, and it also works the other way, i.e. legislation reinforces acceptable morality.
What's the non-metaphysical-garbage case for valuing organic life over other life, again?
Ignoring our own universe where this question is elementary, the assumption that ME synthetics even meet the definition of "life" in their own universe is categorically a false one (which, hilariously and despite obvious writer intent to the contrary, EDI and Legion/VI even inform us of themselves with the Reaper code plot and synthesis "I am alive" garbage). Anyway, life itself doesn't really ascribe much value independent of sentience, which is not a characteristic that any ME synthetic has displayed (indeed, all have consistently displayed the opposite).
Insofar as you wish to compare the two with such a false equivalence, given examples of ME universe synthetic "life" cannot experience pleasure or suffering, and therefore such machines are outside of the moral calculus themselves. They only affect it based upon the consequences of their existence upon sentients, which when the former are allowed to exist in an uncontrolled state (according to all in-universe evidence) leads to massive levels of unnecessary suffering. That alone would be enough to grant them less value than the sentient lifeforms. Frankly, considering their entire purpose is merely to improve sentient existence rather than being necessary for enabling it, I'd put them below crop plants and the symbiotic bacteria in my gut in terms of "life" that is of moral importance were I to consider them worthy of such a term.
Morality is wholly irrelevant to law. Law is a system to direct behaviour. The supposed moral basis for it has no material effect.
Does it not? Go murder someone and inform me of its lack of "material" moral effect.
Morality is contingent upon material consequences. Any system that doesn't recognize this is inherently flawed and not worth considering.
#456
Posted 03 March 2016 - 08:29
Ignoring our own universe where this question is elementary, the assumption that ME synthetics even meet the definition of "life" in their own universe is categorically a false one (which, hilariously and despite obvious writer intent, EDI and Legion/VI even inform us of themselves with the Reaper code plot and synthesis "I am alive" garbage). Anyway, life itself doesn't really ascribe much value independent of sentience, which is not a characteristic that any ME synthetic has displayed (indeed, all have consistently displayed the opposite).
I agree that EDI's "Now I am alive" phrases all over ME3 are idiotic. That should have never happened after Joker unshackled her in ME2.
Insofar as you wish to compare the two with such a false equivalence, given examples of ME universe synthetic "life" cannot experience pleasure or suffering, and therefore such machines are outside of the moral calculus themselves. They only affect it based upon the consequences of their existence upon sentients, which when the former are allowed to exist in an uncontrolled state (according to all in-universe evidence) leads to massive levels of unnecessary suffering. That alone would be enough to grant them less value than the sentient lifeforms. Frankly, considering their entire purpose is merely to improve sentient existence rather than being necessary for enabling it, I'd put them below crop plants and the symbiotic bacteria in my gut in terms of "life" that is of moral importance were I to consider them worthy of such a term.
I question these points, however, and especially their implications. Have you ever watched this?
- Iakus likes this
#457
Posted 03 March 2016 - 09:23
I question these points, however, and especially their implications. Have you ever watched this?
I have. Much like similar ones in ME, Picard's arguments against Riker are based almost entirely upon false equivalences and generating pathos (teh feelz) rather than actually addressing the nature of sentience and whether Data physically possesses it (though it has a much better case than anything in the ME universe, given that it structurally possesses a fictitious "positronic brain" to enable such, and was purposefully designed to at least emulate sentience as closely as technologically possible to a human, rather than being designed as a mere tool for a specific task that doesn't require nor benefit from sentience like the geth or EDI).
I don't see anything inherently wrong with Maddox's idea of studying Data and using it to build similar creations with its unique capabilities for other tasks, and I still wouldn't see it as an injustice even if Data were confirmed beyond reasonable doubt to be sentient, due simply to the aggregate benefits trumping his individual whims. In that case Data would simply be the child in the metaphorical Omelas, so to speak.
As an aside, it's a bit of a silly premise for an episode. Data has been receiving the same rights and privileges as any other person up to this point, and something like this should have been brought up a lot sooner, such as when it was applying to the Academy, receiving its Starfleet commission or being promoted to Lieutenant Commander and therefore gaining legal authority over people. How would it do any of this if this question of Data's personhood hadn't already been decided? Can an Etch-a-Sketch apply to Starfleet simply by me scratching out a formal request on it or something? A legal precedent should already have been established by those actions alone.
- DaemionMoadrin likes this
#458
Posted 03 March 2016 - 09:24
Ignoring our own universe where this question is elementary, the assumption that ME synthetics even meet the definition of "life" in their own universe is categorically a false one (which, hilariously and despite obvious writer intent to the contrary, EDI and Legion/VI even inform us of themselves with the Reaper code plot and synthesis "I am alive" garbage). Anyway, life itself doesn't really ascribe much value independent of sentience, which is not a characteristic that any ME synthetic has displayed (indeed, all have consistently displayed the opposite).
Why are we assuming that moral worth stems from life? Or from sentience, for that matter?
Insofar as you wish to compare the two with such a false equivalence, given examples of ME universe synthetic "life" cannot experience pleasure or suffering, and therefore such machines are outside of the moral calculus themselves. They only affect it based upon the consequences of their existence upon sentients, which when the former are allowed to exist in an uncontrolled state (according to all in-universe evidence) leads to massive levels of unnecessary suffering. That alone would be enough to grant them less value than the sentient lifeforms. Frankly, considering their entire purpose is merely to improve sentient existence rather than being necessary for enabling it, I'd put them below crop plants and the symbiotic bacteria in my gut in terms of "life" that is of moral importance were I to consider them worthy of such a term.
If we instead use the (I think more defensible) standard of sapience, the Geth start to look pretty good.
Does it not? Go murder someone and inform me of its lack of "material" moral effect.
The law reacts to my actions based on their legality, not their morality.
Morality is contingent upon material consequences.
Why? Because it would otherwise lack prescriptive force? Does that matter?
Any system that doesn't recognize this is inherently flawed and not worth considering.
What do you think the point of morality is?
#459
Posted 03 March 2016 - 09:36
I have. Much like similar ones in ME, Picard's arguments against Riker are based almost entirely upon false equivalences and generating pathos (teh feelz) rather than actually addressing the nature of sentience and whether Data physically possesses it (though it has a much better case than anything in the ME universe, given that it structurally possesses a fictitious "positronic brain" to enable such, and was purposefully designed to at least emulate sentience as closely as technologically possible to a human, rather than being designed as a mere tool for a specific task that doesn't require nor benefit from sentience like the geth or EDI).
I agree. I think true AI is pretty much an impossibility, but it can occur if you actually model something off of an organic brain. For example, Guri in Star Wars is, to my eye, one of the few actual machine intelligences I would actually address as a person, but that is just because she has basically the most advanced simulated thought processors and learning matrices in existence, which are reflected in her price tag. She's a self-capable learning entity that rivals a starship in price because she was actually given the basis for a true sentience: logical structuring, preferences and thought as a self-contained entity. But it's RARE for me to actually view artificially created life as equal to natural life; there are only THREE cases I am even willing to debate the merits of, and none exist in Mass Effect.
Guri from Star Wars, Aigis from Persona and the thinking machines from Dune.
It's because they can actually act as individuals and form their own logic not only from prior experience but on an intellectual level befitting them as sapient. It's not all data and numbers, and that is where AIs truly fumble in a lot of settings; they act like nothing more than any macro I could cobble together to collect data.
#460
Posted 03 March 2016 - 09:47
- HurraFTP likes this
#461
Posted 03 March 2016 - 09:53
I have. Much like similar ones in ME, Picard's arguments against Riker are based almost entirely upon false equivalences and generating pathos (teh feelz) rather than actually addressing the nature of sentience and whether Data physically possesses it (though it has a much better case than anything in the ME universe, given that it structurally possesses a fictitious "positronic brain" to enable such, and was purposefully designed to at least emulate sentience as closely as technologically possible to a human, rather than being designed as a mere tool for a specific task that doesn't require nor benefit from sentience like the geth or EDI).
I don't see anything inherently wrong with Maddox's idea of studying Data and using it to build similar creations with its unique capabilities for other tasks, and I still wouldn't see it as an injustice even if Data were confirmed beyond reasonable doubt to be sentient, due simply to the aggregate benefits trumping his individual whims. In that case Data would simply be the child in the metaphorical Omelas, so to speak.
As an aside, it's a bit of a silly premise for an episode. Data has been receiving the same rights and privileges as any other person up to this point, and something like this should have been brought up a lot sooner, such as when it was applying to the Academy, receiving its Starfleet commission or being promoted to Lieutenant Commander and therefore gaining legal authority over people. How would it do any of this if this question of Data's personhood hadn't already been decided? Can an Etch-a-Sketch apply to Starfleet simply by me scratching out a formal request on it or something? A legal precedent should already have been established by those actions alone.
I think it is you who is making a false distinction. You are saying that the geth or EDI do not have the capability to feel pleasure or to suffer. Yet, there is nothing to indicate this and there are indications that the opposite is the case.
- EDI directly states that she has motivations and goals and that when these are promoted, she gets positive feedback which organics might call pleasure.
- Similarly, the geth have preferences (such as maximizing their processing power) and situations they reject (such as serving Nazara, direct interaction with organics, etc.), and they actively try to get to a state where their preferences are realized and their rejections are averted. They also have an equivalent to sentiment (they preserve Rannoch and compare it to Arlington cemetery). This alone indicates the pursuit of happiness (or pleasure, if you will).
The implementation of these mechanics may be very different to organics' but it is there. And you are missing one of the deciding points of Picard's argument, which is neither based on pathos nor on a false equivalency: if you allow the devaluation of sentient beings in any shape or form, you are creating the moral basis for allowing institutionalized slavery and racism. It doesn't matter whether or not an AI race has been created with a purpose; when they achieve sentience they must be given the right and privilege to define their own purpose. Why? Because the argument could just as well be used on an organic race. Say the Council had subdued the humans upon first contact and made them into a slave race. They would still breed us because they need a sustainable labor force. We would be created with the purpose of being slaves. Is this defensible to you? If not, then ask yourself whether or not it should be the case for a race of Datas, EDIs or geth.
Now, you didn't really defend putting the existing AIs back into slavery; you defended destroying them without a second thought, without feeling guilty about it and without remorse. I say, now that they are created, now that they are sentient and self-aware, this attitude devalues them just as much.
Just for the record, I also choose destroy but I do it because I don't see a viable alternative. I still see the destruction of the geth and EDI as genocide, just one that cannot be prevented. If it were the turians, the asari or the humans that would be killed by destroy instead of the geth, it wouldn't make much difference in my mind.
- Laughing_Man, HurraFTP and Giantdeathrobot like this
#462
Posted 03 March 2016 - 10:00
I agree. I think true AI is pretty much an impossibility, but it can occur if you actually model something off of an organic brain. For example, Guri in Star Wars is, to my eye, one of the few actual machine intelligences I would actually address as a person, but that is just because she has basically the most advanced simulated thought processors and learning matrices in existence, which are reflected in her price tag. She's a self-capable learning entity that rivals a starship in price because she was actually given the basis for a true sentience: logical structuring, preferences and thought as a self-contained entity. But it's RARE for me to actually view artificially created life as equal to natural life; there are only THREE cases I am even willing to debate the merits of, and none exist in Mass Effect.
Guri from Star Wars, Aigis from Persona and the thinking machines from Dune.
It's because they can actually act as individuals and form their own logic not only from prior experience but on an intellectual level befitting them as sapient. It's not all data and numbers, and that is where AIs truly fumble in a lot of settings; they act like nothing more than any macro I could cobble together to collect data.
What does brain structure have to do with it? Do you think asari brains work exactly the same as ours? Turians? Drell? For that matter, do you think the brains of two different humans work exactly the same? What makes the difference? Is having different neurotransmitters enough to devalue sentient life? If we use copper wires or silicon semiconductors instead of insulated semipermeable membranes built from a dual layer of fat molecules to conduct electrophysiological signal processing, is that enough? Where is the borderline between what you count as life and what you call a machine?
And as for "act as individuals and form their own logic from not only prior experience", what does this mean? How do we do anything different? How does EDI's decision to sacrifice herself for her crew if necessary, or the geth's decision to try out the hoax with the salarian goddess, qualitatively differ from an organic mind's?
If you want to use these arguments to write off an entire race, then you had better have damn good answers to these questions.
- Laughing_Man likes this
#463
Posted 03 March 2016 - 10:04
*snip*
While I agree with most of your points, there is always the need to consider the danger of AI becoming advanced enough quickly enough to consider organic life forms something akin to pests, and proceed to destroy them. (or use as living batteries...)
The Leviathans / Catalyst may not have been wrong about the danger of Synthetics, even if they were wrong about everything else...
#464
Posted 03 March 2016 - 10:08
While I agree with most of your points, there is always the need to consider the danger of AI becoming advanced enough quickly enough to consider organic life forms something akin to pests, and proceed to destroy them. (or use as living batteries...)
The Leviathans / Catalyst may not have been wrong about the danger of Synthetics, even if they were wrong about everything else...
Well, there is the same danger with organic lifeforms (look at how the rachni wars were interpreted before we knew that Sovereign may have been involved there). For that matter, look at how certain ideologies in our own history deal with other humans that don't follow them (various religious and racial "cleansings" come to mind). I don't think this is at all limited to AIs. If one of them gets this idea, we'll have to deal with it in the same terms as we deal with it in the case of organics: on a case-by-case basis. It may be risky but I don't see a logical, non-hypocritical argument for any other way to do it.
- HurraFTP likes this
#465
Posted 03 March 2016 - 10:11
Yes, I can see that point. Living as though you have free will, whether that is true or not, helps morale, I guess.
If I might jump in here, I hold the position that it doesn't matter whether or not we have true free will.
If we have free will, we get to make our own decisions.
If we don't, then we still effectively get to make our own decisions, since we don't know what choice we are supposed to make in any given scenario and whatever we pick will be what we were predetermined to choose.
Either way, I don't let the question of determinism vs free will affect the choices I make.
- Sylvius the Mad likes this
#466
Posted 03 March 2016 - 10:15
Well, there is the same danger with organic lifeforms (look at how the rachni wars were interpreted before we knew that Sovereign may have been involved there). For that matter, look at how certain ideologies deal with other humans that don't follow them. I don't think this is at all limited to AIs. If one of them gets this idea, we'll have to deal with it in the same terms as we deal with it in the case of organics: on a case-by-case basis. It may be risky but I don't see a logical argument for any other way to do it.
Obviously, it's just that with Synthetics the power balance is simply so far out there, that in a Skynet-like situation you are basically fracked.
They don't need to sleep, they can outrun you, out-think you, live on worlds without atmosphere, survive the vacuum of space,
they can just make an endless number of themselves very quickly, etc.
#467
Posted 03 March 2016 - 10:19
The moment where we learned (in LotSB) that Legion bought the ultimate edition of a game whose profits went to Eden Prime disaster relief, but never played it, is the moment I figured that his sentience was past debate. Unthinking, unfeeling machines simply do not behave like that.
- Laughing_Man, MrFob and HurraFTP like this
#468
Posted 03 March 2016 - 10:20
Obviously, it's just that with Synthetics the power balance is simply so far out there, that in a Skynet-like situation you are basically fracked.
They don't need to sleep, they can outrun you, out-think you, live on worlds without atmosphere, survive the vacuum of space,
they can just make an endless number of themselves very quickly, etc.
Well, then we had better not try to create them in the first place, I guess. You are making a very practical argument that is logical, but this thread is about the morality of our actions. A preemptive strike on the chance that the species (AI or not) could become hostile and dangerous, such as the one the quarians launched against the geth, may have been practically sensible, but IMO it was still morally wrong.
If we follow this logic, we should also immediately wipe out the yahg, who are physically superior, and probably follow up with the krogan and the asari first chance we get.
#469
Posted 03 March 2016 - 10:26
Well, then we better not try to create them in the first place, I guess. You are making a very practical argument that is logical but this thread is about the morality of our actions. A preemptive strike on the chance that the AI could become hostile, as the quarians launched against the geth, may have been practically sensible but IMO it was still morally wrong.
No no, that's what I was trying to point out, self-preservation is part of the moral consideration as well.
I mean, according to my morals, acting to defend yourself is a basic moral right. And in some extreme situations, striking first is merely the only option.
#470
Posted 03 March 2016 - 10:29
The moment where we learned (in LotSB) that Legion bought the ultimate edition of a game whose profits went to Eden Prime disaster relief, but never played it, is the moment I figured that his sentience was past debate. Unthinking, unfeeling machines simply do not behave like that.
Unless of course this was part of some elaborate attempt on the geth's part to make themselves look less threatening to organics.
I mean, they can out-think organics.
- Sylvius the Mad likes this
#471
Posted 03 March 2016 - 10:33
acting to defend yourself is a basic moral right. And in some extreme situation, striking first is merely the only option.
To me, this is a paradoxical statement. Either you defend yourself (which you can only do when you are attacked) or you act hostile to danger (which means striking first). Claiming that preemptive strikes are defensive has been the justification for many wars (and many tragedies). Who is defending if you strike first? I'd say it sure as hell isn't you.
I'll grant that it is a grey area, there have been situations where nations have clearly goaded other ones into war and where preparations for aggression were so obvious, that it can be argued that a first strike may have been defensive. Even then, I'd rather prepare a defense than an offense. But in any case, I haven't seen any such grey area in Mass Effect yet and it surely doesn't apply to the quarians/geth on Rannoch. Asking a philosophical question is not a declaration of war.
Unless of course this was part of some elaborate attempt on the geth's part to make themselves look less threatening to organics.
I mean, they can out-think organics.
I don't know about you but in my book, "innocent until proven guilty" is a concept.
#472
Posted 03 March 2016 - 10:33
No no, that's what I was trying to point out, self-preservation is part of the moral consideration as well.
I mean, according to my morals, acting to defend yourself is a basic moral right. And in some extreme situation, striking first is merely the only option.
Just to cut in here - In general, the principle of self-defense in law precludes "striking first." Self-defense using force must be limited to a force only sufficient to repel the attack, not the threat of an attack. "The force used in self-defense may be sufficient for protection from apparent harm (not just an empty verbal threat) or to halt any danger from attack, but cannot be an excuse to continue the attack or use excessive force."
http://legal-diction...om/Self-Defense
It can be complex, and war is certainly a different playing field... but the moral premise behind the idea of "self-defense" is that the one who strikes first is more likely the aggressor and not the defender.
#473
Posted 03 March 2016 - 10:34
Obviously, it's just that with Synthetics the power balance is simply so far out there, that in a Skynet-like situation you are basically fracked.
They don't need to sleep, they can outrun you, out-think you, live on worlds without atmosphere, survive the vacuum of space,
they can just make an endless number of themselves very quickly, etc.
We could say something similar of the Krogan, what with their super toughness and explosive breeding rate.
The Geth also did not display those characteristics. They never became a force to be reckoned with; heck, the Quarians wiped the floor with them pre-Reaper upgrade. Thanks in large part to Xen's technobabbly countermeasure, but still.
In particular, I assume rapid reproduction of synthetics is not that easy, as they require a fairly hefty amount of resources and infrastructure to produce. One big advantage they do have over organics is that once a Geth is manufactured, they are good to go, whereas a human requires almost two decades of growth and education before becoming a productive member of society. Of course that number varies among different species, but the basic principle remains the same.
#474
Posted 03 March 2016 - 10:44
Unless of course this was part of some elaborate attempt on the geth's part to make themselves look less threatening to organics.
I mean, they can out-think organics.
You only learn about that by happening to infiltrate the Shadow Broker's ship and happening to look at his dossiers, which happen to interest themselves to your squadmate's lives in general and into Legion's hobby as a gamer in particular.
That's the kind of "deception" that's just too well hidden to be effective, methinks.
#475
Posted 03 March 2016 - 10:44
Asking a philosophical question is not a declaration of war.
I can agree with that. Still the implications of this question cannot simply be ignored.
Regarding striking first, see next section.
Just to cut in here - In general, the principle of self-defense in law precludes "striking first." Self-defense using force must be limited to a force only sufficient to repel the attack, not the threat of an attack.
Law is not equal to morality, I may be forced to follow all the laws of the country I live in, but I don't necessarily agree with all of them.
And in some situations you simply cannot afford to restrict yourself in this manner, especially when the threat to you is existential, close to existential,
or the enemy is simply much more powerful, numerous, etc.
Again, I'm talking about extreme circumstances here.
We could say something similar of the Krogan, what with their super toughness and explosive breeding rate.
The Geth also did not display those characteristics. They never became a force to be reckoned with; heck, the Quarians wiped the floor with them pre-Reaper upgrade. Thanks in large part to Xen's technobabbly countermeasure, but still.
In particular, I assume rapid reproduction of synthetics is not that easy, as they require a fairly hefty amount of resources and infrastructure to produce. One big advantage they do have over organics is that once a Geth is manufactured, they are good to go, whereas a human requires almost two decades of growth and education before becoming a productive member of society. Of course that number varies among different species, but the basic principle remains the same.
I would agree about the Krogan, their explosive birthrate is something the writers possibly didn't consider all the way.
Regarding the Geth, I would argue that they were "nerfed" to make them seem more like the sympathetic underdog.
They should have been able to rip the extranet to shreds, destroy economies, take over spaceships mid-fight, etc.