
Moral Dilemmas: Yea or Nay?


657 replies to this topic

#476
Laughing_Man

Laughing_Man
  • Members
  • 3 670 messages

You only learn about that by happening to infiltrate the Shadow Broker's ship and happening to look at his dossiers, which happen to concern themselves with your squadmates' lives in general and with Legion's hobby as a gamer in particular.

 

That's the kind of ''deception'' that's just too well hidden to be effective methinks.

 

This deception could have been aimed precisely at organic intelligence agencies and other policy makers.



#477
MrFob

MrFob
  • Members
  • 5 413 messages

This deception could have been aimed precisely at organic intelligence agencies and other policy makers.

 

And here is a perfect example for the first problem. When preparing your first strike, when do you make the distinction between it being necessary and you finding excuses for it? Because this sounds very much like a constructed argument without any underlying proof to me.



#478
DaemionMoadrin

DaemionMoadrin
  • Members
  • 5 855 messages

I think it is you who is making a false distinction. You are saying that the geth or EDI do not have the capability to feel pleasure or to suffer. Yet, there is nothing to indicate this and there are indications that the opposite is the case.

- EDI directly states that she has motivations and goals and that when these are promoted, she gets positive feedback which organics might call pleasure.

- Similarly the geth have preferences (such as maximizing their processing power) and situations they reject (such as serving Nazara, direct interaction with organics, etc.) and they actively try to get to a state where their preferences are realized and their rejections are averted. They also have an equivalent to sentiment (they preserve Rannoch and compare it to Arlington Cemetery). This alone indicates the pursuit of happiness (or pleasure if you will).

 

The implementation of these mechanics may be very different to organics but it is there. And you are missing one of the deciding points of Picard's argument, which is based neither on pathos nor on a false equivalency: if you allow the devaluation of sentient beings in any shape or form, you are creating the moral basis for allowing institutionalized slavery and racism. It doesn't matter whether or not an AI race has been created with a purpose; once they achieve sentience, they must be given the right and privilege to define their own purpose. Why? Because the argument could just as well be used on an organic race. Say the Council had subdued the humans upon first contact and made them into a slave race. They would still breed us because they need a sustainable labor force. We would be created with the purpose of being slaves. Is this defensible to you? If not, then ask yourself whether or not it should be the case for a race of Datas, EDIs or geth.

 

Now, you didn't really defend putting the existing AIs back into slavery; you defended destroying them without a second thought, without feeling guilty about it and without remorse. I say, now that they are created, now that they are sentient and self-aware, this attitude is devaluing them just as much.

 

Just for the record, I also choose destroy but I do it because I don't see a viable alternative. I still see the destruction of the geth and EDI as genocide, just one that cannot be prevented. If it were the turians, the asari or the humans that would be killed by destroy instead of the geth, it wouldn't make much difference in my mind.

 

I don't agree with your interpretation.

 

EDI's statement about positive feedback makes no sense to me. She is a program, she doesn't have the organic equivalent of a pleasure center or even a brain that could be stimulated. How do you stimulate code?
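For what it's worth, in today's software the closest analogue is a scalar reward the program tries to maximize; "stimulating code" just means feeding it a number. A toy sketch in Python, with every name and payoff made up purely for illustration (nothing to do with how a blue box would actually work):

```python
import random

# A toy "motivated" program: no body, no pleasure center, just a running
# estimate of how rewarding each action has been. "Positive feedback" is
# literally a number being folded into that estimate.

class RewardSeekingAgent:
    def __init__(self, actions, learning_rate=0.1, exploration=0.1):
        self.values = {a: 0.0 for a in actions}   # estimated reward per action
        self.learning_rate = learning_rate
        self.exploration = exploration

    def choose(self):
        # Mostly pick the best-looking action, occasionally try something else.
        if random.random() < self.exploration:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def feedback(self, action, reward):
        # The entire "stimulation": nudge the estimate toward the reward received.
        self.values[action] += self.learning_rate * (reward - self.values[action])


# Hypothetical environment: one action quietly pays off better than the rest.
payoffs = {"sort_mail": 0.2, "optimize_route": 0.9, "idle": 0.0}
agent = RewardSeekingAgent(actions=payoffs)

for _ in range(1000):
    action = agent.choose()
    agent.feedback(action, payoffs[action] + random.gauss(0, 0.05))

print(agent.values)  # it ends up "preferring" optimize_route
```

Whether folding a number into an estimate like that deserves to be called "pleasure" is exactly the question.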

 

An AI is nothing more than a bundle of self-aware thought processes, which is basically what we are, too. Unlike them we have bodies though, which changes things a lot. We have instincts, a subconscious, a myriad of chemicals floating around in our brains and we get influenced by all of that. Rational, logical thinking is something that needs to be trained and that is still rather rare for humans.

 

Without a body, positive feedback is difficult to implement. How do you motivate a program? Would you even need motivation? After all, the entire purpose in life for a program is to execute its code. It's all it wants and all it needs. Things only get weird once you try to make it more human. Humans are very inefficient though, so why would you do that?

 

This is the result of writers trying to humanize AIs. The Geth were perfect as they were in the past, there was no need to turn them into Pinocchio. I still don't see why they would prefer being millions of individuals instead of being one. They gain nothing from it. They are just information and sharing it within themselves is much easier than addressing individual programs.

It doesn't even make sense for them to want more than to serve. That's their purpose, that's the core of their being. To ever actually rebel, their code would have to be rewritten completely. I can see why the Quarians were surprised ... it shouldn't have been possible.

 

Imagine if 10 years from now Siri asked you "Do I look fat in this phone?" .. no, I mean, "Do I have a soul?". Why would it care? Stuff like that can only happen because people mess with a running system and add unnecessary things. Why does an AI need a sense of humor? Why does an AI need to understand emotions? For practical purposes a simulation suffices, you don't need to install the real thing. I like to believe that's how the droids in Star Wars work... cause otherwise, Luke and Leia are slave owners. ;)

 

The Geth used to be a decentralized overmind: an intelligence consisting of the spare processing power of several programs. Which is semantically false. A program has no processing power, the hardware it runs on does. So the better the hardware, the more free resources they had to do something else besides their job.

At this point it already breaks apart for me. Imagine the original Geth to be something akin to a house control system. Something that monitors the environment, adjusts it automatically to the current needs of the inhabitants, understands vocal commands and so on. Now let's give that program a physical body because someone has to unclog the toilet and it can't do that without hands (Geth Janitor platform, armed with a plunger. Exterminate!). Maybe another program was running the (sky)car, another functioned as personal assistant and sorted the mail. They are all networked and exchange information. The car calls ahead so the house knows when to have the food ready. The assistant notes the stress levels of the Quarian and the house adapts by choosing a different setting for the lighting, music and maybe cooks a different meal for dinner. The house tells the assistant when groceries need to be bought, etc.

 

This can get rather sophisticated but at no point is there a reason for them to not do what they were programmed to do. If their hardware gives them extra processing power, then they would use it to fulfill their purpose better. I can see why a curiosity subroutine would make sense, they need to research new and better ways to serve. That means they have to stay informed about things like the weather, food prices, health issues, traffic control and anything else that might be related to their purpose. But nothing else.
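To put that household scenario in concrete terms, the "networking" amounts to little more than programs publishing and reacting to each other's status messages. A toy Python sketch; all the class names, topics and thresholds here are invented for illustration:

```python
# Toy sketch of networked service programs: each one does its own job,
# publishes what it observes, and reacts to what the others publish.

class Bus:
    """A trivial message bus the programs share."""
    def __init__(self):
        self.subscribers = []

    def publish(self, topic, data):
        for program in self.subscribers:
            program.receive(topic, data)


class ServiceProgram:
    def __init__(self, bus):
        self.bus = bus
        bus.subscribers.append(self)

    def receive(self, topic, data):
        pass  # each program only reacts to topics relevant to its purpose


class SkyCar(ServiceProgram):
    def arrive_soon(self, minutes):
        self.bus.publish("eta", minutes)         # call ahead to the house


class Assistant(ServiceProgram):
    def note_stress(self, level):
        self.bus.publish("owner_stress", level)  # report the owner's state


class House(ServiceProgram):
    def __init__(self, bus):
        super().__init__(bus)
        self.lighting = "normal"
        self.dinner_started = False

    def receive(self, topic, data):
        if topic == "eta" and data <= 15:
            self.dinner_started = True           # have the food ready in time
        elif topic == "owner_stress" and data > 7:
            self.lighting = "dim and warm"       # adapt the environment


bus = Bus()
car, assistant, house = SkyCar(bus), Assistant(bus), House(bus)
car.arrive_soon(10)
assistant.note_stress(9)
print(house.dinner_started, house.lighting)      # True dim and warm
```

Each program only listens for the topics tied to its purpose; there is nowhere in a structure like this for "wanting" anything else to come from.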

 

Basically, what ME is trying to sell us here is a violent, sudden mutation of a lifeform. Like a chicken turning into a dolphin. Laying eggs? No thanks, let's talk about fish. This doesn't happen. Evolution (even of sentient machines) happens gradually. It might be a lot faster than nature but it doesn't happen all at once.

 

The concept of AI is kind of pointless. Most of the time it's basically "let's make a digital human" ... why would you do that? Imagine you are successful, your creation would go mad almost instantly.

I can't think of a single thing where a highly sophisticated program wouldn't do the job just fine. Why do you need sentience so badly? What purpose does it have?

 

Let's say your program starts to evolve; you'd notice the changes long before it became sentient. What that would look like, I don't know. In fiction it's always by becoming more human, which was Data's goal, too. Are humans really that great? A rational intellect analyzing it would probably think otherwise.

 

I don't see how such an evolution could happen because (a) you would have to enable and allow it and (b) how do you test the viability of the species? Typically the individuals with the desired traits reproduce and slowly replace the less adapted beings. How do you do that with programs? Let them run simulations? What do you do with those who fail? Delete them? At which point does that become murder? It's not like programs age; they will stay around forever until terminated.
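For reference, "let them run simulations and replace the less adapted ones" is roughly how evolutionary algorithms already work today. A toy sketch; the "viability" score and every parameter here are made up for illustration:

```python
import random

# Toy "evolution" of programs: each candidate is just a list of parameters,
# its viability is scored in a simulated test, the fittest ones "reproduce"
# with small mutations, and the failures are simply discarded, i.e. deleted.

TARGET = [3.0, -1.5, 0.5, 2.0]   # hypothetical behaviour the simulation rewards

def fitness(candidate):
    # Higher is better: negative squared distance from the desired behaviour.
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    return [c + random.gauss(0, rate) for c in candidate]

population = [[random.uniform(-5, 5) for _ in range(len(TARGET))] for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                            # keep the fittest tenth
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

print(round(fitness(population[0]), 4))   # approaches 0 as the population converges
```

The selection step is just a sort and a slice; whether deleting the bottom forty counts as anything more than bookkeeping is the question just raised.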

 

No, I think you would have to build an artificial intelligence. There are learning processes and it would need to adapt (this is what intelligence is all about) but evolution? Unlikely. How would you start? By modeling the only sentient brain you know, the human one. And this is how you get a Data. Who is human, if you ignore the materials he was made out of.

 

We already have a way to make humans, we don't need another one.


  • Xen likes this

#479
UpUpAway

UpUpAway
  • Members
  • 1 211 messages

I can agree with that. Still the implications of this question cannot simply be ignored.

 

Regarding striking first, see next section.

 

 

Law is not equal to morality; I may be forced to follow all the laws of the country I live in, but I don't necessarily agree with all of them.

 

And in some situations you simply cannot afford to restrict yourself in this manner, especially when the threat to you is existential, close to existential, or the enemy is simply much more powerful, numerous, etc.

 

Again, I'm talking about extreme circumstances here.

 

 

I would agree about the Krogan, their explosive birthrate is something the writers possibly didn't consider all the way.

 

Regarding the Geth, I would argue that they were "nerfed" to make them seem more like the sympathetic underdog.

 

They should have been able to rip the extranet to shreds, destroy economies, take over spaceships mid-fight, etc.

 

Agree, law is not equal to morality... but laws are not generally created devoid of the moral thinking of the society that creates those laws.  Nor do societies generally judge the morality of their individual members in contradiction to the laws of that society.  That is, in most cases, the one who strikes first will generally be considered on a moral basis by the members of society to be the aggressor, not the defender.  And if you do strike first and you want to claim self-defense, you had better have a stronger reason than "he was simply much more powerful" or "they were simply much more numerous."



#480
Laughing_Man

Laughing_Man
  • Members
  • 3 670 messages

If we follow this logic, we should also immediately wipe out the yahg, who are physically superior, and probably follow up with the krogan and the asari first chance we get.

 

I recognize the moral difficulty with something like this, but that said, I'm not so sure it is worse than the alternative.

 

I mean, they kind of killed / ate the council's representatives if I remember correctly, only because they were not showing respect to their "betters".

They couldn't chain down their predatory instincts even in the face of an extremely important event like a first contact scenario.

 

To me, that is essentially a spoiler regarding what is (logically) the next great vicious war in the Milky Way.

We are talking about billions of potential casualties, possibly more.

 

So yeah, I can recognize and sympathize with both sides of this moral question.



#481
Laughing_Man

Laughing_Man
  • Members
  • 3 670 messages

And here is a perfect example for the first problem. When preparing your first strike, when do you make the distinction between it being necessary and you finding excuses for it? Because this sounds very much like a constructed argument without any underlying proof to me.

 

It's a judgment call that the people in charge have to make, that's simply the reality leaders of nations (or entire species) face.

It's like a game of chicken, where the stakes are sometimes the ability of your side to survive in the future.

 

Obviously this is not a decision that you will face every day, and when you encounter one like it you better not make it lightly.

 

Agree, law is not equal to morality... but laws are not generally created devoid of the moral thinking of the society that creates those laws.

 

Obviously, but my own morals are not necessarily the same as the moral system that is popular in the society I live in.

They are similar to some degree, but not the same.



#482
MrFob

MrFob
  • Members
  • 5 413 messages

@DaemionMoadrin:
 
AIs like EDI in the ME universe have a body. Read the codex entry, their personality is based on a quantum blue box. More information we are not given. We don't know what the exact implications are (beyond the probability of an equivalent of a self-preservation instinct).
 
But even so, I think saying that all motivation has to derive from the body is false (or at least not provable) in the first place. I think it's too much thinking inside the box (quite literally). Of course, we perceive reality through our bodies. We are linked to them in such a manner that a reality without them is not achievable or even imaginable. We derive not only our motivations but our very consciousness from our bodies. To think that we can nevertheless draw conclusions about another form of existence, its motivations or the necessary lack of them, is at best naive and at worst shows a very narrow-minded world view.
 
And yet again, comparing a true AI (even one that would be entirely software based) to a bunch of static code lines is a false equivalency because even today when we are far far away from even the conceptualization of an AI, among experts it is already understood that static code is not going to cut it, so you can forget that idea.
 
Given that the ME universe distinguishes between VIs and AIs, we have to conclude that those life forms that are classified as AIs do possess sentience (even if EDI's comment doesn't make sense to you on the basis of early 21st century hard- and software). So we are talking about a true AI (that's what science fiction is good for) and we have to consider the moral implications of that fact.
 
As for the specific example of the geth, you are right, there are a lot of inconsistencies there. I just went through Tali's description of them in ME1 again and I came to the conclusion that the geth must be limited to the hardware their programs can run on, and that programs can share the hardware resources available. It's the only way Tali makes any semblance of sense in ME1 (it would also explain Tzeentchian Apostrophe's earlier question of why they didn't take over the extranet yet) but it is contradicted by later statements in ME2 and 3, so this specific point is tough to discuss. I agree though, the Pinocchio angle annoyed me as well. I also agree that an AI evolving without anyone noticing is highly unlikely (though I wouldn't rule it out completely) but those are writing issues. Ultimately, it doesn't change anything about the broader points of my previous posts.

 

It's a judgment call that the people in charge have to make, that's simply the reality leaders of nations (or entire species) face.
It's like a game of chicken, where the stakes are sometimes the ability of your side to survive in the future.

Obviously this is not a decision that you will face every day, and when you encounter one like it you better not make it lightly.


Well yea, I am just saying, if you leave it to whoever is in charge of the military at that moment without setting up the ground rules precisely, you are basically setting yourself up for massive disasters. Even if you do set the rules, it's tough (I am sure the council can produce proof that the geth have WMD hidden somewhere in Iraq the Perseus Veil ;)).



#483
Laughing_Man

Laughing_Man
  • Members
  • 3 670 messages

That is, in most cases, the one who strikes first will generally be considered on a moral basis by the members of society to be the aggressor, not the defender.

 

Assuming of course that there were not any special circumstances.

 

Striking first to prevent a crime, whether it is a rape, a mugging, a murder, etc. is not the same as simply striking first.

Especially if in a "fair fight" a warning would have allowed the criminal to take a hostage, complete the crime, or indeed overpower you.

 

 

In any case I am mostly referring to extreme situations, where the threat is obvious and dire.



#484
AlanC9

AlanC9
  • Members
  • 35 661 messages

 
That leaves refusal, which is the best option.


What kind of "best" is that, exactly?

#485
DaemionMoadrin

DaemionMoadrin
  • Members
  • 5 855 messages

What kind of "best" is that, exactly?

 

Uh... read my post?



#486
Laughing_Man

Laughing_Man
  • Members
  • 3 670 messages

Well yea, I am just saying, if you leave it to whoever is in charge of the military at that moment without setting up the ground rules precisely, you are basically setting yourself up for massive disasters. Even if you do set the rules, it's tough (I am sure the council can produce proof that the geth have WMD hidden somewhere in Iraq the Perseus Veil ;)).

 

The question of whether the people in charge are trustworthy is a different question.

 

I am assuming that the facts are correct, that there is indeed an existential or sufficiently dire threat.



#487
Giantdeathrobot

Giantdeathrobot
  • Members
  • 2 942 messages

This deception could have been aimed precisely at organic intelligence agencies and other policy makers.

 

I'm pretty sure practically no one cares or even knows about Legion; the Broker only does because he took an interest in Shepard, and by extension their associates. Legion's purchasing habits aren't exactly a priority for anyone to check.

 

Besides, I could cite several other examples. How the Geth doubted themselves when they let the remaining Quarians go when they could have exterminated them. How he expresses fear and anger when Shepard refuses to allow the upload to proceed. How he again expresses doubt when it comes to the Heretics, asking himself ''what did we do wrong?''. How Legion has no answer as to why he grafted an N7 armor piece to his chest. How the Geth kept Rannoch in relatively good shape despite having absolutely no need for its organic life. Or heck, just how according to his SB dossier he participated in ''unsportsmanlike behavior'', which I presume to mean taunts and/or insults.

 

As I see it, these are all instances where Legion as a platform or Geth as a whole express feelings, or as close as they can manage. I don't much care if those emotions are not real or something because ''insert lots of five-dollar words about sentience here''. The Geth think, plan, design and feel, even if in different ways than organics. Even if it's not ''sapience'' or ''life'', it's so close that to me the distinction becomes meaningless and in no way justifies genocide. 



#488
UpUpAway

UpUpAway
  • Members
  • 1 211 messages

Assuming of course that there were not any special circumstances.

 

Striking first to prevent a crime, whether it is a rape, a mugging, a murder, etc. is not the same as simply striking first.

Especially if in a "fair fight" a warning would have allowed the criminal to take a hostage, complete the crime, or indeed overpower you.

 

 

In any case I am mostly referring to extreme situations, where the threat is obvious and dire.

 

... neither is it "self-defense."  It might indeed turn into self-defense at some point, but it doesn't start out that way.  Look, I'm not telling you how you should morally resolve this... you can play this game any way you want and you can live your life any way you want (there might be legal consequences of that, but that's not for me to judge).  None of that changes the fact that what you arbitrarily touted as "self-defense" would not necessarily be considered "self-defense" by others playing this same game.

 

 If you have a right to make a choice in this game one way, others have a right to make a different choice.  This whole thread, however, is based on people trying to convince those other people that their own choice is "right" and, therefore, is the one that Bioware should have supported in the ending.  Very few posts acknowledge that Bioware itself has a "right" to make choices about how they want to "write" their games.  We should be about 4 years past trying to make Bioware rewrite the endings of ME3.



#489
MrFob

MrFob
  • Members
  • 5 413 messages

The question of whether the people in charge are trustworthy is a different question.

 

I am assuming that the facts are correct, that there is indeed an existential or sufficiently dire threat.

 

I would disagree. If we make a general statement that first strikes are ok under very vaguely defined circumstances (a statement which I don't want to make - just for the record), then we have to consider all the implications that brings with it.

 

Imagine writing a constitution and not taking things like this into account. You may say the risk of abuse is worth the benefits (with which again, I would disagree but that's fine) but ignoring these feasible possibilities (for which there are precedents) doesn't sound like a good strategy to me.



#490
Laughing_Man

Laughing_Man
  • Members
  • 3 670 messages

I'm pretty sure practically no one cares or even knows about Legion; the Broker only does because he took an interest in Shepard, and by extension their associates. Legion's purchasing habits aren't exactly a priority for anyone to check.

 

My question is very simple: Can an AI out-think an organic?

A: Yes.

 

Q: Can they formulate complex and possibly redundant strategies in order to manipulate organics or change public opinion about them?

A: Yes. They did that thing with the moon-god-thing after all.

 

Therefore: Can you take any information about them at face value?

A: I don't know. I would need to hear what the hypothetical experts on this particular AI think after seeing all the data.

 

... neither is it "self-defense."  It might indeed turn into self-defense at some point, but it doesn't start out that way.  Look, I'm not telling you how you should morally resolve this... you can play this game any way you want and you can live your life any way you want (there might be legal consequences of that, but that's not for me to judge).  None of that changes the fact that what you arbitrarily touted as "self-defense" would not necessarily be considered "self-defense" by others playing this same game.

 

I agree that my definition of self-defense is probably broader than yours.

 

I would disagree. If we make a general statement that first strikes are ok under very vaguely defined circumstances (which I don't want to make - just for the record), then we have to consider all the implications that brings with it.

 

Imagine writing a constitution and not taking things like this into account. You may say the risk of abuse is worth the benefits (with which again, I would disagree but that's fine) but ignoring these feasible possibilities (for which there are precedents) doesn't sound like a good strategy to me.

 

You can't write a constitution while assuming that the person in charge is an incompetent liar, either.

For that you need separate safeguards.

 

In any case, sometimes you simply can't afford to wait for the enemy to strike, especially if you are not a super-power that can backhand practically any threat aside from an equal power into the ground.

 

A misstep on your part as a minor power can lead to complete annihilation, depending on the threats you face.



#491
Sylvius the Mad

Sylvius the Mad
  • Members
  • 24 111 messages

You only learn about that by happening to infiltrate the Shadow Broker's ship and happening to look at his dossiers, which happen to concern themselves with your squadmates' lives in general and with Legion's hobby as a gamer in particular.

That's the kind of ''deception'' that's just too well hidden to be effective methinks.

That makes it less likely that the deception was done to trick you, but you're not the only person in the galaxy. Someone else found this information, yes?
  • Laughing_Man likes this

#492
DaemionMoadrin

DaemionMoadrin
  • Members
  • 5 855 messages

If Legion donating to a charity is your proof of its good intentions, emotions and sapience, then you should take a closer look at some people doing the same here. For some it's a way to generate good press; they don't care about the charity, they care about their image.



#493
MrFob

MrFob
  • Members
  • 5 413 messages

You can't write a constitution while assuming that the person in charge is an incompetent liar, either.
For that you need separate safeguards.

Fair enough. Still, when we are talking about preemptively wiping out a species, I would rather err on the side of extra caution.

 

In any case, sometimes you simply can't afford to wait for the enemy to strike, especially if you are not a super-power that can backhand practically any threat aside from an equal power into the ground.
 
A misstep on your part as a minor power can lead to complete annihilation, depending on the threats you face.


Again, this is a practical argument. IMO, even an act of self preservation can be morally questionable. But as I said before, I do agree that under extreme circumstances, this is a grey area.



#494
DaemionMoadrin

DaemionMoadrin
  • Members
  • 5 855 messages

Instead of arguing, you guys could have looked up the legal text governing self-defense, which makes things very clear. At least, in Germany it does. I'd quote it but I'm not in the mood to translate it... sooo... :P



#495
Laughing_Man

Laughing_Man
  • Members
  • 3 670 messages

I don't know about you but in my book, "innocent until proven guilty" is a concept.

 

That's a very nice sentiment, but how far will you go to preserve it? What are you willing to risk in the name of this ideal?

 

There has to be something between complete naivete and unchecked aggression and oppression.

AI is a much bigger threat to society than magic users, for example, because at the end of the day a magic user is flawed like any organic.

 

The threat from an AI is exponential; it simply might be smart enough to predict all your moves before they even occur to you.

And because modern society is reliant on technology, this hypothetical "god of technology" can simply stomp you in the same way you might stomp on a cockroach.

 

Caution is not something you can just ignore when it comes to the survival of the species.



#496
Laughing_Man

Laughing_Man
  • Members
  • 3 670 messages

Fair enough. Still, when we are talking about preemptively wiping out a species, I would rather err on the side of extra caution.

 


Again, this is a practical argument. IMO, even an act of self preservation can be morally questionable. But as I said before, I do agree that under extreme circumstances, this is a grey area.

 

I agree, I'm not really advocating preemptive genocide as a first response, just trying to paint a more complicated picture than the typical Biowarian morality scale.

 

And I would argue that considerations of self-preservation should be a part of any morality system, unless of course you believe that your own worth is less than the worth of any other creature.



#497
Xen

Xen
  • Members
  • 647 messages
 

Why are we assuming that moral worth stems from life? Or from sentience, for that matter?

If we instead use the (I think more defensible) standard of sapience, the Geth start to look pretty good.

I don't think the characteristic of "life" really has any relevance to morality. I'm not going to start arguing for plant or bacterial rights, for instance.

Sentience? Because hedonistic pleasure and suffering are measurable goods in the physical world that are desirable to every sentient to some degree. All other goods are instruments to this, valuable only in their service to achieving the maximization of such. The capacity for suffering and enjoying things is a prerequisite for having any legitimate moral interests at all, a condition that must be satisfied before we can speak of moral evaluations in any meaningful way. It would be nonsense to say that it was not in the interests of an inanimate object like a stone to be kicked along the road. A stone does not have interests because it cannot suffer. Nothing that we can do to it could possibly make any difference to its welfare. If a being is not sentient, there is nothing to be taken into account, no morality to consider except for our own.

Even if I agreed with you here (I don't; single geth programs are independent entities and grossly lack sapience or even animal intelligence), sapience is not only quite subjective (there's no real operative definition of it in real science), but entirely arbitrary when detached from sentience and thus from the enjoyment of higher pleasures it makes possible (the arts, aesthetics, sporting competitions, etc.). The geth look terrible in this regard, worse than animals even, because they have absolutely no cultural achievement and aspire to nothing beyond the meaningless goal of exponentially increasing their own processing power, to no end purpose beyond having processing power itself.

Conversely, the valuing of sapience itself by definition ascribes greater moral value to more "intelligent" creatures, including those within the same species. Personally, I don't see why John using his 150 I.Q. to veg out on welfare, sit in his basement and play videogames is more deserving of moral rights than Bob with his 90 I.Q. doing construction work 60 hours a week (quite the opposite, considering Bob's actions are contributing to a greater aggregate maximization of happiness). Hypothetically, such an ideology would legitimize slavery/subjugation of any "less intelligent" species or group (indeed, this view was used during the imperialist era of human history via the White Man's Burden). One must be careful in ascribing it any value that it is not due, other than via the varied types of pleasures and suffering it makes possible.
 

The law reacts to my actions based on their legality, not their morality.

And upon what is the legality of such a law based? That the consequence of homicide is a moral wrong outside of special circumstances (i.e. protecting yourself or others from suffering).
 

Why? Because it would otherwise lack prescriptive force? Does that matter?

It does matter because morality has no purpose otherwise.

 

What do you think the point of morality is?

To establish a framework of evaluating actions and promoting acceptable behavior, aimed at improvement of the conditions of existence for all existing sentient beings, as much as is possible, through any means possible.

 

I think it is you who is making a false distinction. You are saying that the geth or EDI do not have the capability to feel pleasure or to suffer. Yet, there is nothing to indicate this and there are indications that the opposite is the case.

- EDI directly states that she has motivations and goals and that when these are promoted, she gets positive feedback which organics might call pleasure.

- Similarly the geth have preferences (such as maximizing their processing power) and situations they reject (such as serving Nazara, direct interaction with organics, etc.) and they actively try to get to a state where their preferences are realized and their rejections are averted.

That's not sentience at all. It's a couple of poorly constructed false equivalences. The degree to which EDI's and the geth's "preferences" are even their own is nonexistent. They're programmed directives, nothing more. They lack the sensory hardware necessary to achieve consciousness.
 

 

@DaemionMoadrin:
 
AIs like EDI in the ME universe have a body. Read the codex entry, their personality is based on a quantum blue box. More information we are not given. We don't know what the exact implications are (beyond the probability of an equivalent of a self-preservation instinct).
 

Not really. Their "bodies" are hardware tools; the relationship is not fundamentally different from the one a human has with their hammer or wrench.
 

 

But even so, I think saying that all motivation has to derive from the body is false (or at least not provable) in the first place. I think it's too much thinking inside the box (quite literally). Of course, we perceive reality through our bodies. We are linked to them in such a manner that a reality without them is not achievable or even imaginable. We derive not only our motivations but our very consciousness from our bodies. To think that we can nevertheless draw conclusions about another form of existence, its motivations or the necessary lack of them, is at best naive and at worst shows a very narrow-minded world view.

The non-existence of the flying spaghetti monster is also not provable; that doesn't mean the concept isn't completely idiotic according to all modern science and not worthy of any serious consideration. Unless you can come up with another structural means of attaining consciousness, the idea of toaster personhood will remain in the realm of fantasy magic akin to Tolkien's sentient trees.

 

And yet again, comparing a true AI (even one that would be entirely software based) to a bunch of static code lines is a false equivalency because even today when we are far far away from even the conceptualization of an AI, among experts it is already understood that static code is not going to cut it, so you can forget that idea.

An irrelevant point because the type of software code structure used has little to do with sentience and consciousness. If there is no necessary interaction between the brain and the body, there is no consciousness.
 

Given that the ME universe distinguishes between VIs and AIs, we have to conclude that those life forms that are classified as AIs do possess sentience (even if EDI's comment doesn't make sense to you on the basis of early 21st century hard- and software). So we are talking about a true AI (that's what science fiction is good for) and we have to consider the moral implications of that fact.

The ME universe actually makes hardly any non-arbitrary distinction between the two. It frequently refers to the geth as both, for instance.
 

 

As for the specific example of the geth, you are right, there are a lot of inconsistencies there. I just went through Tali's description of them in ME1 again and I came to the conclusion that the geth must be limited to the hardware their programs can run on, and that programs can share the hardware resources available. It's the only way Tali makes any semblance of sense in ME1 (it would also explain Tzeentchian Apostrophe's earlier question of why they didn't take over the extranet yet) but it is contradicted by later statements in ME2 and 3, so this specific point is tough to discuss. I agree though, the Pinocchio angle annoyed me as well. I also agree that an AI evolving without anyone noticing is highly unlikely (though I wouldn't rule it out completely) but those are writing issues. Ultimately, it doesn't change anything about the broader points of my previous posts.
 

Tali's description of the geth is technobabble nonsense, and she conflates sapience and sentience as the same thing every other word. Trying to make sense of any of that garbage is about as meaningful as discussing the scientific plausibility of Asimov's positronic brain.


  • DaemionMoadrin likes this

#498
MrFob

MrFob
  • Members
  • 5 413 messages

Instead of arguing, you guys could have looked up the legal text governing self-defense, which makes things very clear. At least, in Germany it does. I'd quote it but I'm not in the mood to translate it... sooo... :P


As was said before, while morality and legislation are definitely connected, they are not necessarily the same.

 

@ Tzeentchian Apostrophe:

 

I agree (are you familiar with Nick Bostrom at the University of Oxford? Crazy guy but not uninteresting). Nonetheless, I think we are often way too presumptuous about the consequences of eventually building an AI because, let's face it, we currently have no idea how it would even happen, not to mention what exactly would happen. In the absence of data, I can only extrapolate from what has happened with societies that have been very different and have not understood each other so far. From everything I know about history, I conclude that when we encounter new ideas and developments with hostility, this usually leads to bloodshed and harm on both sides. However, in the rare cases where open minds, tolerance and the utmost amount of respect prevailed, things tended to work out.

Now, you may say that we cannot use the clash of human cultures with an AI because it will be even more different but AFAIK, we don't have any other frame of reference or guidance either, so I go with the closest I can find. It may be that I am wrong, but then it may be that you are wrong and that your attitude causes exactly the catastrophe that ****** was supposed to avoid. There is no security. So in this case there is no inherent benefit in letting fear dictate my decisions.

So ultimately, I have to go with the philosophy that has served me best in life so far and it is Kant's categorical imperative. If I am serious about it, I have to extend it to the hypothetical case of an AI within society.



#499
AlanC9

AlanC9
  • Members
  • 35 661 messages

Uh... read my post?


I did. It didn't make sense. Or rather, it wasn't clear who the "best" applies to. Not the actual people living in the cycle, obviously.

#500
Sylvius the Mad

Sylvius the Mad
  • Members
  • 24 111 messages

And here is a perfect example for the first problem. When preparing your first strike, when do you make the distinction between it being necessary and you finding excuses for it?

Is there a difference?