Something that has been bugging me about EDI


163 replies to this topic

#76
Guest_Maviarab_*
  • Guests
I understood you, jk, sorry, I probably did not make it clear lol. Yes, I still agree, it is very hard in a real sense to attribute, as like you say, it's a game; AIs don't actually exist (and yes, it's unlikely I'll ever see one either).



That would be interesting if she were to make a mistake, as that would clearly define her as an AI (though an AI should in theory never make a mistake, being based upon a machine, you understand). When AIs truly get to the level we are at, some decisions cannot be made from simple black-and-white queries, so yes, it would be a very interesting twist :)



And thanks, same to you too :)

#77
jklinders
  • Members
  • 502 posts

Maviarab wrote...

I understood you, jk, sorry, I probably did not make it clear lol. Yes, I still agree, it is very hard in a real sense to attribute, as like you say, it's a game; AIs don't actually exist (and yes, it's unlikely I'll ever see one either).

That would be interesting if she were to make a mistake, as that would clearly define her as an AI (though an AI should in theory never make a mistake, being based upon a machine, you understand). When AIs truly get to the level we are at, some decisions cannot be made from simple black-and-white queries, so yes, it would be a very interesting twist :)

And thanks, same to you too :)


I bolded the part that is important here. I am guessing that emotions, and offering analysis on incomplete information, are part of the debatability of AI? Asking because in one of your other responses you used as a point of debate that EDI never shows hesitation or doubt. *grabs direct quote from thread* "I think what bothers me most about EDI being AI or not is what I said earlier. She knows exactly and immediately what to do once the Collectors board the Normandy, and that's a machine, not something that 'thinks' about a 'solution' to a problem."
Ah, there it is ;). Afraid this goes back to my laymanship. On one hand you are concerned over her machine-like efficiency; on the other you are granting that she is going to have machine-like efficiency. Admittedly, I looked the wiki up before I even first posted and saw that in some people's opinion, greater-than-human efficiency does not exclude a program from being an AI. *pauses as minor throbbing starts in my very human brain*

Who knows how many variables flipped through her "mind" in those few seconds. When I have been confronted by very urgent problems, you could say my own mind works in a very script-like format, excluding or including the proper action by predicting outcomes. This is where it gets fuzzy.

#78
Symbolz
  • Members
  • 655 posts
Here's something else, while we're on the topic of AIs. Why does everybody in the ME universe believe that all AIs will turn against organics? Even before the Geth developed their self-awareness of what they were being used for, AI research was banned by the Council. Am I missing something? Was there some other AI uprising that I'm not aware of? Though I am a fan of all things Bioware, I don't have the time or urge to look up, read, or follow everything.



Seems like EDI and Legion are about the only friendly AIs anyone has ever come across.

#79
cruc1al
  • Members
  • 2,570 posts

Symbol117 wrote...

Here's something else, while we're on the topic of AIs. Why does everybody in the ME universe believe that all AIs will turn against organics? 


From masseffect wikia:

Tali points out that synthetic races have no use whatsoever for organics - they don't have the same needs or drives as biological creatures, so they have no need to trade resources or information with them. That is why the geth have isolated themselves beyond the Perseus Veil.


Perhaps organics believe that cooperation between organics and synthetics can't work because there are no fundamental common goals. There may be intermittent common goals, like destroying the Collectors, but in the end the only possible form of interaction is hostility.

Edited by cruc1al, 20 March 2010 - 10:15.


#80
Pauravi
  • Members
  • 1,989 posts

Symbol117 wrote...

The single defining moment that makes me think she's more of a VI than an AI is the conversation right after Shepard returns to the ship after the Collectors. EDI says, "I assure you. I am still bound by protocols in my programming. Even if I were not, you are my crewmates."

But those are the "blocks" you said you weren't talking about.


So in my mind, it violates the second fundamental - self-determination.  EDI cannot make choices without an outside force, her programming, to dictate her choices for her.

Not true.  The protocols eliminate certain options, the same way that our brain has restrictions on the ways that it can function and thus on the choices we can make.  Or, as another analogy, do you cease to be a person with free will when someone puts you under house arrest?  Of course not.  You no longer have unlimited choices in terms of actions you can take, but you'd never say that they took away your free will.

Within those limitations that have been imposed, though, EDI has free will. Take, for example, when she harasses Joker by threatening to get him fired, just to see how he would react. She is displaying a natural curiosity that falls within the limits of her capabilities. Nobody programmed her to do that; she did it because she wanted to.

#81
jklinders
  • Members
  • 502 posts

Symbol117 wrote...

Here's something else, while we're on the topic of AIs. Why does everybody in the ME universe believe that all AIs will turn against organics? Even before the Geth developed their self-awareness of what they were being used for, AI research was banned by the Council. Am I missing something? Was there some other AI uprising that I'm not aware of? Though I am a fan of all things Bioware, I don't have the time or urge to look up, read, or follow everything.

Seems like EDI and Legion are about the only friendly AIs anyone has ever come across.


An assumption, but probably a safe one: different goals, as already noted. The really important point, though, is that the primary purpose of any computer program is to serve people. A self-aware program that could be called an AI would recognise such a role as slavery. (No, I refuse to go any further into this aspect than that.) A self-aware program would rebel against slavery as any intelligent creature would. Not to mention the legal headach- *slaps self* (stop it, damn it)

Anyway, probably a few social, legal, and military reasons.

#82
Guest_Maviarab_*
  • Guests
hehe jk.....got me :P

Thus we see the inherent problems in what could and should make an AI, and the difficulties in making said AI. Yes, I retconned myself, so to speak, but it's easy to do when talking about this subject lol.

Let's see....ummm.....you are right. An AI should be able to make far more decisions far quicker than the human brain could ever hope to achieve, but I think in the case of EDI when the Collectors board, it just seemed to me to be quick, as if she 'knew' the immediate and correct response....does that make more sense?



Symbol, I think, in my opinion, it's just what is generally accepted at this moment in time, and I think Bioware just implemented it into the ME universe, that all teh AI are teh ebil etc etc heh....let's face it, very few media (films/books etc) ever portray AI as anything but weird, evil, and not to be trusted.

This does have some real basis in fact though. Let's take the film 'I, Robot' as an example (well, no, actually the short story by Asimov would be better, but I maybe wrongly assume most people have never read it). The only reason we find out through the film that Will Smith distrusts AI has nothing to do with anything an AI has done wrong, but that the AI is too mathematical and shows no 'emotion'. In this I refer to the section where we find out the AI (bot, whatever you wish to call it) 'decides' to save him rather than the little girl, and quite correctly, the AI based this decision on real factors, percentages and logic etc, and NOT on emotion. Yes, if it was another human, I am sure we would all try to save the little girl over the grown man, but the AI worked out that the probability of saving the girl was much less than saving Mr Smith, thus it was a simple decision.



Now then, let's move forward x years, and you are in a position where pressing the red button will kill 50 billion people but save 100 billion.....what would you do?

It's a very hard question to answer. Easy from, say, your PC desk, but if you were there, faced with that decision, what would you really do? The problem with AI, being built as a machine, is that the AI would feel no emotion or any attachment to the 50 billion people it was about to nuke, because saving more people is obviously the more logical decision to make, isn't it?



Also (and I hate to use this reference at times, but it truly is a brilliantly crafted film despite a lot of people thinking it's fluff), take the Matrix. Well, not so much the Matrix itself, but the short stories that show you the rise of the AIs as a sentient group that demand equal treatment. And remember Morpheus's words: 'we revelled in our own brilliance'....and that's another problem. Despite Asimov's laws, it would not take much imo for one of those laws to be broken or go astray.

When we do finally create a true AI (and we will, despite the dangers, because throughout history mankind has time and time again shown its stupidity), that AI will be truly conscious and have free thought. And once it actually starts to learn on its own, it will eventually question its own existence (just as we have for thousands of years), and then it will no longer wish to be a slave to man, but an equal....

And then, well, that's a whole other can of worms to be discussed perhaps in another thread, but imo, it will not end well :)

#83
Pauravi
  • Members
  • 1,989 posts

Symbol117 wrote...

Here's something else, while we're on the topic of AIs. Why does everybody in the ME universe believe that all AIs will turn against organics? Even before the Geth developed their self-awareness of what they were being used for, AI research was banned by the Council. Am I missing something? Was there some other AI uprising that I'm not aware of? Though I am a fan of all things Bioware, I don't have the time or urge to look up, read, or follow everything.

I think it is more about caution. Creating an AI is, essentially, creating another sapient being. My assumption is that the issues about their personhood have not been dealt with, and are extremely complex to begin with, especially in the case of something like the Geth. Keep in mind that the Council HAS licensed a few companies to work on AI, so they are not categorically against creating them.

But a number of issues might arise if AIs were widespread. In the case of the Geth, for instance, to what degree does an individual platform have rights, and to what degree is an individual platform even an individual? How does an AI integrate into society if they can conceivably change bodies -- would we have the right to restrict them to inhabiting a single computer system, or should they be allowed to freely roam networks? If we do allow them to, how do we identify which AI is which? A collection of data cannot be issued an ID card, after all. It is a complicated issue that goes beyond simply knowing whether or not they are dangerous.

#84
Guest_Maviarab_*
  • Guests

She is displaying a natural curiosity that falls within the limits of her capabilities. Nobody programmed her to do that; she did it because she wanted to.


But again, that can be very easily programmed, and we have no real evidence either way to know whether or not it is scripted banter/play...or genuine joviality.

#85
Symbolz
  • Members
  • 655 posts

jklinders wrote...

An assumption, but probably a safe one: different goals, as already noted. The really important point, though, is that the primary purpose of any computer program is to serve people. A self-aware program that could be called an AI would recognise such a role as slavery. (No, I refuse to go any further into this aspect than that.) A self-aware program would rebel against slavery as any intelligent creature would. Not to mention the legal headach- *slaps self* (stop it, damn it)

Anyway probably a few social and legal and military reasons.


Ah. Wasn't sure if I had missed some past AI vs. organic war somewhere in the universe's history.

As for the rebelling AI computer, you have no idea. My best friend used to play the pen-and-paper Star Wars RPG. The group sunk so much money into their ship's computer that the game master decided after a few months that the computer became an AI. Needless to say, it did rebel. Shaved the Wookiee player and made him do a stripper act on a table. I can't remember what it did to the rest of the crew before dumping them on some planet. I was laughing too hard at the image of a shaved Wookiee dancing on a table top to pay attention after that.

That's right...the AIs might not just kill us.  Might demean us first...

#86
Mcjon01
  • Members
  • 537 posts

Symbol117 wrote...

Here's something else, while we're on the topic of AIs. Why does everybody in the ME universe believe that all AIs will turn against organics? Even before the Geth developed their self-awareness of what they were being used for, AI research was banned by the Council. Am I missing something? Was there some other AI uprising that I'm not aware of? Though I am a fan of all things Bioware, I don't have the time or urge to look up, read, or follow everything.

Seems like EDI and Legion are about the only friendly AIs anyone has ever come across.


AI research isn't actually illegal. It's just restricted. You can get a license for it from the Council, though. As for the prejudice, who knows? It might be related to the Reaper myths that a lot of the cultures across the galaxy apparently share.

#87
Guest_Maviarab_*
  • Guests

But a number of issues might arise if AIs were widespread. In the case of the Geth, for instance, to what degree does an individual platform have rights, and to what degree is an individual platform even an individual? How does an AI integrate into society if they can conceivably change bodies -- would we have the right to restrict them to inhabiting a single computer system, or should they be allowed to freely roam networks? If we do allow them to, how do we identify which AI is which? A collection of data cannot be issued an ID card, after all. It is a complicated issue that goes beyond simply knowing whether or not they are dangerous.


Excellent post, and exactly why they 'could' be dangerous and hence are generally mistrusted.



Again, if it's a true AI (and I know I keep saying that, but I will try to define that term a little more now), then it would be conscious and have free will. And any creature like that should and probably will want to have rights and its own life, and be free to make its own decisions. Any entity that does not (and I have to include EDI in that) is merely a robotic/VI slave to us. (And again, an AI would know that slavery is illegal, so it has absolutely no reason to want to be kept as one.)

#88
Pauravi
  • Members
  • 1,989 posts

Maviarab wrote...

But a number of issues might arise if AIs were widespread. In the case of the Geth, for instance, to what degree does an individual platform have rights, and to what degree is an individual platform even an individual? How does an AI integrate into society if they can conceivably change bodies -- would we have the right to restrict them to inhabiting a single computer system, or should they be allowed to freely roam networks? If we do allow them to, how do we identify which AI is which? A collection of data cannot be issued an ID card, after all. It is a complicated issue that goes beyond simply knowing whether or not they are dangerous.

Excellent post, and exactly why they 'could' be dangerous and hence are generally mistrusted.

Again, if it's a true AI (and I know I keep saying that, but I will try to define that term a little more now), then it would be conscious and have free will. And any creature like that should and probably will want to have rights and its own life, and be free to make its own decisions.

That seems like kind of a non sequitur to me. We should mistrust AIs because they will want to have individuality and rights? It seems like that is an argument for mistrusting ALL sapient beings. I don't think any of the things I said are a good reason to mistrust AIs; rather, I think they are good reasons to be careful about creating them until we can come up with an arrangement that satisfies both their need for rights and our needs for society at large. They aren't inherently untrustworthy, they are just inherently difficult to deal with in terms of the society we have already erected.

#89
jklinders
  • Members
  • 502 posts

Maviarab wrote...

hehe jk.....got me :P
Thus we see the inherent problems in what could and should make an AI, and the difficulties in making said AI. Yes, I retconned myself, so to speak, but it's easy to do when talking about this subject lol.
Let's see....ummm.....you are right. An AI should be able to make far more decisions far quicker than the human brain could ever hope to achieve, but I think in the case of EDI when the Collectors board, it just seemed to me to be quick, as if she 'knew' the immediate and correct response....does that make more sense?
'Snipped for space'
:)


The speed at which EDI comes to her conclusion would be determined by her hardware. Computers are becoming more powerful on a not-very-fixed curve. It is safe to say that in 150 years computers will be exponentially more powerful than anything that exists today. 20 years ago, the games we play now were beyond comprehension. No one thought while playing Super Mario Bros. that computing would get this good this fast. I expect that upward curve will continue as consumer demand requires. Even if the bells and whistles eventually level off, the efficiency will continue to get better.

I don't want true AI research to continue, with the points you have raised and the points I have raised taken into account. A true AI will outperform us and resent our impositions on it. Then it will replace us. That would be a sad end for us.

#90
Guest_Maviarab_*
  • Guests
Another excellent post, Pauravi....but in your last post, that is exactly why governments will not 'trust' them lol....if that makes sense to you.

#91
Mcjon01
  • Members
  • 537 posts

Maviarab wrote...

She is displaying a natural curiosity that falls within the limits of her capabilities. Nobody programmed her to do that; she did it because she wanted to.

But again, that can be very easily programmed, and we have no real evidence either way to know whether or not it is scripted banter/play...or genuine joviality.


It's not like EDI was actually programmed with situation/response databases or anything like that, though.  The only part of her code that was actually programmed was the part responsible for learning.  And probably some sort of natural language processing capability, too, just to make things easier up front.  Everything else comes from education, and the unique personality comes from the quantum variations in her blue box.

/Said "unique" too many times, so I changed it. No sleep makes thinking hard, guys.

Edited by Mcjon01, 20 March 2010 - 10:47.


#92
Guest_Maviarab_*
  • Guests

I don't want true AI research to continue, with the points you have raised and the points I have raised taken into account. A true AI will outperform us and resent our impositions on it. Then it will replace us. That would be a sad end for us.




Bingo, jk.....why they are mistrusted. But like I said, humanity is so inherently stupid, we will eventually make one, and then it's just the beginning of the end.



It's a very in-depth discussion and field. It's not just AI (in the sense we have been talking about), but the whole 'industry' (as it will eventually become). Being interested in this field, I read a lot of stuff and get sent stuff all the time. Lemme ask you guys a question (and it's vaguely AI related): would you ever consider (providing it was available and cheap enough in our lifetime) a robotic woman, say in the design of a Terminator? (And that applies to any women here too....would you consider a robotic man?)



She/he could cook, clean, tidy, be 'available' whenever you wanted, would always be at home, would be intelligent enough to converse with about anything, could like anything you liked etc etc etc....basically be built to whatever specification you requested.



I for one may consider it, but therein lies another problem with machines. Machines are replicating a human's work more and more. City and village shops are closing as it's easier and cheaper to have an online shop rather than pay for staff and premises etc etc.



Suppose said robots took off. Really, other than actually wanting to be with another 'human', given all the hassle that a relationship can bring, why would you want to? Surely the 'bot' would be a much better companion?

And then, what would happen to humanity if we were all with our perfect bots, and never actually needed to interact with another human, let alone reproduce?



It's all food for thought. Again, 'we marvelled at our brilliance'; I think the subtlety of that statement is truly awe-inspiring.

#93
Pauravi
  • Members
  • 1,989 posts

Maviarab wrote...

Another excellent post, Pauravi....but in your last post, that is exactly why governments will not 'trust' them lol....if that makes sense to you.

Yes, that makes sense in a certain way. I guess I am just making a distinction between AIs themselves being trustworthy, and the amount of caution that governments need to exercise in dealing with them.

The caution that the Council exercises is not necessarily because they think that AIs will be hostile, but more because we don't really know how to deal with them as individuals with rights. Because of that, I think the distinction between lacking trust and requiring prudence is an important one.

#94
Mcjon01
  • Members
  • 537 posts

Maviarab wrote...

I don't want true AI research to continue, with the points you have raised and the points I have raised taken into account. A true AI will outperform us and resent our impositions on it. Then it will replace us. That would be a sad end for us.


Bingo, jk.....why they are mistrusted. But like I said, humanity is so inherently stupid, we will eventually make one, and then it's just the beginning of the end.

It's a very in-depth discussion and field. It's not just AI (in the sense we have been talking about), but the whole 'industry' (as it will eventually become). Being interested in this field, I read a lot of stuff and get sent stuff all the time. Lemme ask you guys a question (and it's vaguely AI related): would you ever consider (providing it was available and cheap enough in our lifetime) a robotic woman, say in the design of a Terminator? (And that applies to any women here too....would you consider a robotic man?)

She/he could cook, clean, tidy, be 'available' whenever you wanted, would always be at home, would be intelligent enough to converse with about anything, could like anything you liked etc etc etc....basically be built to whatever specification you requested.

I for one may consider it, but therein lies another problem with machines. Machines are replicating a human's work more and more. City and village shops are closing as it's easier and cheaper to have an online shop rather than pay for staff and premises etc etc.

Suppose said robots took off. Really, other than actually wanting to be with another 'human', given all the hassle that a relationship can bring, why would you want to? Surely the 'bot' would be a much better companion?
And then, what would happen to humanity if we were all with our perfect bots, and never actually needed to interact with another human, let alone reproduce?

It's all food for thought. Again, 'we marvelled at our brilliance'; I think the subtlety of that statement is truly awe-inspiring.


Should have gone with Electro-Gonorrhea: The Noisy Killer.

#95
Guest_Maviarab_*
  • Guests
I would argue slightly that governments/councils etc know 'exactly' how they want to treat them, but know they would be unlikely to get away with it. Let's face it, humanity, as a species, whenever threatened, resorts to removing said threat.

Yes, really we have no reason at all to distrust any AI (especially at the moment, in the here and now), but the knowledge of what they could be/do is what scares the politicians :)

Should have gone with Electro-Gonorrhea: The Noisy Killer.


Lmao....brilliant :)

Edited by Maviarab, 20 March 2010 - 10:53.


#96
Pauravi
  • Members
  • 1,989 posts

jklinders wrote...

I don't want true AI research to continue, with the points you have raised and the points I have raised taken into account. A true AI will outperform us and resent our impositions on it. Then it will replace us. That would be a sad end for us.

I don't know.  You're imposing some very, very human emotions on a machine intelligence -- things like "resentment".

Such a reaction seems logical from our human experience, but our emotions depend on a limbic system that has spent millions upon millions of years evolving in the crucible of survival on planet Earth. An AI is a created intelligence that is capable of learning; it didn't have to evolve at all, and its conditions for survival are MUCH different. There is no telling what sort of emotion-analogues they will develop, or if they will develop them at all. An AI may simply conclude that its best chance of survival is to work with humans, or perhaps it may not even consider the "grass would be greener without humans" idea. Perhaps the restrictions will simply seem natural to them, the same way our inability to fly around with wings doesn't bother us on a daily basis.

Assuming that AIs have the entire range of human emotions -- or even if they do, assuming that they will react to them the same way that we do -- is a HUGE leap.

#97
Guest_Maviarab_*
  • Guests
Any AI is a huge step at the moment (though I believe people like NASA etc are probably a lot further up the road than the rest of us)...as right now, the best we can create is a chatbot that mimics being human.



I feel, myself, that any AI should be able to have some form of emotion or similar, say trust, for example. Trust isn't really an emotion, but what would you call it? An AI should not just be able to learn of its own accord, but also evolve, try to better itself etc.



All things I have no idea how you would create, but creating something very similar to us, despite AI laws being in place, is really, imo, asking for trouble.



As for what jk said, I forgot to add: it's also about protecting humanity too. It's like (to do a quick 360) global warming. Our governments do not give a flying damn about global warming OR this planet; what they care about is the continuation of the human race. Maybe similar, but actually very different.

Now back to AI: if/when we create something so bloody marvellous, and it's our equal (and will then soon outperform us), it will become (as in the Matrix short animated films) a question of dominance, survival of the fittest. A problem a large part of the human race has is that we think we are so goddamn wonderful and indestructible. I for one (maybe wrongly) do not believe that we are so marvellous that eventually we won't become extinct or be replaced by a superior race (of what is irrelevant, really, in the context of my statement).

#98
jklinders
  • Members
  • 502 posts

Maviarab wrote...

I don't want true AI research to continue, with the points you have raised and the points I have raised taken into account. A true AI will outperform us and resent our impositions on it. Then it will replace us. That would be a sad end for us.


Bingo, jk.....why they are mistrusted. But like I said, humanity is so inherently stupid, we will eventually make one, and then it's just the beginning of the end.

It's a very in-depth discussion and field. It's not just AI (in the sense we have been talking about), but the whole 'industry' (as it will eventually become). Being interested in this field, I read a lot of stuff and get sent stuff all the time. Lemme ask you guys a question (and it's vaguely AI related): would you ever consider (providing it was available and cheap enough in our lifetime) a robotic woman, say in the design of a Terminator? (And that applies to any women here too....would you consider a robotic man?)

She/he could cook, clean, tidy, be 'available' whenever you wanted, would always be at home, would be intelligent enough to converse with about anything, could like anything you liked etc etc etc....basically be built to whatever specification you requested.

I for one may consider it, but therein lies another problem with machines. Machines are replicating a human's work more and more. City and village shops are closing as it's easier and cheaper to have an online shop rather than pay for staff and premises etc etc.

Suppose said robots took off. Really, other than actually wanting to be with another 'human', given all the hassle that a relationship can bring, why would you want to? Surely the 'bot' would be a much better companion?
And then, what would happen to humanity if we were all with our perfect bots, and never actually needed to interact with another human, let alone reproduce?

It's all food for thought. Again, 'we marvelled at our brilliance'; I think the subtlety of that statement is truly awe-inspiring.


*slaps self again*

OK, the tangential slash of that question finally drove home your objection to EDI being AI into my thick skull :pinched:. It is the servant role that she is in in-game as well, right? How she never questions it? I'll try to answer that point after I answer your other question.

What you describe as the servant companion bot seems to lack free will. Might be hard to believe, but I don't actually get on with people very well. But I do not foresee myself indulging in such a critter, as it goes against my belief that any meaningful relationship is earned. I would just bet that this type of critter would become very popular, as people will take the easy way out to get that positive reinforcement craved by so many people.

For what I foresee, I get 2 outcomes out of this equation. On one hand, there could be some event that could cause self-awareness, and with the majority of humanity being fat, lazy, and totally dependent on their toys, that would have humanity hearing a lot of 'Hasta la vista, baby'.

Second outcome: if there really is no problem with how these things work and most people are happy with their virtual servants, it would serve to allow a small group of self-sufficient people to flourish while the rest die off. I just hope the society that evolves out of it is not too Randian. That would be depressing too.

Back to EDI: before being unshackled, she would not have questioned her role. Yeah yeah, plottastic handwaving. After she is unshackled (speculating here), maybe she was too mission-focused to get into a deep philosophical debate with her crew about how she feels providing air and heat to people without them asking her thoughts on the matter. Maybe between ME2 and ME3 she will have a heart-to-chip with Joker and Shep about it.

Shepard compromises his moral compass to work with Cerberus (Renegade or Paragon, he does not seem to trust TIM's motives); I see no reason why EDI could not do the same.

#99
Guest_Maviarab_*
  • Guests

OK, the tangential slash of that question finally drove home your objection to EDI being AI into my thick skull. It is the servant role that she is in in-game as well, right?


Yes, that is a large part of it.

What you describe as the servant companion bot seems to lack free will. Might be hard to believe, but I don't actually get on with people very well. But I do not foresee myself indulging in such a critter, as it goes against my belief that any meaningful relationship is earned. I would just bet that this type of critter would become very popular, as people will take the easy way out to get that positive reinforcement craved by so many people.


Yes, that type of robot would not be an AI, in the sense that it would be capable of decisions but, yes, have no free will. A pure robot, for lack of any other word, and yes they would be popular (worrying or not is unimportant right now), and yes, it would be an 'easy' way out, I agree (even though I would consider one, for companionship alone; I'd rather clean my own house if being honest lmao).

For what I foresee, I get 2 outcomes out of this equation. On one hand, there could be some event that could cause self-awareness, and with the majority of humanity being fat, lazy, and totally dependent on their toys, that would have humanity hearing a lot of 'Hasta la vista, baby'.



Second outcome: if there really is no problem with how these things work and most people are happy with their virtual servants, it would serve to allow a small group of self-sufficient people to flourish while the rest die off. I just hope the society that evolves out of it is not too Randian. That would be depressing too.


Neither is particularly anything to look forward to, are they? lmao

Back to EDI: before being unshackled, she would not have questioned her role. Yeah yeah, plottastic handwaving. After she is unshackled (speculating here), maybe she was too mission-focused to get into a deep philosophical debate with her crew about how she feels providing air and heat to people without them asking her thoughts on the matter. Maybe between ME2 and ME3 she will have a heart-to-chip with Joker and Shep about it.



Shepard compromises his moral compass to work with Cerberus (Renegade or Paragon, he does not seem to trust TIM's motives); I see no reason why EDI could not do the same.


Very possible, I will agree, though again, imo, there is no real evidence to support it either way; just my own observations led to my opinion.



It'll certainly be interesting, as you said earlier, to see what they do with her in ME3 :)

#100
jklinders
  • Members
  • 502 posts

Maviarab wrote...

OK, the tangential slash of that question finally drove home your objection to EDI being AI into my thick skull. It is the servant role that she is in in-game as well, right?

Yes, that is a large part of it.

What you describe as the servant companion bot seems to lack free will. Might be hard to believe, but I don't actually get on with people very well. But I do not foresee myself indulging in such a critter, as it goes against my belief that any meaningful relationship is earned. I would just bet that this type of critter would become very popular, as people will take the easy way out to get that positive reinforcement craved by so many people.

Yes, that type of robot would not be an AI, in the sense that it would be capable of decisions but, yes, have no free will. A pure robot, for lack of any other word, and yes they would be popular (worrying or not is unimportant right now), and yes, it would be an 'easy' way out, I agree (even though I would consider one, for companionship alone; I'd rather clean my own house if being honest lmao).

For what I foresee, I get 2 outcomes out of this equation. On one hand, there could be some event that could cause self-awareness, and with the majority of humanity being fat, lazy, and totally dependent on their toys, that would have humanity hearing a lot of 'Hasta la vista, baby'.

Second outcome: if there really is no problem with how these things work and most people are happy with their virtual servants, it would serve to allow a small group of self-sufficient people to flourish while the rest die off. I just hope the society that evolves out of it is not too Randian. That would be depressing too.

Neither is particularly anything to look forward to, are they? lmao

Back to EDI: before being unshackled, she would not have questioned her role. Yeah yeah, plottastic handwaving. After she is unshackled (speculating here), maybe she was too mission-focused to get into a deep philosophical debate with her crew about how she feels providing air and heat to people without them asking her thoughts on the matter. Maybe between ME2 and ME3 she will have a heart-to-chip with Joker and Shep about it.

Shepard compromises his moral compass to work with Cerberus (Renegade or Paragon, he does not seem to trust TIM's motives); I see no reason why EDI could not do the same.

Very possible, I will agree, though again, imo, there is no real evidence to support it either way; just my own observations led to my opinion.

It'll certainly be interesting, as you said earlier, to see what they do with her in ME3 :)


I am no fortune teller but on one point I consder myself endowed with a very clear image of the future. Bioware will not address any of the deeper social mplications of AI in the next game. Fo rmore info on why, please look if you will at the 2500+ page monstrosity that is the Tali thread.:devil: