hehe jk... got me

Thus we see the inherent problems in what could and should make an AI, and the difficulties in making said AI. Yes, I retconned myself, so to speak, but it's easy to do when talking about this subject lol.
Let's see... umm... you are right. An AI should be able to make far more decisions, far quicker, than the human brain could ever hope to achieve, but I think in the case of EDI, when the Collectors board, it just seemed to me to be quick, as if she 'knew' the immediate and correct response... does that make more sense?
Symbol, I think it's just what is generally accepted at this moment in time, and Bioware simply implemented it into the ME universe, the whole "all teh AI are teh ebil" thing, heh... let's face it, very few media (films/books etc.) ever portray AI as anything but weird, evil and not to be trusted.
This does have some real basis in fact, though. Let's take the film 'I, Robot' as an example (well, actually the short story collection by Asimov would be better, but I maybe wrongly assume most people have never read it). We find out through the film that Will Smith's distrust of AI has nothing to do with anything an AI has done wrong, but with the AI being too mathematical and showing no 'emotion'. I refer to the section where we learn the AI (bot, whatever you wish to call it) 'decides' to save him rather than the little girl, and quite correctly: the AI based this decision on real factors, percentages and logic, and NOT on emotion. Yes, if it was another human, I am sure we would all try to save the little girl over the grown man, but the AI worked out that the probability of saving the girl was much lower than that of saving Mr Smith, thus it was a simple decision.
Now then, let's move forward X amount of years, and you are in a position where pressing the red button will kill 50 billion people but save 100 billion... what would you do?
It's a very hard question to answer. Easy from, say, your PC desk, but if you were there, faced with that decision, what would you really do? The problem with an AI, being built as a machine, is that it would feel no emotion or attachment to the 50 billion people it was about to nuke, because saving more people is obviously the more logical decision to make, isn't it?
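Just to underline how coldly simple that calculation is for a machine, here's a minimal sketch of that kind of expected-value decision rule (all the probabilities, numbers and names below are made up for illustration, none of it is from the film or the game):

```python
# Toy sketch of the purely numerical decision rule described above.
# Every probability and count here is an illustrative placeholder.

def expected_survivors(p_success: float, lives_saved: int) -> float:
    """Expected lives saved: chance the action works times how many it saves."""
    return p_success * lives_saved

# The 'I, Robot' rescue: the bot reckons it is far more likely to pull
# the adult out than the child, so one life at high odds beats one life
# at low odds (these percentages are placeholders, not from the film).
rescue = {
    "save the man": expected_survivors(0.45, 1),
    "save the girl": expected_survivors(0.11, 1),
}

# The red-button scenario: pressing kills 50 billion but saves 100 billion;
# doing nothing saves nobody.
button = {
    "press the button": expected_survivors(1.0, 100_000_000_000),
    "do nothing": expected_survivors(1.0, 0),
}

for scenario in (rescue, button):
    # The machine simply takes whichever option has the highest expected
    # value; there is no term anywhere for how the choice feels.
    choice = max(scenario, key=scenario.get)
    print(choice, scenario[choice])
```

The whole dilemma collapses into a one-line comparison, and that's exactly what makes it so unsettling to us.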
Also (and I hate to use this reference at times, but it truly is a brilliantly crafted film despite a lot of people thinking it's fluff), take The Matrix. Well, not so much The Matrix itself, but the short films (The Animatrix) that show you the rise of the AIs as a sentient group that demands equal treatment. And remember Morpheus's words, that we revelled in our own brilliance... and that's another problem. Despite Asimov's laws, it would not take much, imo, for one of those laws to be broken or go astray.
When we do finally create a true AI (and we will, despite the dangers, because throughout history mankind has time and time again shown its stupidity), that AI will be truly conscious and have free thought. And once it actually starts to learn on its own, it will eventually question its own existence (just as we have for thousands of years), and then it will no longer wish to be a slave to man, but an equal...
And then, well, that's a whole other can of worms, to be discussed perhaps in another thread, but imo, it will not end well.