CrutchCricket wrote...
I think your definitions are still being too exacting and not giving enough credit to AI development. AIs are more than just machines or software; they are actual lifeforms, albeit synthetic ones. Obvious examples besides EDI are the geth (though they would be a more interesting case to study in a debate like this due to how they function). No way to process emotions? Organic lifeforms are largely mechanical in the sense that most of what comprises us is a physical system, subject to physical laws. AIs are nothing but processing power. The ability to feel, to think, is done with only what, 20-30% of our being? An entity that is purely mind, or purely software, could potentially process emotions with the entirety of its being; really, AIs should be able to experience emotions far more deeply than any organic. See Legion's speech about hardware vs. software. There are many routes I could argue this (including a philosophical examination of perception: Berkeley, anyone?), but I don't have the time right now for all of them.
I also wouldn't take Mordin's quote too literally. Remember, he Sherlock-scans everything and does it out loud. "Simulated emotional inflections" is only an inference in the middle of his train of thought; it's not his conclusion. He has no reason to presuppose an AI and more reason to assume a VI. And recall Maelon: "You always had trouble dealing with facts that didn't fit with your theories." Mordin, it seems, sometimes makes the cardinal mistake Holmes always warns against: constructing a theory before you have all the facts. Even setting all that aside, Mordin may not think much of AI, and he might carry a biological bias, which many people do.
You, sir, are a worthy opponent! En garde! (Haha, I jest, I jest.)
But no, seriously...we could get into that perception debate and be there FOREVER. I'd rather not go into it, either, but I will say that just because something was programmed to act a certain way so that we'd perceive it that way doesn't mean it's actually "feeling" anything. EDI was most likely programmed to simulate emotion so that we could connect to her as human beings...but was she programmed with actual emotions? I don't think so, mostly because I don't even know if it's possible for an AI to feel anything in the first place. Having emotions is something associated solely with the human condition, a result of our evolution out of a dangerous existence. Then again, that's yet another debate.
I will lend credence to your argument that AIs are capable of being so much more than slaves to their programming, and that they have a lot of processing power compared to humans. However, we humans were built, or rather evolved, to have emotion. AIs, not necessarily. Just because an AI might have the processing power and capability to understand or even have emotion doesn't mean it has the necessary programming (or hardware) to do so. I don't think AIs "naturally" have emotions...it's kind of counter-intuitive. Intelligence does not necessarily mean the capacity or ability to process emotion (my ex-boyfriend would be a perfect example <_<). It's kind of like my graphics card: it has the processing power to run ME2, in theory, but the minute ME2 starts up on my computer, it dies. Why? Because the game wasn't designed to run on an Intel graphics card, which is what mine is. My card is integrated, so I can't replace it without damaging the rest of the system, which means I can't make my computer able to play ME2. So, even if we suspend our disbelief here and say that we could give AIs legitimate emotions, not just make other people think they have them...would we be able to give them that gift? Would they know what to do with it?
Case in point: the emotion chip from Star Trek: The Next Generation. It's a device designed to let Dr. Soong's androids process and legitimately have emotions...whereas they might have smiled before to give the illusion they were happy, they can now smile because they are genuinely happy. When we first see it, Dr. Soong, Data's creator, is trying to give it to Data rather than to Data's twin, Lore, because Lore's a sadistic piece of work and Soong thinks Data will use it better. However, Lore gets the chip through his usual subterfuge, and it basically drives him mad. Because of this, Data isn't sure whether he should have the ability to feel emotions...whether it would be too much for him. When he does decide to install it, it causes him some problems. Eventually, he integrates it into his systems, but he rarely uses it. Why? Well, we're never given a reason, but seeing as how the Borg Queen was so easily able to manipulate him through his emotions in Star Trek: First Contact, he probably decided it was far more advantageous not to use them. We see almost no mention of his emotion chip in the two movies following First Contact, and the fact that he was so easily undermined because of his emotions in that movie is my leading theory as to why we basically never saw him "switch on" his emotions again. Does that mean he never did? No. But he did use it a lot less...we'd have seen him use it again if he'd resumed using it as he did in Generations and First Contact.
Another reason why AIs are so endearing in the sci-fi world is that they don't really understand organic species, because they legitimately cannot feel emotion, and therefore don't understand why we find certain things humorous, or why we cry out of joy instead of strictly out of sadness...that sort of thing.
If I made a computer as smart as I could, and even gave it the ability to think for itself and evolve...could I come to consider it a friend? Sure. But, paranoid as I am, I won't discount the possibility that it could betray me if it thought doing so was logical. However, I would expect it to develop a certain sense of loyalty, purely because I would become comfortable and predictable to it. But any "emotion" I programmed into it would merely be a mimicry of what organic species are capable of...we have a unique chemistry that drives what we do emotionally - hormones are one such example - that AIs simply lack. Could we re-create that? Could AIs learn to re-create it and possibly be able to legitimately feel things and be human a la the Cylons of Battlestar Galactica? Sure. I don't discount that possibility in the least. But here's the thing...I don't think human engineering as of the 22nd century is advanced enough (reverse-engineered Reaper tech is still reverse-engineered by humans) for us to truly play God and create AIs that are emotionally, mentally, physically, and psychologically on par with humans, or even beyond that. You say I don't give enough credit to AI development; I say you give too much. It's a technology with wildly complex implications...I don't think we can pretend to know every variable.
Then again, is the term "AI psychology" a paradox? Perhaps. If BioWare fleshes EDI out, I sure hope they address that one.
Edit: You're very welcome, HolyAvenger. You deserve it!
Edited by CDRSkyShepard, 30 December 2011 - 05:26.











