That's not sentience at all. It's a couple of poorly constructed false equivalences. The degree to which EDI's and the geth's "preferences" are even their own is nonexistent. They're programmed directives, nothing more. They lack the sensory hardware necessary to achieve consciousness.
What do you base this argument on? If that were the case, they would be classified within the ME universe as VIs, not AIs. Being an AI is, by definition in the ME universe, the result of a dynamic, self-adapting system. Besides, how are our preferences and pleasures not at least partially hard-coded into our own cognitive systems? I am saying you are yet again constructing a false distinction.
Oh, and by the way, of course EDI as well as the geth have sensory hardware. I don't know how anyone could deny that.
Not really. Their "bodies" are hardware tools; the relationship is not fundamentally different from the one a human has with their hammer or wrench.
Their bodies are adaptable and dynamic. So are ours. So what do you base the difference on specifically?
EDIT: Ah, sorry, I think I misunderstood you there. Well, yes, for individual platforms of the geth, this might be true. Still, the program needs to run on hardware that needs to be maintained, so they do have a connection to the real world (and even if they did not, that wouldn't really change much). Sure, their relation to their hardware may be different from ours to our body (I actually acknowledged this already in that last post), but that doesn't preclude them being sentient or sapient.
In EDI's case, for example (who - at least before the weirdness in the Citadel DLC - couldn't exist without her computer core/blue box on the Normandy), the ship was her body (she even states this), so I don't really see that much of a distinction there.
The non-existence of the flying spaghetti monster is also not provable; that doesn't mean the concept isn't completely idiotic according to all modern science and unworthy of any serious consideration. Unless you can come up with another structural means of attaining consciousness, the idea of toaster personhood will remain in the realm of fantasy magic, akin to Tolkien's sentient trees.
You are being hyperbolic, but just to indulge you: if the ME universe had a giant spaghetti monster, we could discuss the moral implications of it (in the hypothetical context of the ME universe, of course). However, it does not; it has AIs, which is what we are discussing. The point is, we are discussing the implications of a hypothetical scenario as described in a work of science fiction. If you want to counter by saying "minds without bodies can't have their own motivations", then I can ask "how do you know?", especially if the discussed work stipulates that they do.
An irrelevant point, because the type of software code structure used has little to do with sentience and consciousness. If there is no necessary interaction between the brain and the body, there is no consciousness.
No, it has everything to do with it. Our brains have the capability to adapt and function in a way that we perceive as sentience mainly because of their incredible potential for neuronal plasticity. Do you think it just pops into existence? Self-adaptive code (probably something like a software-simulated neural network) would therefore be an absolute requirement. It would be horribly inefficient (which may not be an issue given powerful enough hardware), but not beyond possibility. I am sorry if you cannot imagine the concept, but almost all research in the area indicates the theoretical possibility. I recommend this book for further reading (ha, had no idea this was online).
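To make "self-adaptive code" a bit more concrete, here is a toy sketch of my own (purely illustrative, not anything from the games and not a claim about how a real AI would be built): a tiny software-simulated neural network whose behaviour isn't written out rule by rule, but emerges as its connection weights change with experience, a very crude analogue of plasticity.

```python
# Toy illustration: the "rules" of behaviour are never written down explicitly.
# Only the update mechanism is coded; the XOR behaviour is learned from examples.
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs and targets: a function that cannot be expressed by a single fixed linear rule.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden "wiring"
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output "wiring"

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    # forward pass: the system's current behaviour, given its current wiring
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # backward pass: the error (experience) reshapes the wiring itself
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out; b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h;   b1 -= 0.5 * grad_h.sum(axis=0)

print(np.round(out, 2))  # typically converges toward [[0], [1], [1], [0]]
```

Obviously this is many orders of magnitude away from anything like a brain, but the structural point stands: nothing in that code says "answer 1 when exactly one input is 1"; that behaviour is acquired, not hard-coded.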
The ME universe actually makes hardly any non-arbitrary distinction between the two. It frequently refers to the geth as both, for instance.
As I said, it's true that the writing is sometimes inconsistent. However, I have never encountered the geth being classified as a VI; please show a quote.
Tali's description of the geth is technobabble nonsense, and she conflates sapience and sentience every other word. Trying to make sense of any of that garbage is about as meaningful as discussing the scientific plausibility of Asimov's positronic brain.
Technobabble doesn't necessarily mean nonsense; if done well, it can actually make sense. In Tali's case (and only in ME1), I'd say it's half and half. It doesn't make sense entirely, but she admits that she is oversimplifying, and with a few very sensible assumptions, I think one can make it sensible (but that's another topic). In any case, as I said, the specifics of ME's shortcomings in consistency do not really impact the broader argument as far as I can see.
I have yet to see a coherent, scientifically supported argument for the creation of AI that includes more than just positive thinking. I suppose time will tell.
(if you are familiar with something like this, I would be interested in reading it)
I completely agree that creating a true AI could be very dangerous, and I am not advocating for doing it just for the sake of doing it. However, I think a lot of people have a very flawed understanding of how the creation of an actual AI would probably take place. It seems to me that most people - the ME authors included - think that it will just pop up, being a run-of-the-mill computer one day and a superintelligence the next. This is very likely not going to be the case (check out the link above). A true AI would not be "coded". The code would just lay down the framework, the potential if you will, to learn and build onto that. Everything else, an AI would still have to learn, just like a human would. Early versions would probably be severely limited by both software and hardware. This is not something that will happen overnight, and if we ever get that far, there will probably be a lot of public discussion of what counts as sentience/sapience, where we draw the threshold for applying our different moral standards, etc. I predict it would/will be a very interesting time.
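To illustrate what I mean by "the code only lays down the framework", here is another toy example of mine (entirely hypothetical, not taken from any real AI project): the program below never specifies which option to prefer. It only defines how to explore and how to update estimates from experience; the eventual "preference" is learned from interaction, not written in by the programmer.

```python
# Toy sketch: the framework (explore, then update estimates from rewards) is coded;
# the resulting preference is not.
import random

random.seed(42)

# Hidden environment the agent knows nothing about: average payoff of each option.
true_payoffs = [0.2, 0.8, 0.5]

estimates = [0.0] * len(true_payoffs)   # the agent's learned values, blank at the start
counts = [0] * len(true_payoffs)

for _ in range(5000):
    # framework: occasionally explore, otherwise exploit the current estimates
    if random.random() < 0.1:
        choice = random.randrange(len(true_payoffs))
    else:
        choice = max(range(len(true_payoffs)), key=lambda i: estimates[i])

    reward = 1.0 if random.random() < true_payoffs[choice] else 0.0

    # framework: incremental averaging, the only behaviour actually "coded"
    counts[choice] += 1
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print([round(e, 2) for e in estimates])  # the learned "preference" for option 1 emerges from experience
```

Again, a trivial example, but it shows the division of labour I mean: the programmer supplies the capacity to learn, the environment supplies everything the system ends up knowing or wanting.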
But we were not really talking about that before; we were talking in the hypothetical context of the ME universe, where AIs are a fact of life already.