wiggles89 wrote...
Nightwriter wrote...
Most of life requires you to make assumptions in order to function.
You must assume, reasonably, that other people exist, and that they are human, and sentient.
Take it up with Searle's standard, not me.
Everyone should assume it. Solipsism is lame.
wiggles89 wrote...
I've never claimed that it could. The argument states that if an A.I. behaves as though it understands what it's saying, that doesn't mean it actually does. What I have argued is that this is too high a standard, because when applied to humans we can't be sure, in many cases, that humans truly understand anything.
A human being can tell you they understand it. They can say they understand calculus and then perform calculus to prove it. They can talk about calculus and answer infinite questions about calculus. They can tell you how they feel about calculus.
What you're talking about is whether or not all this proves they're sentient, since understanding is the function of a sentient mind. I think it does.
wiggles89 wrote...
Once again, that's why I said under most circumstances we cannot identify a simulation. If the A.I. is incapable of understanding jokes then we know it's incapable of understanding jokes. Ok, got that. But does that say anything about its ability to truly understand anything else, or its ability to have subjective experiences? I don't think it does.
Jokes are a gateway example. If we discover the computer cannot understand jokes, we will begin to think of other things it may not understand that might prove it's a computer.
As it continues to fail, we realize it's a computer, thereby invalidating every valid answer it gave before and revealing it never knew what it was talking about even when it looked like it did.
Once you see it doesn't understand jokes, you realize it's a computer, and then realize it doesn't understand anything.
wiggles89 wrote...
The issue isn't whether the A.I. can resemble a human; the issue is whether it truly understands what I'm saying, or what it itself is saying.
Its ability to resemble a human convincingly is what supposedly proves it understands what you're talking about.
wiggles89 wrote...
Jokes, as I've shown, are irrelevant. So, once again, I would like you to tell me how, given Searle's standard, I can truly verify that my friend understands what I'm saying because he's human. To make it easier, let's set a proposition:
I propose to you that dogs are better than cats.
He then goes on to disagree, arguing that cats are better than dogs. How can I tell, given Searle's standard, that he truly understands anything he's saying?
... Computers don't have opinions?
They can simulate opinion but it's never the same.
For instance, as an internet debater you know that when two people fight online the conversation will become continually more hostile as each side becomes gradually more irritated with the other. Inevitably both sides will begin questioning the intelligence of the other, becoming offended, responding more angrily. This is an example of a pattern of human behavior that a computer could never simulate effectively.
Ask a computer: what was your first crush? How did you feel about it? What do you think happens after death? Draw me a picture of what hope looks like to you and email it to me.
Edited by Nightwriter, 28 July 2010 - 07:18.