
The geth


452 replies to this topic

#401
Giggles_Manically
  • Members
  • 13 708 messages
Can a robot compose a symphony? Can it turn a blank canvas into a beautiful masterpiece?
Sonny: Can you?
- I, Robot

Since we have only a limited idea of how our own brain works and how it processes information, it's a waste of time to say this is intelligence and that is an imitation of it.

Philosophy has argued this for thousands of years, and we still don't know.

If you say that a geth is the result of clever programming, someone can just as easily say you are the result of clever instincts.

Edited by Giggles_Manically, 28 July 2010 - 01:13.


#402
Guest_wiggles_*
  • Guests

Nightwriter wrote...
But the experiment relies upon them being blind. They don't know they might be talking to a computer.

If they did, they would go back and grill the computer, subjecting it to a more rigorous test, until its true limitations became apparent. I believe that eventually, you will realize you're talking to a computer.


That's why I said we cannot identify a simulation under most circumstances. Will some crack under the pressure? Sure. But what if it doesn't crack? I've still got no reason, given Searle's standard, to believe anything I encounter truly understands what I'm saying. These things just might be really, really sophisticated.

If you went back and grilled your friend in the same way, he would pass this test, because he is human.


Using Searle's standard, my friend could pass the test because he's human how, exactly?
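
To make the blind setup being debated here concrete, here is a minimal sketch in Python (my own illustration; the judge, human, and machine objects and their ask/respond/guess methods are hypothetical stand-ins, not any real API). The one structural point it captures is that the judge never sees which respondent is which.

import random

def imitation_game(judge, human, machine, rounds=5):
    """Blind imitation game: the judge questions two unlabeled
    respondents and must guess which one is the machine."""
    pair = [("human", human), ("machine", machine)]
    random.shuffle(pair)  # hide the labels; the judge sees only A and B

    transcript = []
    for _ in range(rounds):
        question = judge.ask(transcript)
        answers = {label: respondent.respond(question)
                   for label, (_, respondent) in zip("AB", pair)}
        transcript.append((question, answers))

    guess = judge.guess(transcript)  # "A" or "B"
    actual = "A" if pair[0][0] == "machine" else "B"
    return guess == actual  # True if the machine was caught

The grilling Nightwriter describes amounts to raising rounds and sharpening judge.ask until the machine's limits show, if they ever do.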

#403
Nightwriter
  • Members
  • 9 800 messages

wiggles89 wrote...

That's why I said we cannot identify a simulation under most circumstances. Will some crack under the pressure? Sure. But what if it doesn't crack? I've still got no reason, given Searle's standard, to believe anything I encounter truly understands what I'm saying. These things just might be really, really sophisticated.


Most of life requires you to make assumptions in order to function.

You must assume, reasonably, that other people exist, and that they are human, and sentient.

wiggles89 wrote...

Using Searle's standard, my friend could pass the test because he's human how, exactly?


There are certain things a computer will never be able to do. I submit that a computer will never be able to fool a human being indefinitely.

For instance, during one of these computer/person talk scenarios, let's say the person makes the unfortunate move of telling the computer a joke.

This is when things start to totally unravel.

A computer can answer almost any question you might throw at it, simple as well as complex, but if there is one thing a computer can't do, it's understand a joke.

The computer will not be able to simulate the proper reaction a human being would expect here. It can't laugh or tell you why the joke is funny. Laughter is a totally unexplainable human phenomenon.

Your friend, however - he is human. He would most certainly react as a human. He would laugh. Even if he didn't laugh, he'd be able to tell you why the joke wasn't funny - another thing the computer couldn't do.

#404
Guest_Shandepared_*
  • Guests

Nightwriter wrote...

If you destroyed a geth server hub, or took a great chunk out of the hub, that geth's software would be permanently damaged.


You can transmit that software anywhere, though; you can't transmit a human mind.

#405
scotchtape622
  • Members
  • 266 messages

Nightwriter wrote...

wiggles89 wrote...

That's why I said we cannot identify a simulation under most circumstances. Will some crack under the pressure? Sure. But what if it doesn't crack? I've still got no reason, given Searle's standard, to believe anything I encounter truly understands what I'm saying. These things just might be really, really sophisticated.


Most of life requires you to make assumptions in order to function.

You must assume, reasonably, that other people exist, and that they are human, and sentient.

wiggles89 wrote...

Using Searle's standard, my friend could pass the test because he's human how, exactly?


There are certain things a computer will never be able to do. I submit that a computer will never be able to fool a human being indefinitely.

For instance, during one of these computer/person talk scenarios, let's say the person makes the unfortunate move of telling the computer a joke.

This is when things start to totally unravel.

A computer can answer almost any question you might throw at it, simple as well as complex, but if there is one thing a computer can't do, it's understand a joke.

The computer will not be able to simulate the proper reaction a human being would expect here. It can't laugh or tell you why the joke is funny. Laughter is a totally unexplainable human phenomenon.

Your friend, however - he is human. He would most certainly react as a human. He would laugh. Even if he didn't laugh, he'd be able to tell you why the joke wasn't funny - another thing the computer couldn't do.

EDI? A computer can tell/understand jokes if it is programmed to do so.

#406
Guest_Shandepared_*
  • Guests

scotchtape622 wrote...

EDI? A computer can tell/understand jokes if it is programmed to do so.


Can it? You can program a computer to display anger as well, but does it actually feel the emotion of anger?

#407
scotchtape622
  • Members
  • 266 messages
I'm not sure, but you said that it wouldn't be able to simulate the proper reaction, which it could.

#408
Inverness Moon
  • Members
  • 1 721 messages

Shandepared wrote...

scotchtape622 wrote...

EDI? A computer can tell/understand jokes if it is programmed to do so.


Can it? You can program a computer to display anger as well, but does it actually feel the emotion of anger?

A computer does not exist in the same way that an organic would. The obvious answer is no. And with that I'll add that the idea that you're superior because you perceive electrical signals differently than a computer does is prejudiced, at the very least.

Edited by Inverness Moon, 28 July 2010 - 04:55.


#409
Legion 2.5
  • Members
  • 1 005 messages
The Geth rule, and Legion's Geth should ally with the Quarians to counter the Heretics.

#410
Guest_Shandepared_*
  • Guests

Inverness Moon wrote...

A computer does not exist in the same way that an organic would. The obvious answer is no. And with that I'll add that the idea that you're superior because you perceive electrical signals differently than a computer does is prejudiced, at the very least.


Where are you getting this vibe that I consider myself "superior" to a computer in the same fashion that a racist white man considers himself superior to a ******? I have never said or implied any such thing. In fact, I'm pretty sure we've been over this before.

I am not morally "superior" to a geth because of the way I perceive electrical signals; I am different from a geth, possibly because my hardware allows me to feel and think in ways that a geth cannot. Possibly because my organic hardware allows me to genuinely possess a mind, whereas a geth, being pure software, cannot.

You keep insisting that I shouldn't make assumptions whilst making wild assumptions yourself. You insist that what hasn't been done or proven should be considered true. You're a believer; I'm a skeptic. You won't even consider the idea that our minds may be the result of our organic brains, and that without these specific structures, put together in this specific way, consciousness and self-awareness may not be possible.

You consider an exact simulation to be the real thing; I don't.

#411
Sajuro
  • Members
  • 6 871 messages

Shandepared wrote...

I am not morally "superior" to a geth because of the way I perceive electrical signals; I am different from a geth, possibly because my hardware allows me to feel and think in ways that a geth cannot. Possibly because my organic hardware allows me to genuinely possess a mind, whereas a geth, being pure software, cannot.

What is a mind, Shand? At what point does life and sentience go from being squishy on the inside to something more abstract? The way I understand it, True Artificial Intelligence could develop an understanding of the spiritual even if it did not make sense at first.

#412
MadCat221
  • Members
  • 2 330 messages

Shandepared wrote...

Inverness Moon wrote...

A computer does not exist in the same way that an organic would. The obvious answer is no. And with that I'll add that the idea that you're superior because you perceive electrical signals differently than a computer does is prejudiced, at the very least.


Where are you getting this vibe that I consider myself "superior" to a computer in the same fashion that a racist white man considers himself superior to a ******? I have never said or implied any such thing. In fact, I'm pretty sure we've been over this before.

I am not morally "superior" to a geth because of the way I perceive electrical signals; I am different from a geth, possibly because my hardware allows me to feel and think in ways that a geth cannot. Possibly because my organic hardware allows me to genuinely possess a mind, whereas a geth, being pure software, cannot.

You keep insisting that I shouldn't make assumptions whilst making wild assumptions yourself. You insist that what hasn't been done or proven should be considered true. You're a believer; I'm a skeptic. You won't even consider the idea that our minds may be the result of our organic brains, and that without these specific structures, put together in this specific way, consciousness and self-awareness may not be possible.

You consider an exact simulation to be the real thing; I don't.


Sounds like you aren't considering the possibility that an organic brain isn't the only way to achieve awareness.

Bolded emphasis on hypocritical statements is mine.

Edited by MadCat221, 28 July 2010 - 05:58.


#413
V0luS_R0cKs7aR
  • Members
  • 231 messages

Shandepared wrote...

I am not morally "superior" to a geth because of the way I perceive electrical signals; I am different from a geth, possibly because my hardware allows me to feel and think in ways that a geth cannot. Possibly because my organic hardware allows me to genuinely possess a mind, whereas a geth, being pure software, cannot.


What is a mind? I believe I asked you this very same question...how many pages ago?


Shandepared wrote...
You keep insisting that I shouldn't make assumptions whilst making wild assumptions yourself. You insist that what hasn't been done or proven should be considered true. You're a believer; I'm a skeptic. You won't even consider the idea that our minds may be the result of our organic brains, and that without these specific structures, put together in this specific way, consciousness and self-awareness may not be possible.


Again, there is nothing in science that indicates that organic brains have to have specific structures put together in a specific way to achieve consciousness and self-awareness. There is nothing supporting that notion; scientifically it's actually more likely that alien organic "brains" (or whatever functionally equivalent alien organ) are completely different in every respect from our brains while still possessing the "mind" you speak of.

You implicitly make the assumption that the brain of an asari or a krogan (or Klingon, Romulan, etc.) would be biologically or anatomically similar to a human's. For a self-proclaimed skeptic, this is a pretty wild assumption.

Edited by V0luS_R0cKs7aR, 28 July 2010 - 06:19.


#414
Guest_wiggles_*
  • Guests

Nightwriter wrote...

Most of life requires you to make assumptions in order to function.

You must assume, reasonably, that other people exist, and that they are human, and sentient.


Take it up with Searle's standard, not me.

There are certain things a computer will never be able to do. I submit that a computer will never be able to fool a human being indefinitely.


I've never claimed that it could. The argument states that if an A.I. behaves as though it's understanding what it's saying, that doesn't mean that it is. What I have argued is that this is too high a standard, because when applied to humans we can't be sure, in many cases, that humans truly understand anything.

For instance, during one of these computer/person talk scenarios, let's say the person makes the unfortunate move of telling the computer a joke.

This is when things start to totally unravel.

A computer can answer almost any question you might throw at it, simple as well as complex, but if there is one thing a computer can't do, it's understand a joke.


Once again, that's why I said under most circumstances we cannot identify a simulation. If the A.I. is incapable of understanding jokes, then we know it's incapable of understanding jokes. Ok, got that. But does that say anything about it truly being able to understand anything else, or its ability to have subjective experiences? I don't think it does.

Your friend, however - he is human. He would most certainly react as a human. He would laugh. Even if he didn't laugh, he'd be able to tell you why the joke wasn't funny - another thing the computer couldn't do.


The issue isn't whether the A.I. can resemble a human; the issue is whether it truly understands what I or it is talking about. Jokes, as I've shown, are irrelevant. So, once again, I would like you to tell me how, given Searle's standard, I can truly verify that my friend understands what I'm saying because he's human. To make it easier, let's set a proposition:

I propose to you that dogs are better than cats.

He then goes on to disagree, arguing that cats are better than dogs. How can I tell, given Searle's standard, that he truly understands anything he's saying?

Edited by wiggles89, 28 July 2010 - 06:18.


#415
Nightwriter
  • Members
  • 9 800 messages

wiggles89 wrote...

Nightwriter wrote...

Most of life requires you to make assumptions in order to function.

You must assume, reasonably, that other people exist, and that they are human, and sentient.


Take it up with Searle's standard, not me.


Everyone should assume it. Solipsism is lame.

wiggles89 wrote...

I've never claimed that it could. The argument states that if an A.I. behaves as though it's understanding what it's saying, that doesn't mean that it is. What I have argued is that this is too high a standard, because when applied to humans we can't be sure, in many cases, that humans truly understand anything.


A human being can tell you they understand it. They can say they understand calculus and then perform calculus to prove it. They can talk about calculus and answer infinite questions about calculus. They can tell you how they feel about calculus.

What you're talking about is whether or not all this proves they're sentient, since understanding is the function of a sentient mind. I think it does.

wiggles89 wrote...

Once again, that's why I said under most circumstances we cannot identify a simulation. If the A.I. is incapable of understanding jokes, then we know it's incapable of understanding jokes. Ok, got that. But does that say anything about it truly being able to understand anything else, or its ability to have subjective experiences? I don't think it does.


Jokes are a gateway example. If we discover the computer cannot understand jokes, we will begin to think of other things it may not understand that might prove it's a computer.

As it continues to fail, we realize it's a computer, thereby invalidating every valid answer it gave before and revealing it never knew what it was talking about even when it looked like it did.

Once you see it doesn't understand jokes, you realize it's a computer, and then realize it doesn't understand anything.

wiggles89 wrote...

The issue isn't whether the A.I. can resemble a human; the issue is whether it truly understands what I or it is talking about.


Its ability to resemble a human convincingly is what supposedly proves it understands what you're talking about.

wiggles89 wrote...

Jokes, as I've shown, are irrelevant. So, once again, I would like you to tell me how, given Searle's standard, I can truly verify that my friend understands what I'm saying because he's human. To make it easier, let's set a proposition:

I propose to you that dogs are better than cats.

He then goes on to disagree, arguing that cats are better than dogs. How can I tell, given Searle's standard, that he truly understands anything he's saying?


... Computers don't have opinions?

They can simulate opinion but it's never the same.

For instance, as an internet debater you know that when two people fight online the conversation will become continually more hostile as each side becomes gradually more irritated with the other. Inevitably both sides will begin questioning the intelligence of the other, becoming offended, responding more angrily. This is an example of a pattern of human behavior that a computer could never simulate effectively.

Ask a computer: what was your first crush? How did you feel about it? What do you think happens after death? Draw me a picture of what hope looks like to you and email it to me.

Edited by Nightwriter, 28 July 2010 - 07:18.


#416
Inverness Moon
  • Members
  • 1 721 messages

Shandepared wrote...

Inverness Moon wrote...

A computer does not exist in the same way that an organic would. The obvious answer is no. And with that I'll add that the idea that you're superior because you perceive electrical signals differently than a computer does is prejudiced, at the very least.


Where are you getting this vibe that I consider myself "superior" to a computer in the same fashion that a racist white man considers himself superior to a ******? I have never said or implied any such thing. In fact, I'm pretty sure we've been over this before.

You've implied it up and down this whole thread and the rest of the forum more than once. :P

Shandepared wrote...

I am not morally "superior" to a geth because of the way I perceive electrical signals; I am different from a geth, possibly because my hardware allows me to feel and think in ways that a geth cannot. Possibly because my organic hardware allows me to genuinely possess a mind, whereas a geth, being pure software, cannot.

And here is the example. You claim a being of pure software would not genuinely possess a mind, yet you have nothing to back that up other than that such a being does not exist as an organic does. That claim is also ambiguous because of your definition of mind, which seems to be lacking, from what I've observed.

Shandepared wrote...

You keep insisting that I shouldn't make assumptions whilst making wild assumptions yourself. You insist that what hasn't been done or proven should be considered true. You're a believer; I'm a skeptic. You won't even consider the idea that our minds may be the result of our organic brains, and that without these specific structures, put together in this specific way, consciousness and self-awareness may not be possible.

Our minds are made of billions of cells and their exponentially complex interactions. If we can understand how those cells work, we can program a computer to do the same. It's not difficult.

Neither you nor anyone else has yet explained to me what our brains can do that a computer can't be programmed to do. In actuality, the very fact that our brains are made of cells means that if you can simulate the behavior of individual cells, you would be able to recreate the brain, given enough cells in the right positions doing the right things.

This concept seems simple enough to me.
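
As a purely illustrative sketch of that "simulate the cells" claim, here is a single leaky integrate-and-fire neuron in Python, one of the simplest textbook cell models; the parameters are made up for illustration, and a real brain simulation would need billions of far richer units plus their connections.

def step_neuron(v, input_current, leak=0.1, threshold=1.0, dt=1.0):
    """One update of a leaky integrate-and-fire neuron: the membrane
    potential v decays toward rest, integrates its input, and fires
    a spike (then resets) when it crosses the threshold."""
    v = v + dt * (-leak * v + input_current)
    if v >= threshold:
        return 0.0, True   # spike fired, potential reset
    return v, False

# Drive one simulated cell with constant input and count its spikes.
v, spike_count = 0.0, 0
for t in range(50):
    v, fired = step_neuron(v, input_current=0.15)
    spike_count += fired
print(spike_count, "spikes in 50 steps")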

Shandepared wrote...

You consider an exact simulation to be the real thing; I don't.

You don't understand my use of the word simulation, so I'll explain it to you, and hope you comprehend.

A simulated brain is obviously not an organic brain ("real thing"). However, that does not deny the simulated brain the capability of possessing real intelligence/sapience/sentience, or in other words, identical behavior to that of the "real thing."

Nightwriter wrote...

Jokes are a gateway example. If we discover the computer cannot understand jokes, we will begin to think of other things it may not understand that might prove it's a computer.

As it continues to fail, we realize it's a computer, thereby invalidating every valid answer it gave before and revealing it never knew what it was talking about even when it looked like it did.

Once you see it doesn't understand jokes, you realize it's a computer, and then realize it doesn't understand anything.

With this logic, men who don't understand women don't understand anything.

Edited by Inverness Moon, 28 July 2010 - 07:25.


#417
Guest_wiggles_*
  • Guests

Nightwriter wrote...

Everyone should assume it. Solipsism is lame.


Agreed. But it ain't my fault that Searle's inconsistent.

A human being can tell you they understand it. They can say they understand calculus and then perform calculus to prove it. They can talk about calculus and answer infinite questions about calculus.


Given Searle's standard, without begging the question, how do we know the human truly understands anything? In the calculus example the human isn't doing anything the A.I. is incapable of...

They can tell you how they feel about calculus


...except this. But unless the A.I. purports to be able to do this, it's an irrelevant point. If it does purport to know how it feels about calculus, then you deal with that by testing it. What we're testing here is understanding knowledge, not understanding feelings.

Jokes are a gateway example. If we discover the computer cannot understand jokes, we will begin to think of other things it may not understand that might prove it's a computer.


Your argument presupposes the A.I. is purporting to be able to tell a joke. For example, say the A.I. lays it on the table for you: I can't understand jokes & don't feel emotions, but I can tell you about the Vietnam War. How, from this example, could we tell whether the A.I. truly understands what it's saying?

Its ability to resemble a human convincingly is what supposedly proves it understands what you're talking about.


I should've said this earlier: I consider humans & people two different things. A human is a species, whereas a person is a class. My definition of a person is something that is self-conscious & capable of intelligent thought. I'm arguing that A.I., if showing the proper signs, should be integrated into the class of people. Searle says they shouldn't be, because we don't know if they're understanding anything. I disagree, for reasons we already know.

... Computers don't have opinions?

They can simulate opinion but it's never the same.


How do the Geth not have opinions? Or, how have they merely simulated having an opinion? I really don't understand how my development of an opinion is significantly different from that of the Geth. We both look at the evidence & come to our conclusion.

Ask a computer: what was your first crush? How did you feel about it? What do you think happens after death? Draw me a picture of what hope looks like to you and email it to me.


1) Unless it purports to feel love, asking it about its first crush is irrelevant.
2) I don't know what happens after death, & I don't even have an opinion on the subject.
3) I can't draw hope.

Edited by wiggles89, 28 July 2010 - 08:01.


#418
Arijharn
  • Members
  • 2 850 messages
If I'm understanding Shand correctly, I think he has a slight point. By simulating a mind, whatever its definition, it is basically deceiving you (lying). It's not the same thing as actually having a mind.

If I cheat on a maths test and get a perfect score, then I haven't actually earned a perfect score myself, so by rights my result is invalid... I cannot truthfully claim that my maths score is my own; it is the product of someone else's work.

Is that right, Shand?

To be contrary though (it's my right as a sapient being!), I think Kaiser made an excellent post a couple of pages back with his own maths example: what does it matter how something came up with an answer of 25, whether by 5x5 or 5+5+5+5+5, when the end result is still the same? (See the sketch at the end of this post.)

To steer it back to the original OP though: I wouldn't wish for war with the Geth for all sorts of reasons, even after the war with the Reapers, not least being the likely high cost that such a war would entail (and, more to the point, the statistical likelihood of unfavourable outcomes; I find it unlikely, for example, that you could have a total victory).

I find myself not really caring whether the Geth simulates its intelligence or actually has intelligence; call it benign anthropomorphism if you wish, but to me its representation is enough.
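
Kaiser's 5x5 versus 5+5+5+5+5 point can be put in code (my own toy illustration in Python, not from his post): two routines with different internal processes are indistinguishable by their output alone.

def by_multiplication(a, n):
    # Arrive at the answer in a single multiplication.
    return a * n

def by_repeated_addition(a, n):
    # Arrive at the same answer by adding a to itself n times.
    total = 0
    for _ in range(n):
        total += a
    return total

# Different internal processes, identical observable result.
assert by_multiplication(5, 5) == by_repeated_addition(5, 5) == 25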

#419
CroGamer002
  • Members
  • 20 672 messages
Can this thread go necro already? Because the "synthetics are evil" discussion is stupid.

#420
Kroesis-
  • Members
  • 451 messages

Shandepared wrote...
You can transmit that software anywhere, though; you can't transmit a human mind.


An organic limitation (at least for those from Earth). Why do you keep insisting on imposing Organic limitations on the Technological?

Inverness Moon wrote...

Shandepared wrote...
Can it? You can program a computer to display anger as well, but does it actually feel the emotion of anger?

A computer does not exist in the same way that an organic would. The obvious answer is no. And with that I'll add that the idea that you're superior because you perceive electrical signals differently than a computer does is prejudiced, at the very least.

The question is, can he prove that he can actually feel the emotion of anger, or any other emotion? Can anyone prove what they're feeling? If it can be simulated down to the last detail, how can you tell the difference? At what point does it stop being just a simulation and start contributing, along with other processes, to form something intangible?

Mesina2 wrote...

Can this thread go necro already? Because the "synthetics are evil" discussion is stupid.


Well, I think the discussion has (for the most part) progressed beyond synthetic = evil and on to the philosophy of a being's existence depending on its origin, but it's gotten to that point where nothing is going to be accepted and nothing is going to change a person's mind.

#421
wulf3n
  • Members
  • 1 339 messages

Arijharn wrote...

If I'm understanding Shand correctly, I think he has a slight point. By simulating a mind, whatever its definition, it is basically deceiving you (lying). It's not the same thing as actually having a mind.

If I cheat on a maths test and get a perfect score, then I haven't actually earned a perfect score myself, so by rights my result is invalid... I cannot truthfully claim that my maths score is my own; it is the product of someone else's work.

Is that right, Shand?

To be contrary though (it's my right as a sapient being!), I think Kaiser made an excellent post a couple of pages back with his own maths example: what does it matter how something came up with an answer of 25, whether by 5x5 or 5+5+5+5+5, when the end result is still the same?

To steer it back to the original OP though: I wouldn't wish for war with the Geth for all sorts of reasons, even after the war with the Reapers, not least being the likely high cost that such a war would entail (and, more to the point, the statistical likelihood of unfavourable outcomes; I find it unlikely, for example, that you could have a total victory).

I find myself not really caring whether the Geth simulates its intelligence or actually has intelligence; call it benign anthropomorphism if you wish, but to me its representation is enough.


Then the question remains: how do you tell if an AI is an imitation, or if it actually has thought?

#422
Arijharn
  • Members
  • 2 850 messages

wulf3n wrote...

Arijharn wrote...

If I'm understanding Shand correctly, I think he has a slight point. By simulating a mind, whatever its definition, it is basically deceiving you (lying). It's not the same thing as actually having a mind.

If I cheat on a maths test and get a perfect score, then I haven't actually earned a perfect score myself, so by rights my result is invalid... I cannot truthfully claim that my maths score is my own; it is the product of someone else's work.

Is that right, Shand?

To be contrary though (it's my right as a sapient being!), I think Kaiser made an excellent post a couple of pages back with his own maths example: what does it matter how something came up with an answer of 25, whether by 5x5 or 5+5+5+5+5, when the end result is still the same?

To steer it back to the original OP though: I wouldn't wish for war with the Geth for all sorts of reasons, even after the war with the Reapers, not least being the likely high cost that such a war would entail (and, more to the point, the statistical likelihood of unfavourable outcomes; I find it unlikely, for example, that you could have a total victory).

I find myself not really caring whether the Geth simulates its intelligence or actually has intelligence; call it benign anthropomorphism if you wish, but to me its representation is enough.


Then the question remains: how do you tell if an AI is an imitation, or if it actually has thought?


You can't; in the end I personally think it doesn't matter. Obviously Shand disagrees (largely because of the principle of it 'lying', if I have read his responses correctly).

To argue the semantics though, it's an imitation because it's Artificial!

Can you tell if I actually think, or do you just assume that I do (wait, don't answer that!)

#423
wulf3n
  • Members
  • 1 339 messages

Arijharn wrote...
You can't; in the end I personally think it doesn't matter. Obviously Shand disagrees (largely because of the principle of it 'lying', if I have read his responses correctly).

To argue the semantics though, it's an imitation because it's Artificial!

Can you tell if I actually think, or do you just assume that I do (wait, don't answer that!)


It does raise some interesting questions about thought and intelligence.

#424
Inverness Moon
  • Members
  • 1 721 messages

Arijharn wrote...

If I'm understanding Shand correctly, I think he has a slight point. By simulating a mind, whatever its definition, it is basically deceiving you (lying). It's not the same thing as actually having a mind.

You might understand Shand, but Shand doesn't understand me, so you're off track.

What is being simulated is the structure of the brain (the hardware).

Perhaps another analogy would be appropriate here: console emulators. They simulate the hardware and software of a game console. They're not actual game consoles ("the real thing"), but the input and output is the same.

Edit: Here is something:

"An emulator in computer sciences duplicates (provides an emulation of) the functions of one system using a different system, so that the second system behaves like (and appears to be) the first system. This focus on exact reproduction of external behavior is in contrast to some other forms of computer simulation, which can concern an abstract model of the system being simulated."

I guess I should use the term emulator from now on.
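
To make the emulator analogy concrete, here is a toy sketch in Python (my own illustration; the two-instruction machine is hypothetical): the function is not the original machine, but it reproduces that machine's input/output behavior exactly.

def emulate(program, x=0):
    """Run a list of ("ADD", n) / ("MUL", n) instructions on x,
    reproducing the original machine's observable behavior on
    entirely different hardware."""
    for op, n in program:
        if op == "ADD":
            x += n
        elif op == "MUL":
            x *= n
        else:
            raise ValueError("unknown instruction: " + op)
    return x

# Same program in, same answer out as the "real" machine would give.
print(emulate([("ADD", 5), ("MUL", 5)]))  # prints 25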

Kroesis- wrote...
The question is, can he prove that he can actually feel the emotion of anger, or any other emotion? Can anyone prove what they're feeling? If it can be simulated down to the last detail, how can you tell the difference? At what point does it stop being just a simulation and start contributing, along with other processes, to form something intangible?

Why does he need to prove it in the first place? In my opinion, it is irrelevant. The geth are software and exist as mathematics. They don't interpret data in the same way as organics.

Edited by Inverness Moon, 28 July 2010 - 01:47.


#425
Guest_wiggles_*
  • Guests
This all reminds me of a discussion I recently had about demarcating science & pseudoscience. After we had no luck, we moved on to demarcating chairs & non-chairs. It didn't end too well. Is this where this lovely discussion re: the mind is heading? If so, that'd be completely awesome.

Edited by wiggles89, 28 July 2010 - 01:41.