
The geth


452 replies to this topic

#376
Kroesis-

Kroesis-
  • Members
  • 451 messages

wiggles89 wrote...
I don't know why so many people in this thread are debating whether geth are "alive" or not. I don't see it benefiting anyone's argument re: the moral stature geth hold.


I suppose it could be said the geth can't hold any moral stature if they're not 'alive'. People were suggesting that they're not, others that they are. Both sides have arguments, with very little in the way of being able to prove it.

#377
Guest_Shandepared_*

Guest_Shandepared_*
  • Guests

Kroesis- wrote...

...software (the mind).


The mind is more than just software.

#378
Kroesis-

Kroesis-
  • Members
  • 451 messages

Shandepared wrote...

Kroesis- wrote...

...software (the mind).


The mind is more than just software.


Any supporting evidence? If so, there are probably hundreds of scholars out there who've been trying to determine what the mind is for a long time; I'm sure they'd like your insight.

Edited by Kroesis-, 27 July 2010 - 01:57.


#379
Guest_Shandepared_*

Guest_Shandepared_*
  • Guests

Kroesis- wrote...

Any supporting evidence?


If I take a spoon and scoop out a piece of your brain, your 'mind' can be permanently damaged.

The point is, the hardware may be intrinsic to the mind. 

#380
Kroesis-

Kroesis-
  • Members
  • 451 messages
There is a flaw in your argument: I'm organic. I exist as both hardware and software. There is no evidence that anything less than catastrophic damage to the brain will affect the mind. There are many people with irreparable brain damage, yet how do we know that their mind isn't working fine for them? Thought processes and ability are impaired, but how do you know what their sense of self is like? You don't; the mind is immeasurable in today's world. Remember, the mind is not thought, but rather a sense of self and awareness.

#381
Guest_Shandepared_*

Guest_Shandepared_*
  • Guests
What does that have to do with anything? I don't believe in souls or in god, so the mind can't exist without the hardware as far as I'm concerned. It is a product of our brains.

#382
Kroesis-

Kroesis-
  • Members
  • 451 messages

Shandepared wrote...

What does that have to do with anything? I don't believe in souls or in god, so the mind can't exist without the hardware as far as I'm concerned. It is a product of our brains.


Again with a view from a limited organic perspective. Who said anything about god or souls? I'm talking about the sense of self, how you know you're alive.

#383
Mr.Caine

Mr.Caine
  • Members
  • 64 messages

Kroesis- wrote...
I don't see how you can make a stronger case. EDI is by design similar (but not the same) because that is how she was built: circuitry is her hardware and software is her mind. Could she be transferred from her body (the circuitry) to another? Quite possibly.

You could build a stronger case, mostly because EDI is made from Sovereign, a Reaper. We (as fans) have no idea how advanced her software/hardware is.

#384
Peer of the Empire

Peer of the Empire
  • Members
  • 2 044 messages
Geth are an abomination and must be enslaved or destroyed.

#385
santaclausemoreau

santaclausemoreau
  • Members
  • 102 messages

Peer of the Empire wrote...

Geth are an abomination and must be enslaved or destroyed.


heh. sounds like a reaper's stance on organics

#386
Rip504

Rip504
  • Members
  • 3 259 messages
Sell Legion to Cerberus and there is no proof of peaceful geth. Why assume otherwise? Why trust one geth, Legion, compared to the multiple events and races stating otherwise?

#387
Guest_wiggles_*

Guest_wiggles_*
  • Guests

Shandepared wrote...

What does that have to do with anything? I don't believe in souls or in god, so the mind can't exist without the hardware as far as I'm concerned. It is a product of our brains.


Yeah, cuz I've never heard of someone who isn't a dualist who thinks that our identity consists solely of what our mind contains, not the brain itself.

#388
SuperMedbh

SuperMedbh
  • Members
  • 918 messages

Inverness Moon wrote...

SuperMedbh wrote...

Was Turing right?

Just throwing that out there, as Joker would say ;)

Right about what?


http://en.wikipedia....iki/Turing_test

The short of it is, there is no test for self-awareness, since experientially we can only affirm our own self-awareness.  Staying away from solipsism (which is, after all, silly), we're left with Turing's proposition that if a computer, or anything else, has responses that are not substantially different from a human's, it can be presumed to be intelligent.

There are a fair number of objections to the Turing test, not the least of which is the near impossibility of applying it objectively.  But as a philosophical point, I think it has merit.

Edited by SuperMedbh, 27 July 2010 - 08:48.


#389
Guest_wiggles_*

Guest_wiggles_*
  • Guests

Kroesis- wrote...

I suppose it could be said the geth can't hold any moral stature if they're not 'alive'. People were suggesting that they're not, others that they are. Both sides have arguments, with very little in the way of being able to prove it.


When we look at humans there are two possibilities re: being alive & sentience & sapience:

1) Human is alive & possesses sentient & sapient capabilities
2) Human is alive but doesn't possess sentient & sapient capabilities

It seems to me at least that 1) holds a greater moral stature than 2). If I were to kill someone from 1) for no reason I consider that an immoral act. However, if I were to kill someone from 2) for no reason I don't consider that an immoral act (depending on where one stands one might consider it a moral act). What's the difference between 1) & 2)? 1) possesses sentient & sapient capabilities. Therefore, being alive isn't what's important, possessing sentient & sapient capabilities is.

Assuming for the sake of argument that geth aren't alive, I find it to be an irrelevant point.

Peer of the Empire wrote...

Geth are an abomination and must be enslaved or destroyed.


I think the same about humans sometimes. It isn't like we've got the best track record.

But, hey, y'know, if we presume that they're an abomination, it doesn't logically follow that, in their current state, they should be enslaved or destroyed. Nice try, though.

#390
Kroesis-

Kroesis-
  • Members
  • 451 messages

wiggles89 wrote...
 Therefore, being alive isn't what's important, possessing sentient & sapient capabilities is.


I know (I've spent a good few pages debating about consciousness, sentience and sapience); however, some people seem to use 'alive' as an umbrella term, or perhaps it's shorter and easier to spell...

#391
chapa3

chapa3
  • Members
  • 520 messages
What I find very bizarre and in need of explanation is how Legion decided to repair itself with Shepard's old armor specifically. When Shepard grills Legion on it, Legion only says "No data available".

#392
Guest_wiggles_*

Guest_wiggles_*
  • Guests

Kroesis- wrote...
I know (I've spent a good few pages debating about consciousness, sentience and sapience); however, some people seem to use 'alive' as an umbrella term, or perhaps it's shorter and easier to spell...


Oh no, I know. I just had this awesome idea that my opinion might be able to make some sort of difference. Then I pressed the submit button & remembered Shand is a part of this thread.

#393
Kroesis-

Kroesis-
  • Members
  • 451 messages

wiggles89 wrote...
Oh no, I know. I just had this awesome idea that my opinion might be able to make some sort of difference. Then I pressed the submit button & remembered Shand is a part of this thread.


Lol

#394
Spornicus

Spornicus
  • Members
  • 512 messages
All of you keep in mind that there is no such thing as an individual geth (with the exception of Legion). Each platform isn't an individual but a host for a certain number of programs, and all geth are tied to essentially one consciousness, i.e. all geth reaching consensus on a question. Then what gets rights? Platforms? Programs? The entire neural network?

#395
Nightwriter

Nightwriter
  • Members
  • 9 800 messages

wiggles89 wrote...

Nightwriter wrote...

I found the Chinese room experiment very interesting.


It's an interesting thought experiment, but I don't find it at all persuasive, mainly since the A.I.'s subjective experiences -- or their supposed subjective experiences -- are being held to a higher standard than we hold each other's subjective experiences.


?

Explain.

#396
scotchtape622

scotchtape622
  • Members
  • 266 messages

Spornicus wrote...

All of you keep in mind that there is no such thing as an individual geth (with the exception of Legion). Each platform isn't an individual but a host for a certain number of programs, and all geth are tied to essentially one consciousness, i.e. all geth reaching consensus on a question. Then what gets rights? Platforms? Programs? The entire neural network?

Exactly. Each individual program is no more than a VI (and probably less). It is only when combined that they show any sign of sentience. This is unlike humanity, where if you remove parts of our brain, our functionality greatly decreases, and we probably die.

#397
Nightwriter

Nightwriter
  • Members
  • 9 800 messages

Shandepared wrote...

Kroesis- wrote...

Any supporting evidence?


If I take a spoon and scoop out a piece of your brain, your 'mind' can be permanently damaged.

The point is, the hardware may be intrinsic to the mind. 


Well that's not a very good example.

If you destroyed a geth server hub, or took a great chunk out of the hub, that geth's software would be permanently damaged.

The hardware which physically houses the geth programs is their physical "brain". Their software has to have residence somewhere. The only difference is it's possible for them to have multiple residences for their "minds", whereas we only have one.

But if you want to think you're superior because you only have one, hey, go ahead.

#398
Guest_wiggles_*

Guest_wiggles_*
  • Guests

Nightwriter wrote...

?

Explain.


Searle argues that just because an A.I. can simulate an intelligent conversation, that doesn't mean it truly understands the conversation. I agree that if something simulates knowing something, it truly doesn't know it. However, we lack the clarity to know in many situations whether something is just a simulation. The other day I had a conversation with a friend about who the best MMA fighter of all time is. He thinks Georges St-Pierre, I think Fedor Emelianenko. Reflecting on the conversation, can I truly tell whether my friend understands the nature of the discussion, or is he just simulating knowing what he's talking about? From all of the evidence available to me, I can't think of anything that gives me any reason to believe he understood anything in the conversation apart from his behaviour.

If I conducted the same exact conversation with an A.I., the only way I could know whether it understands what I'm saying is if it behaves like it does. However, according to Searle, this isn't enough of an indication. If I apply the same standards I apply to the A.I. to everyone I engage in intelligent conversation with, how can I know they're understanding anything I'm saying?

#399
Guest_wiggles_*

Guest_wiggles_*
  • Guests

Spornicus wrote...

All of you keep in mind that there is no such thing as an individual geth (with the exception of Legion). Each platform isn't an individual but a host for a certain number of programs, and all geth are tied to essentially one consciousness, i.e. all geth reaching consensus on a question. Then what gets rights? Platforms? Programs? The entire neural network?


It's pretty simple. Is it self-conscious & capable of sentience &/or sapience? If it is, then you've got something that possesses a certain moral stature, something you might call a person.

Edited by wiggles89, 28 July 2010 - 12:41.


#400
Nightwriter

Nightwriter
  • Members
  • 9 800 messages

wiggles89 wrote...

Searle argues that just because an A.I. can simulate an intelligent conversation, that doesn't mean it truly understands the conversation. I agree that if something simulates knowing something, it truly doesn't know it. However, we lack the clarity to know in many situations whether something is just a simulation. The other day I had a conversation with a friend about who the best MMA fighter of all time is. He thinks Georges St-Pierre, I think Fedor Emelianenko. Reflecting on the conversation, can I truly tell whether my friend understands the nature of the discussion, or is he just simulating knowing what he's talking about? From all of the evidence available to me, I can't think of anything that gives me any reason to believe he understood anything in the conversation apart from his behaviour.

If I conducted the same exact conversation with an A.I., the only way I could know whether it understands what I'm saying is if it behaves like it does. However, according to Searle, this isn't enough of an indication. If I apply the same standards I apply to the A.I. to everyone I engage in intelligent conversation with, how can I know they're understanding anything I'm saying?


The argument to which Searle is making a counterpoint consists of an experiment in which a person is fooled into thinking they are talking to a person when they are really talking to a computer, correct?

But the experiment relies upon them being blind. They don't know they might be talking to a computer.

If they did, they would go back and grill the computer, subjecting it to a more rigorous test, until its true limitations became apparent. I believe they would eventually realize they're talking to a computer.

If they went back and grilled your friend in the same way, he would pass this test, because he is human.