
The Question of Synthetic Intelligence - with Rational Dentists, Ghosts in the Machine, Philosophical Zombies and Alan Turing chatting with computers


67 replies to this topic

#26
Guest_Aotearas_*
  • Guests
People can't even define life or even existence. Hell, they can't even prove existence and by extension life, so how are we supposed to argue what's not life to start with?

First things first.

#27
Seival
  • Members
  • 5,294 posts

Neofelis Nebulosa wrote...

People can't even define life or even existence. Hell, they can't even prove existence and by extension life, so how are we supposed to argue what's not life to start with?

First things first.


This is all about attitude. No calculations or definitions are required to form a personal attitude towards something or someone. Personally, I don't care what a person is made of: plastic/wire/steel or flesh/blood/bones. Personality is not the hardware itself; it's a way of thinking provided by that hardware, no matter whether the hardware is organic or synthetic.

#28
eroeru
  • Members
  • 3,269 posts
Seival - that's actually a pretty swell point, at least from what I make of it. The question can also be taken as an ethical one, or as one about some other kind of attitude you're centering on.

I might well think there's no basis for ascribing consciousness ontologically, yet one need not care about ontology. One might as well hold a pragmatist yet Kantian, ethics-centered view: one should simply act towards synthetics as one does towards similarly clever living beings.

edit: Sorry for the bad grammar. Typing on a phone is a pain in this regard.

Edited by eroeru, 23 December 2013 - 07:06.


#29
Kaiser Arian XVII
  • Members
  • 17,283 posts

eroeru wrote...

Seival - that's actually a pretty swell point, at least from what I make of it. The question can also be taken as an ethical one, or as one about some other kind of attitude you're centering on.

I might well think there's no basis for ascribing consciousness ontologically, yet one need not care about ontology. One might as well hold a pragmatist yet Kantian, ethics-centered view: one should simply act towards synthetics as one does towards similarly clever living beings.


Well, my Kantian pragmatism allows me to replace my bones with metal and probably my heart with an artificial one, but it doesn't allow me to replace my mind with a super X-9999 robot mind.

#30
eroeru
  • Members
  • 3,269 posts
I agree that you can't do that. But the arguments are ontological.

He's talking about attitudes, with the goal of getting people to behave more rightly when interacting with artificial though similarly-acting "beings".

Part of the reasoning can be, for example, that such a robot is ontologically a hazy thing.

Nothing was said about human consciousness or about ascribing its properties to a robot's schematics.

Edited by eroeru, 23 December 2013 - 07:05.


#31
Gravisanimi
  • Members
  • 10,081 posts

Neofelis Nebulosa wrote...

People can't even define life or even existence. Hell, they can't even prove existence and by extension life, so how are we supposed to argue what's not life to start with?

First things first.


100% true.

I was basing my entire argument on the assumption that we do. We don't, but let's grant it for the sake of the argument.

#32
JasonShepard
  • Members
  • 1,466 posts

Cyonan wrote...

The thing is that I didn't program it to make decisions about its own programming, I merely gave it the ability to do so.


Then how will this program know when to change its code?

You may have given it the ability to change its code, but if you don't tell the program when to use that ability, it never will.
Meaning that even when it changes its code, it is still following your code.

Let's focus on the very first change that this program ever makes to its own code. To keep with your example, let's go with having a robot conclude that walking in front of cars is a bad idea.

1) Following your initial code, the robot walks in front of a car and gets hit.

2) For the robot to register any damage, and decide that what just happened to it was bad, you must have programmed it to notice damage and view damage as 'bad'.

3) It would then run some routines to work out why it walked in front of a car, which would also have to be part of your initial code.

4) The routines - in your initial code - lead the robot to draw the conclusion that it walked in front of the car because of... a specific part of your initial code that told it to walk in front of cars. So far, the robot hasn't made a single decision of its own free will, because it's been following your initial code the entire time.

At this point, it will only make the decision to change its own code IF AND ONLY IF its own code tells it to do so. I'm going to assume that you have coded it to decide to change its own code in these circumstances; otherwise it'll just carry on running the initial code.

5) The robot decides to change its initial code. It already knows what it's going to change - that silly bit about walking in front of cars. However, it has to decide how to rewrite it. Now, it doesn't really matter whether it decides to rewrite it as "Don't walk in front of cars" or whether it just deletes the silly code. But it does matter that this decision - how the code should be edited - is still controlled by your initial code. The robot still hasn't made a decision of its own free will.

6) The robot makes the edit, and changes its initial code. However, since the edit was entirely controlled by what the initial code was, the new 'modified code' is still a direct result of your initial code. The robot is still not making any decisions for itself - all of this was pre-determined by your initial code, even though the robot is now running a modified version of that initial code. The modified version of your code was itself predetermined by your initial code (and the car crash).

Within the context of our Turian room:
If we have the instructions sometimes direct Shepard to rewrite the instructions, it doesn't change the fact that Shepard is still only doing maths. (Shepard doesn't choose how the instructions get re-written, by the way - the instructions would give directions about that based on circumstances. We're purely using the Commander to do the maths and hold the pen and paper.) It does turn the maths into a non-linear system - which makes the maths harder and can become chaotic - but everything is still ultimately pre-determined.
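The six steps above can be sketched as a toy program (all names hypothetical, a minimal illustration rather than any real robot architecture): the rule that rewrites other rules is itself part of the initial code, so the final rule set is fully determined by the starting rules plus the events fed in.

```python
# Toy sketch of a deterministic self-modifying rule set (hypothetical names).
# The "decision" to rewrite a rule is itself triggered and specified by the
# initial rules, so the end state is determined by initial code + environment.

def run(rules, events):
    for event in events:
        # Step 1: following its current rules, the robot walks into traffic.
        damage = (event == "see_car" and rules.get("walk_in_front_of_cars"))
        # Steps 2-4: it registers damage as 'bad' and traces the cause back
        # to the initial rule - only because the initial rules say to do so.
        if damage and rules.get("treat_damage_as_bad"):
            cause = "walk_in_front_of_cars"
            # Steps 5-6: the rewrite itself is just another pre-set rule.
            if rules.get("rewrite_bad_rules"):
                rules[cause] = False
    return rules

initial = {"walk_in_front_of_cars": True,
           "treat_damage_as_bad": True,
           "rewrite_bad_rules": True}
final = run(dict(initial), ["see_car", "see_car"])
print(final["walk_in_front_of_cars"])  # → False, determined by the initial rules
```

Running it with the same initial rules and events always yields the same modified rules, which is the point being argued: self-modification alone does not break determinism.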

Edited by JasonShepard, 24 December 2013 - 02:37.


#33
Jorji Costava
  • Members
  • 2,584 posts
I'm actually entirely unsure how the free will question is supposed to be related to the question of synthetic consciousness. One seemingly has little to do with the other. If you discovered that your entire life, you had actually been under the control of omniscient MIT neuroscientists who manipulated your brain such that every thought and desire was precisely the one they wanted you to have, would you then conclude that you were never conscious? Probably not.

Second, there's a philosophical tradition going at least as far back as Hobbes called compatibilism, according to which freedom and determinism are, well, compatible. One doesn't exclude the other. If that thesis is true, then a machine operating in an entirely deterministic way is no obstacle at all to its being free. A full discussion of compatibilism will obviously take us a bit far afield, so I'll just link to a very helpful discussion of the issue here.

#34
Guest_JujuSamedi_*
  • Guests
OP, first off, I commend you for adding Alan Turing to your discussion. Alan Turing is practically the father of computer science from a software/computational perspective.

One distinction we need to elaborate on is weak A.I. versus strong A.I. In weak A.I., computer systems emulate a type of organic behaviour in their computational processing. Strong A.I. is based on the idea that a system could think for itself and basically become sentient. As for whether a computer system could become alive, there have been arguments for and against; I would rather stand on the 'for' side because it is believable. Speed is not an issue, as computers keep getting faster, especially if we introduce the concept of a quantum computer. Now, an A.I. system without software is useless, so what about that part? The A.I. software will most likely be based on transferring a human consciousness to a machine, emulating a brain via weak A.I., or employing a system that learns from organics, like Cleverbot. It will take a while to reach that point though, because at the moment we basically have no data. The funny thing is that this A.I. will probably be connected to a server as a web service - an A.I. downloadable to multiple bodies.
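The weak-A.I. "learns from organics" idea can be sketched as a trivial retrieval responder (an entirely hypothetical toy, not Cleverbot's actual design): it only stores and replays human replies, emulating conversation with no understanding of its own.

```python
# Minimal sketch of a weak-A.I., Cleverbot-style responder (hypothetical
# design): it memorises human replies to prompts and replays them, so any
# apparent intelligence comes entirely from the stored human data.

class EchoLearner:
    def __init__(self):
        self.memory = {}  # prompt -> learned human reply

    def learn(self, prompt, human_reply):
        self.memory[prompt.lower()] = human_reply

    def respond(self, prompt):
        # Pure lookup, no reasoning: this is emulation, not thought.
        return self.memory.get(prompt.lower(), "I don't know that one yet.")

bot = EchoLearner()
bot.learn("Hello", "Hi there!")
print(bot.respond("hello"))  # → Hi there!
print(bot.respond("Why?"))   # → I don't know that one yet.
```

The strong-A.I. claim is precisely that something more than this kind of lookup-and-replay is going on, which is what the thread is debating.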

#35
AtreiyaN7
  • Members
  • 8,394 posts
My God - a thread about this topic that's actually intelligent and interesting!

I watched an episode of Through the Wormhole with Morgan Freeman a few months ago about robot learning/AI research that showed a robot actually displaying associative learning via the senses its creator gave it - sight, sound, and touch. I will link the clip here: .

As you can see, the (adorable) robot learns to associate different colors as being either good or bad depending on the sensory input it receives. While it is a long, long way from being a sentient being that is our equal (or our superior), it does show that it's possible for a robot/AI to experience the world and learn in a manner similar to our own.

Give a hypothetical AI senses like our own and programming that simulates the functions of the brain - like the equivalent of the frontal lobe (responsible for impulse control, judgment, etc.) and other simulated parts of the brain - and maybe you have a shot at creating something that has a mind and is an independent, free-willed being.

I don't think that programming in basic rules in an AI means that it cannot be independent and lacks free will, because there's a certain amount of basic programming/instinctive behavior in animals and humans (like a newborn foal who can stand up soon after birth and is able to walk on its own without any real help or instruction).
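The kind of associative learning described above can be sketched roughly (a hypothetical toy, not the robot from the show): the agent starts with no opinion about any colour and forms a good/bad association purely from the reward signals it receives through its "senses".

```python
# Toy associative learner (hypothetical): colours start neutral, and a
# good/bad association is built up from sensory feedback alone.

class AssociativeLearner:
    def __init__(self):
        self.scores = {}  # colour -> running reward tally

    def experience(self, colour, reward):
        # reward > 0 for pleasant sensory input, < 0 for unpleasant input
        self.scores[colour] = self.scores.get(colour, 0) + reward

    def feeling(self, colour):
        score = self.scores.get(colour, 0)
        return "good" if score > 0 else "bad" if score < 0 else "unknown"

robot = AssociativeLearner()
robot.experience("green", +1)  # e.g. gentle touch while seeing green
robot.experience("red", -1)    # e.g. loud noise while seeing red
print(robot.feeling("green"))  # → good
print(robot.feeling("red"))    # → bad
print(robot.feeling("blue"))   # → unknown
```

It is a long way from sentience, but it does show how "experiencing the world and learning from it" can be mechanised in the manner the post describes.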

Edited by AtreiyaN7, 24 December 2013 - 06:43.


#36
eroeru
  • Members
  • 3,269 posts
Yeah, but should it use analogous organic compounds and chemical processes to "imitate" us accurately enough?

I do think so.

#37
Obadiah
  • Members
  • 5,731 posts
But what does "imitate us" even mean? We are all so different, any simulation would result in some sort of new entity that would be different, but alive.

#38
eroeru
  • Members
  • 3,269 posts
A clone isn't a simulation, it's a lifeform.

#39
Kaiser Arian XVII
  • Members
  • 17,283 posts

eroeru wrote...

A clone isn't a simulation, it's a lifeform.


^

A simulation isn't a clone, it's a figurative imitation (software).

#40
Cainhurst Crow
  • Members
  • 11,374 posts

Seival wrote...

Put a newborn child into a completely dark room with the minimum things required to keep him alive (without any actions required from the child himself), and no contact with the outside world. Release him 20 years later, and observe... He can't speak, can't understand what he is looking at, doesn't know how to react to the environment and people around him. He is like a machine with no program and an almost empty data storage...

...A human is nothing more than a machine built of organic materials. During the life cycle he/she gets all the required programming and memories from the people surrounding him/her and the events he/she took part in. Society is a programmer. Events are the code. A person becomes a person not by him/herself, but by gaining all the required algorithms from outside.

Now tell me: what is the big difference if the person was built of plastic, steel and wire instead of organic materials? How is it different in terms of intelligence and physical capabilities, except that the person thinks much faster and is much stronger physically? Does it stop being an intelligent person because it requires electricity instead of organic food to keep functioning?


Honestly, the person in question, if you could even call it that, would react more like a rabid, instinct-driven animal that seeks dark, enclosed environments, would basically be blind in sunlight (or burn like a plant in it), and wouldn't be able to handle any noise levels without probably expressing its pain in loud, incomprehensible shrieks. Basically, not an unreactive machine, but a highly reactive, probably highly volatile, scared animal in pain.

#41
Sigma Tauri
  • Members
  • 2,675 posts

Darth Brotarian wrote...

Honestly the person in question, if you could even call it that


Yes, you can call a child who suffered that amount of neglect a person.

#42
Ninja Stan
  • Members
  • 5,238 posts
Ghost in the Shell the movie and the Stand Alone Complex series discuss the nature of sentience and what determines whether something is alive, an individual, or has a soul.

#43
Cainhurst Crow
  • Members
  • 11,374 posts

monkeycamoran wrote...

Darth Brotarian wrote...

Honestly the person in question, if you could even call it that


Yes, you can call a child who suffered that amount of neglect a person.


Would it though? Such a being, deprived of essential stimuli, would basically be turned into a highly photosensitive organism without the ability to comprehend speech or understand physical contact, possibly unable to move if the conditions truly only kept it barely alive (being strapped to a bed with an IV would constitute such conditions), and would most likely suffer such severe brain deterioration and malnutrition as to be permanently damaged. I hesitate to even call such an unnatural and suffering-filled existence living, much less call the creature in question human anymore.

And if I were allowed to express my opinion further, I would be willing to stipulate that if it were locked away for as long as Seival suggested, euthanasia would be a much more merciful action than prolonged survival and the prospect of essentially having to keep it in its current state, or risking killing it through pure bodily shock in attempts to "rehabilitate" it.

Edited by Darth Brotarian, 27 December 2013 - 09:25.


#44
mybudgee
  • Members
  • 23,037 posts


(WARNING: DISTURBING)

Edited by mybudgee, 27 December 2013 - 09:28.


#45
Maria Caliban
  • Members
  • 26,094 posts
It's important to point out here that people who have emotional and mental disabilities are typically still far more intelligent than animals. An adult chimpanzee has about the same level of abstract thinking as a three-year-old. There are only a handful of animals that show self awareness.

I'm comfortable saying that infants aren't people because I see psychological complexity as an indicator of personhood, not spiritual value, and because, as a human, I'm willing to give human life inherent worth.

Seival wrote...

Put a newborn child into a completely dark room with the minimum things required to keep him alive (without any actions required from the child himself), and no contact with the outside world. Release him 20 years later, and observe... He can't speak, can't understand what he is looking at, doesn't know how to react to the environment and people around him. He is like a machine with no program and an almost empty data storage...


Now you're just making stuff up. We have cases where children were deprived of freedom and human interaction for their formative years, and they did not turn into machines. They acted like wild animals - humans do have instincts, after all.

Edited by Maria Caliban, 27 December 2013 - 12:34.


#46
Maria Caliban
  • Members
  • 26,094 posts

JasonShepard wrote...

Are Synthetics 'alive'?

No. Life is a specific group of functions, including growth and reproduction.

You could have living synthetic beings though. Battlestar Galactica's Cylons had models that were constructed organics.

Are they sentient?

Earth worms are sentient. If you're talking about the Geth in ME 3, they show basic awareness and response, so yes.

Are they the same as Organics - just made of different materials?

Being connected to a hive mind would suggest they are not the same.

Or are they merely an imitation of life?

They engage in many activities that living creatures do. This doesn't make them an imitation.

My cat and I both eat, but it's not because I'm merely an imitation of a cat. It means we have some things in common. If synthetic beings existed, I'd probably have things in common with them.

For example, a synthetic being might want to move from one place to another, so it might have legs. A synthetic being might want to perceive its surroundings, so it might have optical and audio receptors.

Alternatively, if this synthetic being was specifically made by someone in order to look like a human (EDI's robot) then it is an imitation of life.


Without an actual mind or any free will - just algorithms and code designed to give the appearance of sentience?

There's every indication that the Geth have mental functions, though an individual platform is psychologically unsophisticated; complexity only comes when several platforms are in close proximity.

Gravisanimi wrote...

To answer these questions for synthetics, we must first certify these questions are true for us.

Not really.

If I want to talk about the ethics of murder, I don't need to first prove that people exist. If I want to talk about the nature of the mind, I don't need to first prove that there are minds.

There's nothing wrong with making a set of assumptions - people exist, minds are real - in order to focus on the topic, which is whether ME 3 synthetics can be said to have intelligence the same way humans do.

Because if you don't then every philosophical discussion collapses into 'But what if we are all in the Matrix?!?!?!111'

Edited by Maria Caliban, 27 December 2013 - 11:58.


#47
mybudgee
  • Members
  • 23,037 posts
@Maria: I believe you meant you see 'psychological complexity as an indicator of personhood', not the other way around.

#48
Maria Caliban
  • Members
  • 26,094 posts
:o

Fixed. And thank you.

#49
mybudgee
  • Members
  • 23,037 posts
Anytime

:)

#50
Seival
  • Members
  • 5,294 posts

monkeycamoran wrote...

Darth Brotarian wrote...

Honestly the person in question, if you could even call it that


Yes, you can call a child who suffered that amount of neglect a person.


Personality is a program, no matter what type of hardware it runs on. It can be simple or complicated. It can be corrupted or intact... but it is still a personality. And how exactly to treat a personality - that, everyone decides for themselves.