
The Question of Synthetic Intelligence - with Rational Dentists, Ghosts in the Machine, Philosophical Zombies and Alan Turing chatting with computers


67 replies to this topic

#1
JasonShepard
  • Members
  • 1 466 messages

Spoilers for the entire Mass Effect Trilogy within this thread.

If you're wondering about why this thread isn't in the ME3 section, or if you saw the first version of this thread and are wondering about the disappearance/reappearance of it, see here.


Are Synthetics 'alive'? Are they sentient? Are they the same as Organics - just made of different materials?
Or are they merely an imitation of life? Without an actual mind or any free will - just algorithms and code designed to give the appearance of sentience?

In short - "Does this unit have a soul?"
 

Legionpc1mitsu.png

"...No data available."

This issue is one of the central questions in the Mass Effect Universe. It fuelled the Geth-Quarian war. Shepard can, on various occasions, express a belief in either direction. And, given the conversation with the Catalyst, and the various ending choices, the question even has an influence on the ending of ME3. After all, if your answer to Legion's question is "No," then there's really no downside to picking Destroy.


Defining a Mind

Firstly, we need to clarify exactly what we mean by the above question. What is a soul? What is a mind? What difference does it make if synthetics have one?

Now... I'm not particularly religious. While I won't rule out the possibility that something like an eternal soul exists, I don't actively believe in one, and it's not what I'm here to talk about. I'm not asking whether or not Legion is now doing the robot dance in heaven.

In this post I'm treating souls and minds as the same thing. So what do I mean by a mind?

By a mind, I mean the 'thing' that watches the world from behind your eyes, listens to the world with your ears, and experiences sensations of touch from your nerve endings.
This is the 'consciousness' that exists in your brain, and EDI doesn't have one because, let's face it, she's a fictional character and every single one of her responses has been pre-programmed by Bioware. (Yeah, right now I'm sitting on the real-world side of the fourth wall.)
 

EDI_-_dating_sim_shot.png

I'm sorry, EDI...

The problem is that I immediately run into a logical wall with this definition of a mind.
I know that I have a mind - or, more specifically, I know that I am a mind that has a body. However, I have no way of knowing that the same is true of other people. From my perspective, everyone else could just be bodies with pre-programmed responses, and I've yet to ask or do something that doesn't have a pre-programmed response.
In other words, I might be surrounded by philosophical zombies - people who appear to be real, but don't actually have a mind.

I'm not actually calling you a zombie here - I'm just saying that I have no way of knowing whether or not you are one. Equally, you have no way of knowing whether I'm real or a zombie - and it doesn't help that your only connection to me is via words on the internet.

This leads us to the next section - which is arguably an example of the worst dentist appointment you could ever have.


The Rational Dentist

Stephen Law wrote a book called "The Philosophy Gym". He has made one of the chapters - "The Strange Case of The Rational Dentist" - available online here. I highly recommend following the link and giving it a read (partly because Stephen Law is a university lecturer in philosophy and I'm not), but I'll summarise the main points:

The dentist is a character who believes that he is the only being in existence to possess a mind.
During a session with a patient (who cannot speak because his mouth is full of cotton buds and numb with pain-killer) he explains his reasoning to the patient for not believing that they are actually in possession of a mind. This is all while sticking a drill inside their mouth. He doesn't believe his patient can actually experience pain, but he administers the pain-killer anyway because it's an observed fact that sticking a drill inside someone's mouth usually causes them to scream and flinch if they don't have any pain-killer injected first.

He is actually somewhat rational to hold this belief. Kinda. (But believe me, I'm grateful that we had either Chakwas or Michel on the Normandy, and not this dentist...)

The analogy that the dentist uses is that of cutting open cherries and finding stones inside. If you cut open 1000 natural cherries and find 1000 stones, you can make some cherryade. You can also comfortably conclude that all natural cherries contain stones.

However, if you only cut open 1 cherry, and find 1 stone, it's not logical to assume that every cherry contains a stone. Equally, you may be a person with a mind, but it's not logical to extrapolate from 1 person - yourself - and conclude that every person has a mind.

Based on this, the Rational Dentist realises that he has no evidence for other people having minds, and concludes that they don't.


Bringing it back to Synthetics
 

Legion%27s_posse.png

"We do not require dentists."

You probably think I've wandered a bit off-topic here. I started off by asking whether or not Synthetics, in the Mass Effect Universe, have minds. And yet I seem to have concluded that people, in the real universe, don't have minds. Except for me.

Is that a conclusion? Synthetics don't have minds - but neither does anyone else?
Well, I suppose it is a conclusion, but it's not a very satisfying one. I don't know about you, but I'm not very comfortable living in a world populated by philosophical zombies. I prefer my zombies to stay in video games.

It won't surprise you to know that I don't actually agree with the Rational Dentist. There are a few good arguments against the Dentist and, since they relate to being able to tell whether or not something has a mind, they'll be useful in telling whether or not Synthetics have minds.

So does Legion have a soul? Let's find out.


Argument No.1 - Redefining the Mind

Perhaps our original definition of a mind was wrong. It certainly wasn't very useful. It enables you to say that you have a mind, but it doesn't let you say much else.

You can imagine a body without a mind. This would be the philosophical zombie that we were discussing earlier. This would also be every Synthetic in the Mass Effect Universe, if the answer to Legion's question is 'No'.

But can you imagine a mind without a body? A mind without any physical presence at all? So... a ghost?
I don't believe ghosts exist. If you do, then I'm going to ask you to come back with some reproducible hard evidence before you can convince me.
 

ME3+Catalyst+and+Shepard.jpg

Oh great. Someone call the ghostbusters.

But if ghosts don't exist - if a mind cannot exist without a body - can a body exist without a mind?
     Oh. Right. Yeah. Dead bodies.
Fine... Can a living body exist without a mind?
     Oh. Right. Yeah. Brain-dead bodies.
Okay... Can a living, talking, breathing, interacting body - a person - exist without a mind?
In other words: Are philosophical zombies actually completely unrealistic?

This argument - which is presented in a different form by Stephen Law if you followed the Rational Dentist link - is the Logical Behaviourist argument. It suggests that you can only define a mind by its influence. Put simply - if someone acts like they have a mind, then they have a mind. Yes, I know that's not a very logical statement, but it gets the point across.

If you define a mind by a person having certain characteristics - e.g. you can hold a conversation with them, they respond to stimuli (like pain), and they appear to have free will - then suddenly the entire human race is back inside the group of people that definitely have minds. Which is a bit of a relief, if you ask me. No more philosophical zombie apocalypse.

However, by this definition Synthetics also definitely have minds.  After all, they have the characteristics, don't they?

There are things I don't like about this argument, though. It pretty much bypasses the question by redefining it. And it leaves me asking "So what is the 'thing' behind my eyes? I experience consciousness - but what is the 'I' that is doing the experiencing? Do Synthetics have that? Do other people have that?"

The argument also doesn't quite kill off the Philosophical Zombie Horde. After all, you still know what I mean when I say "A body without a mind."


Argument No.2 - Some Machines have Ghosts in Them
 

Overlord_Eyes.png

The ghost in this machine is watching you.

Let's stick with our original definition of a mind.

Cut open a cherry. You find a stone inside. Now, what can you say about the pile of uncut cherries sitting in front of you?

You can't say that all of them will contain stones. You don't have a large enough sample size for that.
But isn't it also illogical to conclude that none of them contain stones?

So. Some of those cherries probably contain stones.
(In fact, you can estimate that roughly 43% to 91% of the cherries contain stones, with the most likely value being 67%. If you're really interested, I can write out the maths in a separate post, but it's not really important.)
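For anyone who is interested after all, here's a minimal sketch of where numbers like these can come from - assuming a uniform prior and a Bayesian update, which may not be exactly the method behind the figures above. After finding 1 stone in 1 cherry, the posterior over the stone-fraction is Beta(2, 1), whose mean is 2/3 ≈ 67%:

```python
from math import sqrt

# After finding 1 stone in 1 cherry, assume a uniform prior over the true
# fraction p of cherries that contain stones. The Bayesian posterior is
# then Beta(2, 1), whose CDF is F(p) = p^2, so quantiles are square roots.

def beta21_quantile(q):
    """Quantile of the Beta(2, 1) posterior: solve p^2 = q for p."""
    return sqrt(q)

posterior_mean = 2 / 3             # mean of Beta(2, 1)
lower = beta21_quantile(0.25)      # central 50% credible interval
upper = beta21_quantile(0.75)

print(f"most likely around {posterior_mean:.0%}")              # 67%
print(f"50% credible interval: {lower:.0%} to {upper:.0%}")    # 50% to 87%
```

The 67% headline matches the Beta(2, 1) posterior mean; the exact 43%-91% interval above presumably comes from a slightly different interval convention or prior.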


By the same logic, since you have a mind, you can conclude most other people also have minds, even if you can't assume that all of them do.

Which means that if someone acts like they've got a mind, it's probably best to treat them like they have a mind. You know, just to be on the safe side.

And that brings us back to Synthetics. And Alan Turing coming to the rescue.


The Turing Test

Alan Turing was a genius. I don't think that's up for dispute. He got a first class honours degree in Mathematics at King's College, Cambridge, and was made a fellow of the college at 22; he was one of the major British code-breakers during WW2, for which he was appointed an OBE; and he is considered one of the founders of computing and artificial intelligence. He was also sadly persecuted and chemically castrated for being gay when it was illegal, for which the British government only recently apologised. Seriously, look this guy up, he's had a huge influence on modern history. There's even a conspiracy theory surrounding his death.

However, for our purposes, we're going to focus on one specific thought experiment that he developed. The Turing Test.

Meet ALICE. The Artificial Linguistic Internet Computer Entity.

ALICE is a chatbot. She's not real, she's got a bunch of algorithms that determine her responses. If you followed the link and chatted with her you probably noticed that, while she's got a decent handle on the English language, she comes out with strange responses from time to time. Basically, she's a real world VI. (You can also tell that she's not human because her typing speed is instantaneous.)
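To make "a bunch of algorithms" concrete, here's a toy ELIZA/ALICE-style chatbot in a few lines of Python. The rules and replies are invented for illustration - the real ALICE uses a much larger rule base (AIML) - but the principle is the same: pattern matching plus canned responses, with no understanding anywhere.

```python
import random
import re

# A toy ALICE-style chatbot: nothing but pattern -> canned-response rules.
# Rules and replies here are invented for illustration.
RULES = [
    (r"\bhello\b|\bhi\b", ["Hello there!", "Hi! What would you like to talk about?"]),
    (r"\bi feel (.+)",    ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\byou\b",          ["We were talking about you, not me."]),
]
FALLBACK = ["Interesting. Tell me more.", "I see. Go on."]

def reply(message, rng=random):
    """Match the message against each rule in turn; echo any captured text
    back into a canned template, or fall back to a generic stall."""
    text = message.lower()
    for pattern, responses in RULES:
        m = re.search(pattern, text)
        if m:
            return rng.choice(responses).format(*m.groups())
    return rng.choice(FALLBACK)
```

Say anything outside the rule list and it falls back to a generic stall - which is exactly the "strange responses" behaviour described above.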

However, imagine a perfect chatbot. A computer program so good at making conversation, you genuinely can't tell that it isn't a human sitting at a computer somewhere. A computer program that, through conversation, has all the characteristics of a mind.

That is what it would take for a computer program to pass the Turing Test.

According to Argument 1, that computer program has a mind.
According to Argument 2, it might have a mind, so it's best to treat it as if it does.
Either way, there's a good chance that we've finally found the ghost in the machine.


Implications for the Mass Effect Universe

EDI and EVA; Legion and the rest of the Geth; Sovereign, Harbinger and the Catalyst; even the Presidium AI in ME1 - all of them would pass the Turing Test with flying colours. Sure, it might be difficult to get the Catalyst or the Reapers to submit to testing, but if you did, they'd pass.

Meaning that it's probably best to assume that they are truly sentient.
 

ME3_reapers.jpg

Excuse me?! Could you stop standing on my house for a moment? There's a test I want you to take!

TL;DR: Does this unit have a soul?
We can't know. So if it's acting like it does, maybe we should assume it does.



#2
JasonShepard
  • Members
  • 1 466 messages

The Chinese Room thought experiment is explored in detail here.
Credit goes to Sc2mashimaro for providing the link!


The Chinese Room

At the heart of the Chinese room there is the question of free will, and how it relates to a mind.

Take an AI. An AI is a computer program, and *any computer program can be translated into mathematics.* (After all, computer programs are ultimately just a list of instructions for the computer to follow.)

Let's keep the AI, but remove the computer. Let's do all the calculations by hand, on paper, in a room, sealed off from the rest of the world. And let's say that the AI speaks Chinese.

Actually, no, change that. We're Mass Effect fans, right? Let's say that the AI speaks Turian. And let's put Garrus outside the room, talking to it. Shepard can be inside the room, doing the maths.


The Turian Room


"So I just sit here, huh, chatting to a room? ...Great."

When Garrus says something to the AI, he writes it on a piece of paper and posts it into the room. Shepard - who can't read Turian - retrieves the piece of paper and, following written instructions from the AI 'code', translates the Turian text into a mathematical input for the program.

Shepard then runs the program, by doing lots and lots of maths (let's hope this Shepard is an engineer), and gets a mathematical output.

The Commander then translates the mathematical output into a Turian text output (again, following the written 'code') and posts it back out of the room for Garrus to read the AI's reply.

There's a problem here. Where did the AI 'choose' its response to Garrus' statement?

Within the maths?
Mathematics is entirely deterministic, there's no choice in there. 1 + 2 is always 3. (I'm working in base 10, just in case anyone wants to be smart.)

What about a random number within the calculations - could that represent a choice?
Well, only if you consider - "What number are you thinking of?" *rolls dice* "Ah, five." - to be a choice. I consider it to be a random number.

The conclusion here is that the AI didn't choose a response.
Either its response was partly random, or its response was entirely pre-determined.

This Turian-speaking paper-AI does not have free will. This room may speak Turian, but it doesn't choose what it says.
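That argument can be made concrete. Here's a toy "Turian Room" in Python - replies and formulas invented for illustration - where the entire AI is one deterministic calculation. Encode the note as numbers, do arithmetic, decode the result; run it twice on the same note and you get the same reply, with no step where anything is chosen:

```python
# A toy 'Turian Room': the entire AI is one deterministic calculation.
# Replies and formulas are invented for illustration.

REPLIES = ["Greetings.", "We are functioning normally.", "Query not understood."]

def encode(text):
    """Garrus' note, reduced to pure numbers (here, a simple checksum)."""
    return sum(ord(c) for c in text)

def run_program(n):
    """The AI 'code': nothing but arithmetic on the encoded input."""
    return (n * 31 + 7) % len(REPLIES)

def room(text):
    """Shepard's job: encode, calculate, decode. No choices anywhere."""
    return REPLIES[run_program(encode(text))]

# Same note in, same reply out - every single time.
assert room("How are you?") == room("How are you?")
```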


The Mind of the Room

Can you have a mind without free will?


Indoctrination doesn't count.

If the answer is no... then this AI doesn't have a mind, despite being able to pass the Turing Test in Turian.
If the answer is yes... then that's a little bit terrifying, since it leads us to the question of: "Do we have free will?"

The vast majority of modern physics is deterministic. And that doesn't leave much room for free will.
But, of course, there's a wrinkle. Because, to quote Terry Pratchett, there's always Quantum.

TL;DR:
Any ordinary computer program can't have free will. But then, does a mind need free will? Do we have free will?


Edited by JasonShepard, 21 December 2013 - 05:40.


#3
JasonShepard
  • Members
  • 1 466 messages
And we're back...

Edited by JasonShepard, 21 December 2013 - 05:20.


#4
IllusiveManJr
  • Members
  • 12 265 messages
Mindless machines

#5
Kaiser Arian XVII
  • Members
  • 17 283 messages
tl;dr
Mind is metaphysical for me, separate from the physical parts. An unlimited space to think. It (actually our own selves) has the power to surpass time and space, think about general things, deduct and think about ethics, politics, art, abstract things etc.
Robots are somehow 'Mindless machines' in the sense I described 'mind', and they are surely not sapient or romantic. Satisfied?

Edited by Kaiser Arian, 21 December 2013 - 05:54.


#6
Obadiah
  • Members
  • 5 731 messages
To me, at some point we will be able to completely map the human brain, and will thus be able to recreate all the neurons and connections in some manner artificially, either as a computer simulation or as a mechanical device. When that happens, when our creations think as we do - probably in ways as different as we all do - I don't understand how we would be able to define AI as not alive, but some people stubbornly will.

Edited by Obadiah, 21 December 2013 - 08:12.


#7
JasonShepard
  • Members
  • 1 466 messages

Kaiser Arian wrote...

tl;dr
Mind is metaphysical for me, separate from the physical parts. An unlimited space to think. It (actually our own selves) has the power to surpass time and space, think about general things, deduct and think about ethics, politics, art, abstract things etc.
Robots are somehow 'Mindless machines' in the sense I described 'mind', and they are surely not sapient or romantic. Satisfied?


Fine. That explanation of a mind works for you.

But, speaking as a scientist, I can't accept the existence of something beyond the laws of physics unless I'm given no other alternative. (And the less scientific part of me can only accept the possibility of a metaphysical soul or mind, nothing more. Explanations consistent with my observations of the world around me are preferred.)

Furthermore - if the mind is a metaphysical object, separate from the physical body, why would it be affected by drugs, physical tiredness or brain damage? Those are all purely physical, which suggests that, whatever it is, the mind is also physical.

And finally - even if we do assume that the mind is metaphysical - why do you assume that robots wouldn't also develop a metaphysical component?

#8
Fast Jimmy
  • Members
  • 17 939 messages
Humans clinging to the notion that intelligence can only be organic are like those who postulated that the Earth must have been the center of the universe - it is natural to assume our own species is somehow special, unique, totally remarkable. But data processing is data processing... being able to string those firings into a complex identity, capable of such high concepts as ethics, art or beauty, is merely a matter of the data schema the system is processing.

Before our children die, there will be machines capable of types of thought equivalent to human emotion, passion and expressiveness. They will also be capable of thoughts we can only imagine, processing information in volumes that daunt, at speeds that organic brains could not even compete with. The limitations we face, such as hunger, air, drinking, even death... all of these are trivial to a machine. They will be able to expand and explore beyond the boundaries humans will, unburdened by the baggage of organic demands.

The future will be intelligent machines. Whether purely organic humans will have a place in that future should be the true question the OP is asking.

Edited by Fast Jimmy, 22 December 2013 - 02:46.


#9
Obadiah
  • Members
  • 5 731 messages
I'll ask the last question I had before the other thread got removed. If the Geth are alive, and they are just software, suppose their programs and memories were moved to an inert medium like a disc or even a really high-tech punch card, and their mobile platforms were then destroyed. Are they still alive, even if they're in some kind of stasis?

#10
Cyonan
  • Members
  • 19 356 messages
You know, that Rational Dentist wasn't very rational.

As for the free will thing, what happens if I were to create an AI capable of changing its own programming? Would those changes still be considered as being pre-determined?

#11
Kaiser Arian XVII
  • Members
  • 17 283 messages
@JasonShepard, well, thanks to the back button all the answer I wrote got deleted. The short version:

I consider mind as a super complex operating system. The material ain't important to my field of study. What mind can do as a limitless world of possibilities, images and definitions is important.

Because robots can't do anything outside what they're programmed for, what they have isn't a sapient mind. They're superior to animals in thinking, but inferior or incomparable to them in many subjects.

Fast Jimmy wrote...

The future will be intelligent machines. Whether purely organic humans will have a place in that future should be the true question the OP is asking.


Or we can simply get rid of these machines, because they have already taken many kinds of jobs from humans and have made huge crappy cities full of the jobless, skid rows and low-lifes... before they make human life even crappier or take control.

Or finally a mad professor will make an army of robots or some invincible androids (Dragon Ball style) and destroy the Earth.

Edited by Kaiser Arian, 22 December 2013 - 06:53.


#12
eroeru
  • Members
  • 3 269 messages
Thing is, a mind as consciousness coincides with very certain chemical processes. You'll see that, following physicalism in very plausible and not even strict frameworks, a consciousness *is* identified through certain compounds and forces acting in empirically measured form. It's not the function that matters, it's what actually happens *as* the physical or at least empirically evident form of the mind. It's not as if a being of perceiving is such because of some function, what really matters is what's underneath that function.

So, no, if "synthetics" don't have physically similar enough processes *in* them, they're plausibly not the same kind of subjects, thus they're not experiencing the same holistic type of consciousness.

It's not the definition that counts, it's how the definition helps describe actual stuff.

#13
Seival
  • Members
  • 5 294 messages
Put a newborn child into a completely dark room with the minimum required to keep him alive (without any actions required from the child himself), and no contact with the outside world. Release him 20 years later, and observe... He can't speak, can't understand what he is looking at, doesn't know how to react to the environment and people around him. He is like a machine with no program and an almost empty data storage...

...A human is nothing more than a machine, built of organic materials. During the life cycle he/she gets all required programming and memories from the people surrounding him/her and the events he/she took part in. Society is the programmer. Events are the code. A person becomes a person not by him/herself, but by gaining all the required algorithms from outside.

Now tell me. What is the big difference if the person was built of plastic, steel and wire instead of organic materials? How is it different in terms of intelligence and physical capabilities, except that the person thinks much faster and is physically much stronger? Does it stop being an intelligent person because it requires electricity instead of organic food to keep functioning?

Edited by Seival, 22 December 2013 - 06:55.


#14
Kaiser Arian XVII
  • Members
  • 17 283 messages

eroeru wrote...


Thing is, a mind as consciousness coincides with very certain chemical processes. You'll see that, following physicalism in very plausible and not even strict frameworks, a consciousness *is* identified through certain compounds and forces acting in empirically measured form. It's not the function that matters, it's what actually happens *as* the physical or at least empirically evident form of the mind. It's not as if a being of perceiving is such because of some function, what really matters is what's underneath that function.


Descartes, Kant and Hegel are those who interest me. Random scientists and biologists? Nope.

#15
eroeru
  • Members
  • 3 269 messages
Well, "what's underneath that function" can easily be very universal, profound or whatnot.

All the same, if it isn't perceptually evident, you can't say a thing about its attributes, or about which necessary "parts" or conditions describe it better. That doesn't mean I must be, really narrow-mindedly, of the mind that "consciousness is simply neurons". That would be a category mistake. Consciousness as subjective view-point is in the viewing itself. Thus it can't be reduced to something of a different kind (of the pure objective empiricism).

Still, this type of consciousness can be studied empirically. This is where one must take into account what are the sufficient conditions where consciousness comes forth. Having a consciousness change in empirically evident manners after having its chemical processes or the brain as such altered means that there is a direct link between the matter and the conscious. You can still have the view that this type of matter is "elevated" or that the quality that makes it creative or whole is of another type of matter or something, taking dimensions or really whatever into account. But that is truly a talk that has little to no grounding to it. Even so, thoughts that go all deontological can be of merit.

Edited by eroeru, 22 December 2013 - 10:06.


#16
Sigma Tauri
  • Members
  • 2 675 messages

Seival wrote...
Put a newborn child into a completely dark room with the minimum required to keep him alive (without any actions required from the child himself), and no contact with the outside world. Release him 20 years later, and observe... He can't speak, can't understand what he is looking at, doesn't know how to react to the environment and people around him. He is like a machine with no program and an almost empty data storage...


Not accurate, and no one would treat him like a machine even if that were the case. A kid neglected like that is severely developmentally delayed and behaviorally/cognitively impaired. He is not empty data storage.

The dentist is a character who believes that he is the only being in existence to possess a mind. During a session with a patient (who cannot speak because his mouth is full of cotton buds and numb with pain-killer) he explains his reasoning to the patient for not believing that they are actually in possession of a mind. This is all while sticking a drill inside their mouth. He doesn't believe his patient can actually experience pain, but he administers the pain-killer anyway because it's an observed fact that sticking a drill inside someone's mouth usually causes them to scream and flinch if they don't have any pain-killer injected first.


Why is this guy allowed to practice?

Edited by monkeycamoran, 23 December 2013 - 12:55.


#17
JasonShepard
  • Members
  • 1 466 messages
Various responses to various people:

monkeycamoran wrote...

Rational dentist snip


Why is this guy allowed to practice?


Maybe his skill in dentistry is better than his bedside manner?
Still, you have to wonder how he found out that sticking a drill into someone's mouth without painkiller causes screaming and shouting...

******

Cyonan wrote...

You know, that Rational Dentist wasn't very rational.


I agree. :)
But he's useful for demonstrating the point that we can't be sure anyone else has a mind.

As for the free will thing, what happens if I were to create an AI capable of changing its own programming? Would those changes still be considered as being pre-determined?


Yes.
You've programmed it to make decisions about its own programming. How is it making these decisions?
Initially, it's making the decisions based on your initial programming. If it edits the program and changes how it makes decisions - that edit will still be based on the initial program.  Ultimately, everything still gets traced back to the initial program.

Assuming you haven't put any random number generators in there, if you let the AI run for a while, then reset it back to its initial state and let it run again... it will retrace its steps.
And if you have put random number generators in there, then what you have are random choices, not 'free' choices.
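That reset-and-replay claim can be sketched in a few lines (details invented for illustration, not anything from a real AI): a toy "self-modifying" agent that edits its own decision rule as it runs, yet replays exactly the same "choices" whenever it starts from the same state and seed.

```python
import random

# A toy 'self-modifying' AI (invented for illustration): it edits its own
# decision rule as it runs, yet restarting it from the same initial state
# (including the RNG seed) replays exactly the same 'choices'.

def run(seed, steps=5):
    rng = random.Random(seed)
    threshold = 0.5                  # the initial 'program'
    history = []
    for _ in range(steps):
        roll = rng.random()
        action = "walk" if roll > threshold else "wait"
        history.append(action)
        if action == "walk":         # 'self-modification': the rule that
            threshold += 0.1         # picks actions gets rewritten
    return history

assert run(42) == run(42)   # identical replay from identical state
```

Even though the program rewrites itself, every rewrite was itself determined by the starting state, so the whole run is reproducible.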

******

Kaiser Arian wrote...

@JasonShepard, well, thanks to the back button all the answer I wrote got deleted.


My sympathies - that's been happening to me a lot recently.

The short version:

I consider mind as a super complex operating system. The material ain't important to my field of study. What mind can do as a limitless world of possibilities, images and definitions is important.

Because robots can't do anything outside what they're programmed for, what they have isn't a sapient mind. They're superior to animals in thinking, but inferior or incomparable to them in many subjects.


I would argue that the human mind can't do anything outside what evolution programmed it to do. For example - can you envision a 4-dimensional cube-based pyramid? I know I can't. So even a sapient mind has limits.

In any case - to borrow a point Obadiah made earlier - the only thing stopping us from writing a program to accurately model a human brain is time and technology. It's theoretically possible - and we even have Einstein's brain in storage for the day that we can do it!
At that point, you have a computer program that can do everything a human brain can do. Would that computer program have a mind?
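For a flavour of what "modelling a brain" means in practice, here's the standard toy model of a single spiking neuron - a leaky integrate-and-fire unit. The parameters are illustrative, not biologically calibrated; a whole-brain model would need on the order of 86 billion of these, richly interconnected.

```python
# One leaky integrate-and-fire neuron - the standard toy model of a
# spiking brain cell. Parameters are illustrative, not biological.

def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Accumulate input, leak a little each step, and fire (then reset)
    whenever the membrane potential v crosses the threshold."""
    v, spikes = 0.0, []
    for current in inputs:
        v = v * leak + current
        if v >= threshold:
            spikes.append(True)   # the neuron fires a spike
            v = 0.0               # and resets
        else:
            spikes.append(False)
    return spikes

print(lif_neuron([0.6, 0.6, 0.0, 2.0]))   # [False, True, False, True]
```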

******

eroeru wrote...

If "synthetics" don't have physically similar enough processes *in* them, they're plausibly not the same kind of subjects, thus they're not experiencing the same holistic type of consciousness.


They may not be experiencing the same type of consciousness as us - in fact, I'm sure they won't be - but can we say that they don't have a consciousness at all?
To bring it closer to home - I'd argue that dogs and cats have minds, although I'm sure that those minds are completely unlike that of a human. That said, they're probably closer to a human mind than a synthetic would be.

Consciousness as subjective view-point is in the viewing itself. Thus it can't be reduced to something of a different kind (of the pure objective empiricism).


Now that's a way of thinking about it that I hadn't considered.
My starting point in observing the world around me is always the assumption that *I exist*. I can't begin to make observations if there isn't an 'I' to be making the observations. And the next step is to describe myself as a 'being of perspective' - to describe myself as a mind.

In doing so, have I already made it impossible to determine what a mind is?

Edited by JasonShepard, 23 December 2013 - 01:52.


#18
mybudgee
  • Members
  • 23 037 messages
Blade Runner

#19
Inquisitor Recon
  • Members
  • 11 810 messages

mybudgee wrote...
Blade Runner

Hmm, hunting synthetics might be a good career choice.

#20
Grand Admiral Cheesecake
  • Members
  • 5 704 messages
Seival you sooooooooo crazy.

But the Inquisitor has the right idea.

#21
ObserverStatus
  • Members
  • 19 046 messages

Seival wrote...
Now tell me. What is the big difference if the person was built of plastic, steel and wire instead of organic materials? How is it different in terms of intelligence and physical capabilities, except that the person thinks much faster and is physically much stronger? Does it stop being an intelligent person because it requires electricity instead of organic food to keep functioning?

What is the difference between a person built from plastic, steel and wire and a person made from organic materials? I don't know. I tried talking to a woman made from plastic, steel and wires, and I couldn't tell what it was, but something about her seemed a bit "off."

Edited by bobobo878, 23 December 2013 - 03:51.


#22
MassivelyEffective0730
  • Members
  • 9 230 messages
I don't think Seival understands developmental psychology or sociology.

Or biology for that matter. Or computer and mechanical engineering.

#23
Gravisanimi
  • Members
  • 10 081 messages
To answer these questions for synthetics, we must first answer them for ourselves.

But either way, there is no easy way to answer that or any related questions.

This is because of how a computer and a brain function. The computer follows problems one issue at a time, in a binary pattern, always 0 or 1, never .25 or .333... (unless we are talking quantum computing, which is theoretical at best, and comes with its own host of issues).

Meanwhile, in the human brain, we have parallel consciousness: conscious and unconscious processing. This cannot be achieved by connecting two computers, because the unconscious influences the conscious directly - plus a multitude of other reasons that would fly over people's heads and wreck my post.

We have chemicals that create different responses to stimuli, and on top of that we all have different responses to those chemicals. E.g. I get sleepy when consuming caffeine, a chemical widely known to make people alert.

If a computer were built to fix all of the unconscious/conscious interaction issues, given the ability to re-write its own programming based on experience from collected data, able to process multiple data streams at once, and to respond to external stimuli, then maybe...

...will I consider them halfway there.

Maybe... 25% chance

Edited by Gravisanimi, 23 December 2013 - 04:11.


#24
Cyonan
  • Members
  • 19 356 messages

JasonShepard wrote...

Yes.
You've programmed it to make decisions about its own programming. How is it making these decisions?
Initially, it's making the decisions based on your initial programming. If it edits the program and changes how it makes decisions - that edit will still be based on the initial program. Ultimately, everything still gets traced back to the initial program.

Assuming you haven't put any random number generators in there, if you let the AI run for a while, then reset it back to its initial state and let it run again... it will retrace its steps.
And if you have put random number generators in there, then what you have are random choices, not 'free' choices.


The thing is that I didn't program it to make decisions about its own programming, I merely gave it the ability to do so.

Let's assume that I put no restrictions on what it can change, and as a test I coded it to think that walking in front of cars is a good idea. After walking in front of one and getting damaged, the logical conclusion would be that you should no longer do that, yes?

So would it not be logical for the AI to change its programming so that it no longer wants to walk in front of cars? Is that based on my original programming, which told it the exact opposite? If it does change it, then out of what is the AI acting when it no longer walks in front of cars?

What about questions that don't have a definitive right and wrong answer? Do you think it would come to the same opinion on questions of morality every single time after a wipe? What if it was subjected to different conversations about said morality?

#25
mybudgee
  • Members
  • 23 037 messages
Also... Why are we so certain that we have a "soul"? Because Sarah Palin said so? Mom said so? James Brown? We give ourselves WAY too much collective credit...