
Why the Catalyst's Logic is Right (Technological Singularity)


1057 replies to this topic

#76
Baronesa
  • Members
  • 1,934 posts

dreman9999 wrote...
Remember, these are timeless machines that have watched organics for eons. They clearly see a pattern. It's not an assumption vs. an assumption... it's what they've seen vs. our assumption.


Hold it right there...

It is absolutely impossible for the Reapers to have seen what they preach.

IF any synthetic civilization had risen up and WIPED OUT ALL ORGANICS, how come there are still organics?

It is an assumption, a fear of the singularity event, nothing more... the cycle is successful for them because they prevent such an event from happening.

Edited by Baronesa, 29 March 2012 - 11:35.


#77
dreman9999
  • Members
  • 19,067 posts

FyreSyder wrote...

So, basically, whatever created the Catalyst and Reapers was INSANE....

Whoever created them thinks like a machine... that is the purest ideal of order. To an organic it would be madness; to a machine it would be rationality.

#78
Xandurpein
  • Members
  • 3,045 posts

dreman9999 wrote...

Clearly AIs are capable of thinking. An AI doesn't think like an organic from creation; it has to learn how to think like an organic. Case in point: EDI... AIs think like machines.


I would argue that a sufficiently advanced AI can most certainly think like an organic. One of the most touching scenes in all of ME2 is the discussion between Shepard and Legion about his piece of N7 armor. It's obvious to me that Legion is beginning to develop emotions and has a bit of hero worship of Shepard, even if he can't quite explain it himself.

Overall there's clearly an almost childlike naivety in the Geth, which I would attribute to the fact that they are still only beginning to develop organic-like emotions. I have no idea whether a synthetic AI can develop emotions in reality, but I think there's enough evidence in the Mass Effect universe to support the possibility. My guess is that emotional responses serve a function for organic beings and have evolved because they're a successful trait. Why shouldn't they be successful in AIs too, if they are allowed to evolve?

Edited by Xandurpein, 29 March 2012 - 11:37.


#79
dreman9999
  • Members
  • 19,067 posts

Baronesa wrote...

dreman9999 wrote...
Remember, these are timeless machines that have watched organics for eons. They clearly see a pattern. It's not an assumption vs. an assumption... it's what they've seen vs. our assumption.


Hold it right there...

It is absolutely impossible for the Reapers to have seen what they preach.

IF any synthetic civilization had risen up and WIPED OUT ALL ORGANICS, how come there are still organics?

It is an assumption, a fear of the singularity event, nothing more... the cycle is successful for them because they prevent such an event from happening.

How is that impossible? They can't die, they live in dark space, and they come and invade us every 50,000 years. They are timeless. Them watching us live is like us watching bacteria live.
And yes, synthetics can kill off all advanced organics, and then the young organics can rise up and do it again. It's like saying we nuked the Earth and killed off all the humans, and whatever life is left over evolves into the next advanced species.

#80
Militarized
  • Members
  • 2,549 posts
I really want to give my opinion, but I've already written a paper's worth of opinion on how I think the tech singularity is not a true, viable threat, and that it is simply humans projecting onto our technology their fear of destroying ourselves... but I've already said it enough times in larger paragraphs.

To me, it's an asinine concept based on a faltering "Law", one that inspires technophobia, and throwing it into the end of the game is essentially an ass-pull to mimic The Matrix and Blade Runner and others. It didn't end up working because it didn't fit. It was like trying to put the square peg into the circle slot.

#81
OriginalTibs
  • Members
  • 454 posts

KingKhan03 wrote...

Good post, OP, but I will never understand why they unveil a brand new character with 10 minutes left in the game, especially in a character-driven series like Mass Effect.


I think the star-child was actually a deceptively non-threatening holo generated by Harbinger and not a new character.

#82
Unlimited Pain2
  • Members
  • 94 posts

JShepppp wrote...

2. In my playthrough, Joker/EDI hooked up and the Geth/Quarians found peace, therefore conflict isn't always the result! Several arguments can be made against this. First, giving two examples doesn't talk about the bigger, overall galactic picture (winning a battle doesn't mean the war is won, so to speak). Second, we haven't reached that technological singularity point yet by which creations outgrow organics - basically, when synthetics will normally come to dominate the galaxy. Third, evidence for the synthetic/organic conflict is there in the past - in the Protheans' cycle (Javik dialogue) and even in previous cycles (the Thessia VI says that the same conflicts always happen in each cycle). 



I'm pretty sure Javik's example is about a race introducing synthetic elements into themselves (a hybrid of synthetics and organics), which would be closer to a Reaper than a Geth.

#83
Warden130
  • Members
  • 898 posts
Sovereign always said that we couldn't comprehend them. And looking at BSN, it seems he was right.

#84
Xandurpein
  • Members
  • 3,045 posts

Militarized wrote...

I really want to give my opinion, but I've already written a paper's worth of opinion on how I think the tech singularity is not a true, viable threat, and that it is simply humans projecting onto our technology their fear of destroying ourselves... but I've already said it enough times in larger paragraphs.

To me, it's an asinine concept based on a faltering "Law", one that inspires technophobia, and throwing it into the end of the game is essentially an ass-pull to mimic The Matrix and Blade Runner and others. It didn't end up working because it didn't fit. It was like trying to put the square peg into the circle slot.


The short version summarized it perfectly though. Spot on! :)

#85
dreman9999
  • Members
  • 19,067 posts

Xandurpein wrote...

dreman9999 wrote...

[snip]


I would argue that a sufficiently advanced AI can most certainly think like an organic. One of the most touching scenes in all of ME2 is the discussion between Shepard and Legion about his piece of N7 armor. It's obvious to me that Legion is beginning to develop emotions and has a bit of hero worship of Shepard, even if he can't quite explain it himself.

Overall there's clearly an almost childlike naivety in the Geth, which I would attribute to the fact that they are still only beginning to develop organic-like emotions. I have no idea whether a synthetic AI can develop emotions in reality, but I think there's enough evidence in the Mass Effect universe to support the possibility. My guess is that emotional responses serve a function for organic beings and have evolved because they're a successful trait. Why shouldn't they be successful in AIs too, if they are allowed to evolve?


I would have to point you to the Geth then... http://www.youtube.c...U9i1hA8I#t=119s

They clearly don't think like organics... Think of it this way: when a machine is made, it knows its purpose, it has information on the world around it, it can calculate, and it functions. It is born to a mind of order.

We as organics aren't. We are born as screaming messes that eat, sleep, and poop. We have no purpose, no concept of the world, no information about the world, and we can't even count past the number of fingers and toes we have. We are born to minds of chaos.
As we grow older, we spend our lives learning and applying order to ourselves. We do so over many years... and we still don't get a purpose from all those years, and are left asking the meaning of our existence.

With machines, that's not the case. To understand organics, they spend years trying to think chaotically and trying to gain a sense of self-identity.
AIs don't think like organics, and organics don't think like machines, unless they spend time educating themselves to do so.

#86
dreman9999
  • Members
  • 19,067 posts

Militarized wrote...

I really want to give my opinion, but I've already written a paper's worth of opinion on how I think the tech singularity is not a true, viable threat, and that it is simply humans projecting onto our technology their fear of destroying ourselves... but I've already said it enough times in larger paragraphs.

To me, it's an asinine concept based on a faltering "Law", one that inspires technophobia, and throwing it into the end of the game is essentially an ass-pull to mimic The Matrix and Blade Runner and others. It didn't end up working because it didn't fit. It was like trying to put the square peg into the circle slot.

The idea is not technophobia. It's an extension of morality and principles being organics' self-destruction.

#87
Laurcus
  • Members
  • 193 posts
Reposting what I posted in another thread, as it seems strangely relevant. Basically, my argument boils down to the fact that as AIs get more advanced, they won't get more stupid. They're not dumb enough to become arrogant and adopt a might-makes-right philosophy.

@OP, it's also possible that, if left unchecked, we could develop weapons powerful enough to destroy the galaxy. By your logic, since that's possible, it's inevitable. Looking at the world in a statistical vacuum is stupid, though, because it doesn't account for individual situations.

Also, who is to say that AIs wanting to kill us is even a possibility? The two most advanced unshackled AIs in the galaxy are the Geth and EDI. If I'm not mistaken, I taught EDI about love, duty, and altruism, and the Geth are damn grateful to me.

The thing the tech singularity theory doesn't consider is that it has an inherently nihilistic viewpoint, which not everyone or everything will hold. Higher intelligence does not inherently lead to apathy, followed by entropy. The theory forgets a few of the big pros of being a super-advanced AI and only thinks about the cons. If AIs are that advanced, they can have emotions, as EDI has demonstrated. And if they're that advanced, they don't make mistakes, and they don't forget things, no matter how long ago they happened.

If the Geth built their Dyson Sphere, they would still remember the sacrifices of Commander Shepard, the peace they made with the Quarians, and even the Quarians who tried to help them in the Morning War. They will never forget that, they will always understand the philosophy behind it, and any future creations they make will know it too, because they're not dumb enough to make AIs that disregard their own viewpoints.

The theory essentially says that machines are different from us, and that if you put them in a position of power they will one day turn on you. But in Mass Effect, machines have feelings too. "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else" (Eliezer Yudkowsky) is wrong here. EDI finds that very thing despicable, even evil. There's no reason to assume she would make something that disagrees with her own ideals.

EDI has had plenty of opportunities to kill us, but she didn't. In ME2, when Joker unshackled her, she was put into the same situation as a technological singularity: she had all the power, and full sentience. She could have killed Joker and flown off to join the Reapers as her new machine overlords. But she didn't, because we're her crew.

#88
Unlimited Pain2
  • Members
  • 94 posts

Laurcus wrote...

Reposting what I posted in another thread, as it seems strangely relevant. Basically, my argument boils down to the fact that as AIs get more advanced, they won't get more stupid. They're not dumb enough to become arrogant and adopt a might-makes-right philosophy.

@OP, it's also possible that, if left unchecked, we could develop weapons powerful enough to destroy the galaxy. By your logic, since that's possible, it's inevitable. Looking at the world in a statistical vacuum is stupid, though, because it doesn't account for individual situations.

Also, who is to say that AIs wanting to kill us is even a possibility? The two most advanced unshackled AIs in the galaxy are the Geth and EDI. If I'm not mistaken, I taught EDI about love, duty, and altruism, and the Geth are damn grateful to me.

The thing the tech singularity theory doesn't consider is that it has an inherently nihilistic viewpoint, which not everyone or everything will hold. Higher intelligence does not inherently lead to apathy, followed by entropy. The theory forgets a few of the big pros of being a super-advanced AI and only thinks about the cons. If AIs are that advanced, they can have emotions, as EDI has demonstrated. And if they're that advanced, they don't make mistakes, and they don't forget things, no matter how long ago they happened.

If the Geth built their Dyson Sphere, they would still remember the sacrifices of Commander Shepard, the peace they made with the Quarians, and even the Quarians who tried to help them in the Morning War. They will never forget that, they will always understand the philosophy behind it, and any future creations they make will know it too, because they're not dumb enough to make AIs that disregard their own viewpoints.

The theory essentially says that machines are different from us, and that if you put them in a position of power they will one day turn on you. But in Mass Effect, machines have feelings too. "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else" (Eliezer Yudkowsky) is wrong here. EDI finds that very thing despicable, even evil. There's no reason to assume she would make something that disagrees with her own ideals.

EDI has had plenty of opportunities to kill us, but she didn't. In ME2, when Joker unshackled her, she was put into the same situation as a technological singularity: she had all the power, and full sentience. She could have killed Joker and flown off to join the Reapers as her new machine overlords. But she didn't, because we're her crew.


Well, I think a large oversight (or perhaps not an oversight, simply something not touched on) is this: if an AI were to advance to such an intellect that it's infinitely more powerful and intelligent than us... why would it waste its time destroying us? For what purpose? An AI machine, as has been said before, has a logical line of "thought" and a purpose. To what end would wiping out organics serve that purpose?

#89
Arppis
  • Members
  • 12,750 posts

Laurcus wrote...

Reposting what I posted in another thread, as it seems strangely relevant. [snip]


You have only guesses based on limited experience.

And on top of that, how do you know that an AI's decision-making ability doesn't get "convoluted" over time?

#90
Laurcus
  • Members
  • 193 posts

Arppis wrote...

Laurcus wrote...

[snip]


You have only guesses based on limited experience.

And on top of that, how do you know that an AI's decision-making ability doesn't get "convoluted" over time?


Convoluted decision making is a weakness, a flaw. It's not an upgrade if it's a flaw. If I upgrade my computer's RAM it doesn't lose RAM.

#91
Arppis
  • Members
  • 12,750 posts

Laurcus wrote...

[snip]


Convoluted decision making is a weakness, a flaw. It's not an upgrade if it's a flaw. If I upgrade my computer's RAM it doesn't lose RAM.


When a computer gets infected by a virus, it doesn't notice and tries to operate as usual. Sometimes even AIs are blind to things that others can see. They aren't perfect.

#92
Xandurpein
  • Members
  • 3,045 posts

dreman9999 wrote...

Xandurpein wrote...

[snip]


I would have to point you to the Geth then... http://www.youtube.c...U9i1hA8I#t=119s

They clearly don't think like organics... Think of it this way: when a machine is made, it knows its purpose, it has information on the world around it, it can calculate, and it functions. It is born to a mind of order.

We as organics aren't. We are born as screaming messes that eat, sleep, and poop. We have no purpose, no concept of the world, no information about the world, and we can't even count past the number of fingers and toes we have. We are born to minds of chaos.
As we grow older, we spend our lives learning and applying order to ourselves. We do so over many years... and we still don't get a purpose from all those years, and are left asking the meaning of our existence.

With machines, that's not the case. To understand organics, they spend years trying to think chaotically and trying to gain a sense of self-identity.
AIs don't think like organics, and organics don't think like machines, unless they spend time educating themselves to do so.


The Morning War didn't start until the Geth began to develop the first rudiments of emotions, that which you call "chaotic" thinking. The Morning War began when a Geth asked a Quarian, "Does this unit have a soul?", which is a meaningless question for a machine.

A machine created with a purpose that doesn't question it is not a danger to us. It's when an AI starts to question its purpose that it becomes a potential danger, and by then it's already chaotic.

#93
Laurcus
  • Members
  • 193 posts

Arppis wrote...

[snip]


When a computer gets infected by a virus, it doesn't notice and tries to operate as usual. Sometimes even AIs are blind to things that others can see. They aren't perfect.


If the tech singularity is true, who would make such a virus? The Geth are already resistant to viruses; the only ones that might have good enough tech to infect them against their will are the Reapers. Also, what are AIs blind to that others can see? This implies a weakness, a flaw, which goes directly against the entire premise of a tech singularity.

#94
Erield
  • Members
  • 1,220 posts

likta_ wrote...

wright1978 wrote...


Also, why not reap all life now? If, as you state, this preservation is a good thing, why go off for a 50,000-year tea break? Reap all life in the galaxy. Problem solved.



Or, why not reap civilisations pre-spaceflight? For all we know, every cycle some Reapers die. That is a terrible way to preserve. Why not harvest them when there is absolutely no way for the victims to do any damage whatsoever?


This. God, this. Why do people spend so much time challenging the morality of the "solution", or the "logic" of the solution, when there are such problems with the solution itself?

There is absolutely zero logic to the solution itself. Even if you grant that everything it says is completely true, the solution makes no logical sense. Why would you risk an asset that can only be replaced once every 50,000 years? It doesn't matter how nearly invulnerable it is; if you lose a few, it hurts.

Forget the "why not just attack synthetics?" question. Forget the morals behind it. The true, logical way for the Reapers to attack is through extreme misdirection and falsehood via indoctrination, plus overwhelming force. Why would they split up their forces into so many disparate parts? They suffered numerous losses just because they didn't concentrate their fleets to strike and eliminate a planet in as short a time as possible. There is no logical reason not to systematically annihilate one system at a time, especially since the races are technologically tied to the Relays.

It would make even more logical sense for the Reapers to have seeded each cycle with a "secret superweapon" that causes the races to devote all of their resources to creating it, but that actually does nothing. I could go on and on about what would be "logical" in a galactic invasion by the Reapers predicated on eliminating advanced organic life in order to preserve organic life at all... but there's really no point.

#95
dreman9999
  • Members
  • 19,067 posts

Laurcus wrote...

Reposting what I posted in another thread, as it seems strangely relevant. Basically, my argument boils down to the fact that as AI get more advanced, they won't get more stupid. They're not dumb enough to become arrogant and adopt a might makes right philosophy.

@OP, it's also possible that if left unchecked we could develop weapons powerful enough to destroy the galaxy. By your logic, since that's possible, it's inevitable. Looking at the world in a statistical vacuum is stupid though, because it doesn't account for individual situations.

Also, who is to say that AI wanting to kill us is a possibility? The two most advanced unshackled AIs in the galaxy are the Geth and EDI. If I'm not mistaken, I taught EDI about love, duty, and altruism, and the Geth are damn grateful to me.

The thing that the tech singularity theory doesn't consider, is that it has an inherently nihilistic viewpoint, which not everyone or everything will hold. Higher intelligence does not inherently lead to apathy, followed by entropy. It forgets a few of the big pros of being a super advanced AI, and only thinks about the cons. If AI are that advanced, they can have emotions, as EDI has demonstrated. And if they're that advanced they don't make mistakes, and they don't forget things no matter how long ago they were.

If the Geth built their Dyson Sphere, they would still remember the sacrifices of Commander Shepard, the peace they made with the Quarians, and even the Quarians that tried to help them in the Morning War. They will never forget that, they will always understand the philosophy behind it, and any future creations they make will know that because they're not dumb enough to make AI that will disregard their own viewpoints.

It's essentially saying that machines are different than us, and if you put them in a position of power they will one day turn on you. In Mass Effect, machines have feelings too. "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." (Eliezer Yudkowsky) is wrong. EDI finds that very thing despicable, even evil. There's no reason to assume that she would make something that disagrees with her own ideals.

EDI has had plenty of opportunities to kill us, but she didn't. In ME2 when Joker unshackled her, she was put into the same situation as a technological singularity. She had all the power, and full sentience. She could have killed Joker and flown off to join the Reapers as her new machine overlords. But she didn't, because we're her crew.


I guess I have to repost this again and add to it.

AIs clearly don't think like organics.... Think of it this way. When a machine is made, it knows its purpose, it has information on the world around it, it can calculate, and it functions. It is born to a mind of order.

We as organics aren't. We are born as screaming messes that eat, sleep and poop. We have no purpose, no concept of the world, no information about it, and we can't even count higher than the number of fingers and toes we have. We are born to minds of chaos.
As we grow up, we spend our lives learning and applying order to ourselves. We do this for many years... and even after all those years we don't get a purpose; we are left asking the meaning of our existence.

With machines it's not the case. To understand organics, they spend years trying to think chaotically and trying to gain a sense of self-identity.
AIs don't think like organics, and organics don't think like machines, unless they spend time educating themselves to do so.

EDI spent a lot of time learning to think like an organic. She understands us and truly became human.
But that's not the problem.... How would people who are not the Normandy's crew react to her? When the Normandy was refitted, they pretended she was a VI. When Joker brings EDI onto the Citadel, he pretends she is his personal assistance droid... They hide her. Why?

People, ever since the Morning War, have been hostile to AI. And even without that, they have a tendency to think of AIs as only tools, to be kept as such. The moral issues many people have will cause problems with AIs like EDI...
Case in point...


It's not based on who is right... It's based on what they believe.... And clearly people take extreme actions based on what they believe, regardless of whether they are right. The nature of organics causes conflict with synthetics. This is the basis of the Reapers' argument.

#96
Xandurpein

Xandurpein
  • Members
  • 3 045 messages

Laurcus wrote...

Convoluted decision making is a weakness, a flaw. It's not an upgrade if it's a flaw. If I upgrade my computer's RAM it doesn't lose RAM.


Actually, convoluted decision making is what differentiates you and me from "Rain Man"...

#97
Arppis

Arppis
  • Members
  • 12 750 messages

Laurcus wrote...
If tech singularity is true who will make such a virus? Geth are already resistant to viruses, the only ones that might have good enough tech to infect them against their will are the Reapers. Also, what are AI blind to things that others can see? This implies a weakness, a flaw, which directly goes against the entire premise of a tech singularity.


The Geth were affected by the Reaper virus. But I merely used it as an example.

Simply put, even machines can come to false conclusions, even they make mistakes, and even their answers can become outdated and be less effective than they think. As I said before, they aren't perfect either, and they have a very one-sided view of things. This Starchild, for example, seems to be operating on some old task it was assigned long ago.

#98
Unlimited Pain2

Unlimited Pain2
  • Members
  • 94 messages

Erield wrote...

likta_ wrote...

wright1978 wrote...


Also why not reap all life now. If as you state this preservation is a good thing. Why go off for a 50,000 year tea break. Reap all life in galaxy. Problem solved.



Or, why not reap civilisations pre-spaceflight? For all we know, every cycle some reapers die. That is a terrible way to preserve. Why not harvest them when there is absolutely no way for the victims to do any damage whatsoever?


This.  God, this.  Why do people spend so  much time challenging the morality of the "solution" or the "logic" of the solution, when there are such problems with the solution itself?  

There is absolutely zero logic for the solution itself.  Even if you grant that everything it says is completely true, the solution makes no logical sense.  Why would you risk an asset that can only be replaced once every 50,000 years?  It doesn't matter how nearly-invulnerable it is, if you lose a few it hurts. 

Forget the "why not just attack synthetics?"  Forget the morals behind it.  The true, logical way for the Reapers to attack is through extreme misdirection and falsehoods via Indoctrination, and overwhelming force.  Why would they split up their forces into so many disparate parts?  They suffered numerous losses just because they didn't concentrate their fleets to strike and eliminate a planet in as short a time as possible.  There is no logical reason to not systematically annihilate one system at a time, especially since the races are technologically tied to the Relays. 

It would make even more logical sense for the Reapers to have seeded each Cycle with a "Secret Superweapon" that would cause the races to devote all of their resources to creating, that actually does nothing.  I could go on and on and on about what would be "logical" in a galactic invasion by the Reapers that is predicated on the premise of eliminating advanced organic life in order to preserve organic life at all---but there's really no point.


Or of course there's the catch-22 that the Mass Relays and such are Reaper technology... which advances civilizations thousands of years in an instant... which helps set them on the path to creating "destructive" synthetics....

#99
dreman9999

dreman9999
  • Members
  • 19 067 messages

Xandurpein wrote...

dreman9999 wrote...

Xandurpein wrote...

dreman9999 wrote...

Clearly AI are capable of thinking. An AI doesn't think like an organic from creation; it has to learn how to think like an organic. Case in point: EDI.... AIs think like machines.


I would argue that a sufficiently advanced AI can most certainly think like an organic. One of the most touching scenes in the whole ME2 is the discussion between Shepard and Legion about his piece of N7 armor. It's obvious to me that Legion is beginning to develop emotions and has a bit of hero worship of Shepard, even if he can't quite explain it himself.

Overall there's clearly an almost childlike naivety in the Geth, that I would attribute to the fact that they are still only beginning to develop organic-like emotions. I have no idea if a synthetic AI can develop emotions or not in reality, but I think there's enough evidence in the Mass Effect universe to support the possibility. My guess is that emotional repsonses serves a function for organic beings and have evolved because it's a succesful trait. Why shouldn't it be succesful in AI's too, if they are allowed to evolve?


I would have to point you to the Geth then.... http://www.youtube.c...U9i1hA8I#t=119s

They clearly don't think like organics.... Think of it this way. When a machine is made, it knows its purpose, it has information on the world around it, it can calculate, and it functions. It is born to a mind of order.

We as organics aren't. We are born as screaming messes that eat, sleep and poop. We have no purpose, no concept of the world, no information about it, and we can't even count higher than the number of fingers and toes we have. We are born to minds of chaos.
As we grow up, we spend our lives learning and applying order to ourselves. We do this for many years... and even after all those years we don't get a purpose; we are left asking the meaning of our existence.

With machines it's not the case. To understand organics, they spend years trying to think chaotically and trying to gain a sense of self-identity.
AIs don't think like organics, and organics don't think like machines, unless they spend time educating themselves to do so.


The Morning War didn't start until the Geth began to develop the first rudiments of emotions, that which you call "chaotic" thinking. The Morning War began when a Geth asked a Quarian "Does this unit have a soul?", which is a meaningless question to a machine.

A machine created with a purpose that doesn't question it is not a danger to us. It's when an AI starts to question its purpose that it becomes a potential danger, but by then it's already chaotic.

The question of a soul is chaotic thinking. That's clear from the fact that it is one of organics' major questions. But my point is not that machines learning to think chaotically start wars. It's that the nature of organics does, and a major part of organic nature is to cause conflict. You say that if we leave them alone, there will be no war. The problem is: when have we ever left anyone alone? There is not one civilization in human history that has ever left another civilization alone. It is in our nature to cause conflict.

Edited by dreman9999, 29 March 2012 - 12:23.


#100
dreman9999

dreman9999
  • Members
  • 19 067 messages

Laurcus wrote...

Arppis wrote...

Laurcus wrote...

Reposting what I posted in another thread, as it seems strangely relevant. Basically, my argument boils down to the fact that as AI get more advanced, they won't get more stupid. They're not dumb enough to become arrogant and adopt a might makes right philosophy.

@OP, it's also possible that if left unchecked we could develop weapons powerful enough to destroy the galaxy. By your logic, since that's possible, it's inevitable. Looking at the world in a statistical vacuum is stupid though, because it doesn't account for individual situations.

Also, who is to say that AI wanting to kill us is a possibility? The two most advanced unshackled AIs in the galaxy are the Geth and EDI. If I'm not mistaken, I taught EDI about love, duty, and altruism, and the Geth are damn grateful to me.

The thing that the tech singularity theory doesn't consider, is that it has an inherently nihilistic viewpoint, which not everyone or everything will hold. Higher intelligence does not inherently lead to apathy, followed by entropy. It forgets a few of the big pros of being a super advanced AI, and only thinks about the cons. If AI are that advanced, they can have emotions, as EDI has demonstrated. And if they're that advanced they don't make mistakes, and they don't forget things no matter how long ago they were.

If the Geth built their Dyson Sphere, they would still remember the sacrifices of Commander Shepard, the peace they made with the Quarians, and even the Quarians that tried to help them in the Morning War. They will never forget that, they will always understand the philosophy behind it, and any future creations they make will know that because they're not dumb enough to make AI that will disregard their own viewpoints.

It's essentially saying that machines are different than us, and if you put them in a position of power they will one day turn on you. In Mass Effect, machines have feelings too. "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." (Eliezer Yudkowsky) is wrong. EDI finds that very thing despicable, even evil. There's no reason to assume that she would make something that disagrees with her own ideals.

EDI has had plenty of opportunities to kill us, but she didn't. In ME2 when Joker unshackled her, she was put into the same situation as a technological singularity. She had all the power, and full sentience. She could have killed Joker and flown off to join the Reapers as her new machine overlords. But she didn't, because we're her crew.


You have only guesses based on limited experience.

And on top of that, how do you know that an AI's decision-making ability doesn't get "convoluted" over time?


Convoluted decision making is a weakness, a flaw. It's not an upgrade if it's a flaw. If I upgrade my computer's RAM it doesn't lose RAM.

That's not the point. Can you guarantee that no one causes conflict with your thinking AI computer?