
***SPOILER*** The Origin of the Reapers is silly (and a paradox). ***SPOILER***


232 replies to this topic

#201
Sylvanpyxie
  • Members
  • 1 036 messages

It is not a paradox.

I stand corrected.

#202
Aesieru
  • Members
  • 4 201 messages

Treopod wrote...

GracefulChicken wrote...

Ok, that's fine Wikipedia research, but it doesn't nearly scratch the surface. The fact is, by the time we begin augmenting our own intelligence through the typical "GNR" route of the tech-singularity theory, AIs will already be in existence. The knowledge to create AIs would come before augmenting our own systems. If anything, AIs would be used as a research tool for augmenting ourselves. It may not be necessary, that I'll reluctantly admit, but it's most likely AI would come before the singularity event itself.


AIs need to be created by us, and if we know the dangers of them and have been warned, and we know that there exists an alternative with just as much potential for intelligence as sentient AIs have, then I don't see why we would ever pursue the development of such AIs any longer, so I would say in that case the likelihood of such an AI existing before we achieve an organic tech singularity is small.


An AI which will inevitably exist can at some point create another AI; that is another of the dangers.

#203
Aesieru
  • Members
  • 4 201 messages

Sylvanpyxie wrote...

It is not a paradox.

I stand corrected.


I too cannot tell whether others are being sarcastic or not.

#204
Sylvanpyxie
  • Members
  • 1 036 messages
It was not sarcasm.

Clarification is a wonderful thing.

Edited by Sylvanpyxie, 05 March 2012 - 02:12.


#205
Aesieru
  • Members
  • 4 201 messages

Sylvanpyxie wrote...

It was not sarcasm.

Clarification is a wonderful thing.


People should use red text when they are sarcastic and no colors when they aren't.

#206
Treopod
  • Members
  • 81 messages

Aesieru wrote...

Sylvanpyxie wrote...

Is 'paradox' the right term in this case? I keep forgetting how it is misused.

A paradox is a logical statement or group of statements that lead to a contradiction or a situation which (if true) defies logic or reason.

The Reapers kill organics to protect organics.


The Reapers prevent organic races from creating artificial intelligences capable of becoming runaway. They step in when it happens, or when the Vanguard's reconnaissance shows that it is happening, has happened, or is dangerously close to happening. To prevent the exhaustion of resources through constant expansion, and to keep organics from becoming unnecessary and thus being exterminated and rendered extinct by said runaway intelligences, they eradicate everything related to them and allow the galaxy to repopulate anew.

It is not a paradox.

The Reapers are not AIs; they are sapient constructs that maintain many programs to form a nation inside an embryo made of billions of organics of the species they Reaped.


But can you really call the Reapers sentient or sapient when they lack free will? They are controlled by programming/the Guardian, so while they are probably not AIs, because they descend from organic minds, they are still not sentient, I think.

#207
elm
  • Members
  • 101 messages
The Reapers should've been an intergalactic-faring race conquering our galaxy every 50,000 years. Just for sport...

#208
Treopod
  • Members
  • 81 messages

Aesieru wrote...

Treopod wrote...

GracefulChicken wrote...

Ok, that's fine Wikipedia research, but it doesn't nearly scratch the surface. The fact is, by the time we begin augmenting our own intelligence through the typical "GNR" route of the tech-singularity theory, AIs will already be in existence. The knowledge to create AIs would come before augmenting our own systems. If anything, AIs would be used as a research tool for augmenting ourselves. It may not be necessary, that I'll reluctantly admit, but it's most likely AI would come before the singularity event itself.


AIs need to be created by us, and if we know the dangers of them and have been warned, and we know that there exists an alternative with just as much potential for intelligence as sentient AIs have, then I don't see why we would ever pursue the development of such AIs any longer, so I would say in that case the likelihood of such an AI existing before we achieve an organic tech singularity is small.


An AI which will inevitably exist can at some point create another AI; that is another of the dangers.


Why would a sentient AI inevitably exist when I have shown you a path which makes sentient AIs useless to us?

#209
Aesieru
  • Members
  • 4 201 messages

Treopod wrote...

Aesieru wrote...

Treopod wrote...

GracefulChicken wrote...

Ok, that's fine Wikipedia research, but it doesn't nearly scratch the surface. The fact is, by the time we begin augmenting our own intelligence through the typical "GNR" route of the tech-singularity theory, AIs will already be in existence. The knowledge to create AIs would come before augmenting our own systems. If anything, AIs would be used as a research tool for augmenting ourselves. It may not be necessary, that I'll reluctantly admit, but it's most likely AI would come before the singularity event itself.


AIs need to be created by us, and if we know the dangers of them and have been warned, and we know that there exists an alternative with just as much potential for intelligence as sentient AIs have, then I don't see why we would ever pursue the development of such AIs any longer, so I would say in that case the likelihood of such an AI existing before we achieve an organic tech singularity is small.


An AI which will inevitably exist can at some point create another AI; that is another of the dangers.


Why would a sentient AI inevitably exist when I have shown you a path which makes sentient AIs useless to us?


We are already developing AIs.

Our own civilization, as well as the human civilization in the game, has already begun down that path.

#210
Treopod
  • Members
  • 81 messages

Aesieru wrote...

Treopod wrote...

Aesieru wrote...

Treopod wrote...

GracefulChicken wrote...

Ok, that's fine Wikipedia research, but it doesn't nearly scratch the surface. The fact is, by the time we begin augmenting our own intelligence through the typical "GNR" route of the tech-singularity theory, AIs will already be in existence. The knowledge to create AIs would come before augmenting our own systems. If anything, AIs would be used as a research tool for augmenting ourselves. It may not be necessary, that I'll reluctantly admit, but it's most likely AI would come before the singularity event itself.


AIs need to be created by us, and if we know the dangers of them and have been warned, and we know that there exists an alternative with just as much potential for intelligence as sentient AIs have, then I don't see why we would ever pursue the development of such AIs any longer, so I would say in that case the likelihood of such an AI existing before we achieve an organic tech singularity is small.


An AI which will inevitably exist can at some point create another AI; that is another of the dangers.


Why would a sentient AI inevitably exist when I have shown you a path which makes sentient AIs useless to us?


We are already developing AIs.

Our own civilization, as well as the human civilization in the game, has already begun down that path.


Yes, but after the Reaper incident they would systematically destroy the existing AIs, learn from that mistake, and take another path instead that doesn't require sentient AIs, like the one I'm proposing.

And in the real-life timeline, if we actually are stupid enough to make actual AIs, I would think that we are prepared enough to destroy them before they turn on us, and from that we would learn not to pursue it anymore.

#211
GracefulChicken
  • Members
  • 556 messages

Treopod wrote...

Aesieru wrote...

Treopod wrote...

GracefulChicken wrote...

Ok, that's fine Wikipedia research, but it doesn't nearly scratch the surface. The fact is, by the time we begin augmenting our own intelligence through the typical "GNR" route of the tech-singularity theory, AIs will already be in existence. The knowledge to create AIs would come before augmenting our own systems. If anything, AIs would be used as a research tool for augmenting ourselves. It may not be necessary, that I'll reluctantly admit, but it's most likely AI would come before the singularity event itself.


AIs need to be created by us, and if we know the dangers of them and have been warned, and we know that there exists an alternative with just as much potential for intelligence as sentient AIs have, then I don't see why we would ever pursue the development of such AIs any longer, so I would say in that case the likelihood of such an AI existing before we achieve an organic tech singularity is small.


An AI which will inevitably exist can at some point create another AI; that is another of the dangers.


Why would a sentient AI inevitably exist when I have shown you a path which makes sentient AIs useless to us?


Because trends in societal growth don't just hit a stopping point at the first sign of a bad idea. I can definitely post about 15 graphs that show the exponential growth of information technology. It's not just going to stop when morals get in the way; it never has before. To even keep up with the increase in new tech knowledge, you'd need to augment human intelligence with a form of that same tech. And this is way before a full-blown sentient AI would be possible, I think. Even by 2029, it's likely we'll have some sort of tech inside our systems at all times. That tech increases exponentially, as I've shown.
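
To put rough numbers on that exponential claim, here is a minimal back-of-the-envelope sketch, assuming a Moore's-law-style doubling every two years; the two-year period and the 2012 baseline are illustrative assumptions, not figures from the thread:

```python
# Compounding capability under a fixed doubling period.
# ASSUMPTIONS: 2-year doubling, 2012 baseline -- illustrative only.
BASELINE_YEAR = 2012
DOUBLING_PERIOD_YEARS = 2.0

def growth_factor(year: int) -> float:
    """Multiple of baseline capability reached by `year`."""
    return 2.0 ** ((year - BASELINE_YEAR) / DOUBLING_PERIOD_YEARS)

for year in (2019, 2029, 2049):
    print(f"{year}: ~{growth_factor(year):,.0f}x the {BASELINE_YEAR} level")
# 2019: ~11x, 2029: ~362x, 2049: ~370,728x -- steady doubling is why
# those graphs look so dramatic, whether or not the trend holds.
```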

#212
Aesieru
  • Members
  • 4 201 messages

Treopod wrote...

Aesieru wrote...

Sylvanpyxie wrote...

Is 'paradox' the right term in this case? I keep forgetting how it is misused.

A paradox is a logical statement or group of statements that lead to a contradiction or a situation which (if true) defies logic or reason.

The Reapers kill organics to protect organics.


The Reapers prevent organic races from creating artificial intelligences capable of becoming runaway. They step in when it happens, or when the Vanguard's reconnaissance shows that it is happening, has happened, or is dangerously close to happening. To prevent the exhaustion of resources through constant expansion, and to keep organics from becoming unnecessary and thus being exterminated and rendered extinct by said runaway intelligences, they eradicate everything related to them and allow the galaxy to repopulate anew.

It is not a paradox.

The Reapers are not AIs; they are sapient constructs that maintain many programs to form a nation inside an embryo made of billions of organics of the species they Reaped.


But can you really call the Reapers sentient or sapient when they lack free will? They are controlled by programming/the Guardian, so while they are probably not AIs, because they descend from organic minds, they are still not sentient, I think.


No, they aren't. Review the Guardian conversation, dialogue, and cutscenes in the game again: he doesn't control them so much as remind them of their prime directive. Of course, he does have a nifty abort button for special situations, but they come back 50k years later.

#213
Nachtritter76
  • Members
  • 206 messages
http://www.oxmonline...they-went-along

#214
xtorma
  • Members
  • 5 714 messages

Aesieru wrote...

Treopod wrote...

Aesieru wrote...

Treopod wrote...

GracefulChicken wrote...

Ok, that's fine Wikipedia research, but it doesn't nearly scratch the surface. The fact is, by the time we begin augmenting our own intelligence through the typical "GNR" route of the tech-singularity theory, AIs will already be in existence. The knowledge to create AIs would come before augmenting our own systems. If anything, AIs would be used as a research tool for augmenting ourselves. It may not be necessary, that I'll reluctantly admit, but it's most likely AI would come before the singularity event itself.


AIs need to be created by us, and if we know the dangers of them and have been warned, and we know that there exists an alternative with just as much potential for intelligence as sentient AIs have, then I don't see why we would ever pursue the development of such AIs any longer, so I would say in that case the likelihood of such an AI existing before we achieve an organic tech singularity is small.


An AI which will inevitably exist can at some point create another AI; that is another of the dangers.


Why would a sentient AI inevitably exist when I have shown you a path which makes sentient AIs useless to us?


We are already developing AIs.

Our own civilization, as well as the human civilization in the game, has already begun down that path.


Michio Kaku says no way, not until we get something other than silicon computer cores. We need either quantum or DNA computers. Moore's law will plateau very soon.

#215
Aesieru
  • Members
  • 4 201 messages

xtorma wrote...

Aesieru wrote...

Treopod wrote...

Aesieru wrote...

Treopod wrote...

GracefulChicken wrote...

Ok, that's fine Wikipedia research, but it doesn't nearly scratch the surface. The fact is, by the time we begin augmenting our own intelligence through the typical "GNR" route of the tech-singularity theory, AIs will already be in existence. The knowledge to create AIs would come before augmenting our own systems. If anything, AIs would be used as a research tool for augmenting ourselves. It may not be necessary, that I'll reluctantly admit, but it's most likely AI would come before the singularity event itself.


AIs need to be created by us, and if we know the dangers of them and have been warned, and we know that there exists an alternative with just as much potential for intelligence as sentient AIs have, then I don't see why we would ever pursue the development of such AIs any longer, so I would say in that case the likelihood of such an AI existing before we achieve an organic tech singularity is small.


An AI which will inevitably exist can at some point create another AI; that is another of the dangers.


Why would a sentient AI inevitably exist when I have shown you a path which makes sentient AIs useless to us?


We are already developing AIs.

Our own civilization, as well as the human civilization in the game, has already begun down that path.


Michio Kaku says no way, not until we get something other than silicon computer cores. We need either quantum or DNA computers. Moore's law will plateau very soon.


And yet we are, and we've known that for a while. It's not a very smart AI, but it is an AI.

#216
Geirahod
  • Members
  • 531 messages

SovereignWillReturn wrote...

I miss the Dark Energy plotline. I liked that one a TON more than this one. A TON more.

Gah. Bioware, you had good ideas at first...


Yep...
In ME2, dark energy is mentioned twice: first when you go to recruit Tali on Haestrom (plus when you speak with Kal'Reegar at Tali's trial), and then when you go to Illium and find Gianna Parasini...

I thought that was the main plot for ME3...

#217
Solduri
  • Members
  • 198 messages

xtorma wrote...

Aesieru wrote...

Treopod wrote...

Aesieru wrote...

Treopod wrote...

GracefulChicken wrote...

Ok, that's fine Wikipedia research, but it doesn't nearly scratch the surface. The fact is, by the time we begin augmenting our own intelligence through the typical "GNR" route of the tech-singularity theory, AIs will already be in existence. The knowledge to create AIs would come before augmenting our own systems. If anything, AIs would be used as a research tool for augmenting ourselves. It may not be necessary, that I'll reluctantly admit, but it's most likely AI would come before the singularity event itself.


AIs need to be created by us, and if we know the dangers of them and have been warned, and we know that there exists an alternative with just as much potential for intelligence as sentient AIs have, then I don't see why we would ever pursue the development of such AIs any longer, so I would say in that case the likelihood of such an AI existing before we achieve an organic tech singularity is small.


An AI which will inevitably exist can at some point create another AI; that is another of the dangers.


Why would a sentient AI inevitably exist when I have shown you a path which makes sentient AIs useless to us?


We are already developing AIs.

Our own civilization, as well as the human civilization in the game, has already begun down that path.


Michio Kaku says no way, not until we get something other than silicon computer cores. We need either quantum or DNA computers. Moore's law will plateau very soon.


I like Michio Kaku; he comes up with some cool stuff.

#218
Aesieru
  • Members
  • 4 201 messages

Geirahod wrote...

SovereignWillReturn wrote...

I miss the Dark Energy plotline. I liked that one a TON more than this one. A TON more.

Gah. Bioware, you had good ideas at first...


Yep...
In ME2, dark energy is mentioned twice: first when you go to recruit Tali on Haestrom (plus when you speak with Kal'Reegar at Tali's trial), and then when you go to Illium and find Gianna Parasini...

I thought that was the main plot for ME3...


More than that, actually.

It's mentioned on Haestrom several times, in Arrival a few times, at the very beginning of Mass Effect 2 when talking with Veetor about his omni-tool readings, and on Illium.

#219
Guest_jollyorigins_*
  • Guests
Ok, so I just watched the official bad ending, and wow, did BioWare f*ck this up badly. I know it was going to be tough to redeem the plot from ME2, but this was just laughable at best. So the whole motivation of the Reapers is: "we must stop synthetics from destroying organics by becoming synthetics and destroying organics so organics don't build synthetics that destroy organics..."

What the hell happened? Where is that amazing dialogue from Sovereign? Oh, I give up on this game now. I'll just go back to watching Tali jump off a cliff; that looked quite funny.

#220
jellobell
  • Members
  • 3 001 messages

Geirahod wrote...

SovereignWillReturn wrote...

I miss the Dark Energy plotline. I liked that one a TON more than this one. A TON more.

Gah. Bioware, you had good ideas at first...


Yep...
In ME2, dark energy is mentioned twice: first when you go to recruit Tali on Haestrom (plus when you speak with Kal'Reegar at Tali's trial), and then when you go to Illium and find Gianna Parasini...

I thought that was the main plot for ME3...

It used to be.

I'm conflicted about the Dark Energy thing. On the one hand, it's a much better motivation for the Reapers than what BioWare ended up going with. On the other, it gives in to "there's always a bigger fish" syndrome, which wouldn't sit well with me after going through three games in which the Reapers are built up as the ultimate threat. The revelation that there's another ultimate threat would seem a little contrived.

#221
xtorma
  • Members
  • 5 714 messages

Aesieru wrote...

xtorma wrote...

Aesieru wrote...

Treopod wrote...

Aesieru wrote...

Treopod wrote...

GracefulChicken wrote...

Ok, that's fine Wikipedia research, but it doesn't nearly scratch the surface. The fact is, by the time we begin augmenting our own intelligence through the typical "GNR" route of the tech-singularity theory, AIs will already be in existence. The knowledge to create AIs would come before augmenting our own systems. If anything, AIs would be used as a research tool for augmenting ourselves. It may not be necessary, that I'll reluctantly admit, but it's most likely AI would come before the singularity event itself.


AIs need to be created by us, and if we know the dangers of them and have been warned, and we know that there exists an alternative with just as much potential for intelligence as sentient AIs have, then I don't see why we would ever pursue the development of such AIs any longer, so I would say in that case the likelihood of such an AI existing before we achieve an organic tech singularity is small.


An AI which will inevitably exist can at some point create another AI; that is another of the dangers.


Why would a sentient AI inevitably exist when I have shown you a path which makes sentient AIs useless to us?


We are already developing AIs.

Our own civilization, as well as the human civilization in the game, has already begun down that path.


Michio Kaku says no way, not until we get something other than silicon computer cores. We need either quantum or DNA computers. Moore's law will plateau very soon.


And yet we are, and we've known that for a while. It's not a very smart AI, but it is an AI.


The best computers we have now are not even as smart as a cockroach. Look it up. I mean, if you want to get technical, you can say any computer is an AI. There is a huge difference between the ability to learn and a semblance of the ability to learn. All we can do right now is pretend. We need computers that can process more than 1 and 0. We are not even close.

#222
Treopod
  • Members
  • 81 messages

GracefulChicken wrote...

Because trends in societal growth don't just hit a stopping point at the first sign of a bad idea. I can definitely post about 15 graphs that show the exponential growth of information technology. It's not just going to stop when morals get in the way; it never has before. To even keep up with the increase in new tech knowledge, you'd need to augment human intelligence with a form of that same tech. And this is way before a full-blown sentient AI would be possible, I think. Even by 2029, it's likely we'll have some sort of tech inside our systems at all times. That tech increases exponentially, as I've shown.


So now you are saying we will augment and enhance our own intelligence before sentient AI is possible? That's the opposite of what you were saying earlier.

And no, unless we feel like we have limited time, there is no need for a sentient AI. We can develop increasingly powerful computers to calculate for us and to develop new technology, and use that to improve our intelligence on our own terms, regardless of whether it's slower than what a sentient AI could do.

The computers we use to develop tech can be super powerful, but they don't need to be sentient, and unless we help them in that direction they will never be able to reach sentience on their own.

#223
Treopod
  • Members
  • 81 messages

Aesieru wrote...

No, they aren't. Review the Guardian conversation, dialogue, and cutscenes in the game again: he doesn't control them so much as remind them of their prime directive. Of course, he does have a nifty abort button for special situations, but they come back 50k years later.


Why would they need to be reminded? They are immortal and, for all we know, just as advanced as the Guardian is.
And if they truly have free will, what's preventing them from eventually going against the Guardian's will? What can he do to stop them unless he can control them?

And what is this abort button you talk about?

Edited by Treopod, 05 March 2012 - 03:01.


#224
Nachtritter76
  • Members
  • 206 messages
Interesting that you guys are probably putting way more thought into the storyline than the creators did. I can totally see something like this happening in an interview:

Fan: So, [scenario X], that's the whole story behind the Reapers and the Guardian, right?
BW: Huuuh, oooh yes. As a matter of fact, yeah.
Fan: And you had this in the back of your mind all along?
BW: Yes.
Fan: Wow. Bioware, best writers, smartest guys ever, neatest company, best video games of the year, all years.

#225
Treopod
  • Members
  • 81 messages

jellobell wrote...

Geirahod wrote...

SovereignWillReturn wrote...

I miss the Dark Energy plotline. I liked that one a TON more than this one. A TON more.

Gah. Bioware, you had good ideas at first...


Yep...
In ME2, dark energy is mentioned twice: first when you go to recruit Tali on Haestrom (plus when you speak with Kal'Reegar at Tali's trial), and then when you go to Illium and find Gianna Parasini...

I thought that was the main plot for ME3...

It used to be.

I'm conflicted about the Dark Energy thing. On the one hand, it's a much better motivation for the Reapers than what BioWare ended up going with. On the other, it gives in to "there's always a bigger fish" syndrome, which wouldn't sit well with me after going through three games in which the Reapers are built up as the ultimate threat. The revelation that there's another ultimate threat would seem a little contrived.


But dark energy would be a completely different threat than the Reapers themselves, not really comparable. Dark energy is simply a law of the universe which we can't comprehend yet, while the Reapers are actual antagonists whose purpose would have been based on that threat. It makes perfect sense, IMO.