The catalyst just makes no sense
#1
Posted 10 July 2012 - 12:20
I wrote an article about this over at holdtheline.com. Here is the link:
http://www.holdtheli...-it-part-1.266/
It is kind of long, so if you do read it, thanks in advance. The tone is meant to be satirical/humorous, but I would also be lying if I didn't say that much of the tone came from my frustration with the game. Anyway, this is part 1 of 3 parts I am writing to clear my head. I find discussing Mass Effect difficult because it always ends up on the ending, and the ending just makes no sense even after the Extended Cut.
#2
Posted 10 July 2012 - 12:25
Of course the Catalyst makes no sense; it's an insane entity that thinks it's Space Jesus and ends up looking like a mentally stunted egomaniac.
That being said, interesting read OP.
Edited by KiwiQuiche, 10 July 2012 - 12:26.
#3
Posted 10 July 2012 - 12:38
#4
Posted 10 July 2012 - 01:00
LateNightSalami wrote...
I wrote an article about this over at holdtheline.com. Here is the link:
http://www.holdtheli...-it-part-1.266/
It is kind of long, so if you do read it, thanks in advance. The tone is meant to be satirical/humorous, but I would also be lying if I didn't say that much of the tone came from my frustration with the game. Anyway, this is part 1 of 3 parts I am writing to clear my head. I find discussing Mass Effect difficult because it always ends up on the ending, and the ending just makes no sense even after the Extended Cut.
Your article is very biased, so it's hard to take very seriously, but I did read it. I'm not trying to say I think the ending is the best thing in the world, but it certainly does make sense -- at least from my perspective.
I happen to be a computer programmer who specializes in AI (which is actually a lot more boring than you might imagine), and the problem with your article is that you are assigning "humanistic" values to a machine. AIs tend to be strictly logical, and that logic tends to come off as absurd to us. (funny but slightly off topic example: )
Essentially, at least from a programmer's perspective, it makes sense. The "god child" was tasked with the problem of "keeping peace" between Synthetics and Organics, and the only solution he could come to was to essentially assimilate (yes, Borg style) all organics it could and destroy the rest. You make the argument that killing millions of humans doesn't seem very logical if the idea is to harvest them, but to a machine that is literally just a statistic. As long as it harvests one human before killing off the entire species, it probably sees that as acceptable.
Pretty much all of your other jabs at the sanity of this AI can be explained that way, including the choices he gives you. You say "why would he give you these choices if he 'cares' about what they mean?" Well, in one sense he does "care," but not in the way you or I would. He's not an emotional creature, so when he expresses desire for one or the other, it's because it registers a certain level of favorability on a *programming* level.
Finally, you question why reaching the 'god child's' chambers somehow changes things. For that there's a simple but very unsatisfactory answer. He was programmed that way.
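To make that concrete, here is a tiny toy script I just made up (my own invention, obviously; nothing from the game, and all the names and weights are hypothetical) showing what "favorability on a programming level" might look like. The agent doesn't hate or love anything; it just ranks outcomes by a programmed score, and a billion deaths barely moves that score:

# Toy sketch: an agent that "cares" only as a programmed score.
def favorability(outcome):
    # Preserving a species dominates the objective; casualties are just
    # a statistic with a near-zero weight.
    return 10.0 * outcome["species_preserved"] - 1e-12 * outcome["casualties"]

outcomes = {
    "harvest_everything": {"species_preserved": 1, "casualties": 10**9},
    "do_nothing":         {"species_preserved": 0, "casualties": 0},
}

# The machine "prefers" harvesting: 10 points for the preserved species
# versus a 0.001-point penalty for a billion deaths.
best = max(outcomes, key=lambda name: favorability(outcomes[name]))
print(best, {name: favorability(o) for name, o in outcomes.items()})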
#5
Posted 10 July 2012 - 01:03
ZajoE38 wrote...
Coming up with the Catalyst was a VERY bad idea. He makes sense, but that doesn't mean I like that idea. There shouldn't have been anything above the Reapers. He just complicated things and made them look like stupid pawns.
No he doesn't.
I don't think people realise the extent to which his appeal to probability invalidates the entire logic of his arguments.
#6
Posted 10 July 2012 - 01:05
ZajoE38 wrote...
Coming up with the Catalyst was a VERY bad idea. He makes sense, but that doesn't mean I like that idea. There shouldn't have been anything above the Reapers. He just complicated things and made them look like stupid pawns.
Wuuuttt, the Catalyst doesn't make any sense whatsoever. No, literally, he doesn't. I've been sitting here with a Jackie Chan confused look on my face for 4 freakin' months trying to figure out what the **** this kid is, where he came from, and what he wants to do exactly. And after 4 months, I can conclude... it makes no sense.
#7
Posted 10 July 2012 - 01:24
Any0day wrote...
LateNightSalami wrote...
I wrote an article about this over at holdtheline.com. Here is the link:
http://www.holdtheli...-it-part-1.266/
It is kind of long, so if you do read it, thanks in advance. The tone is meant to be satirical/humorous, but I would also be lying if I didn't say that much of the tone came from my frustration with the game. Anyway, this is part 1 of 3 parts I am writing to clear my head. I find discussing Mass Effect difficult because it always ends up on the ending, and the ending just makes no sense even after the Extended Cut.
Your article is very biased, so it's hard to take very seriously, but I did read it. I'm not trying to say I think the ending is the best thing in the world, but it certainly does make sense -- at least from my perspective.
I happen to be a computer programmer who specializes in AI (which is actually a lot more boring than you might imagine), and the problem with your article is that you are assigning "humanistic" values to a machine. AIs tend to be strictly logical, and that logic tends to come off as absurd to us. (funny but slightly off topic example: )
Essentially, at least from a programmer's perspective, it makes sense. The "god child" was tasked with the problem of "keeping peace" between Synthetics and Organics, and the only solution he could come to was to essentially assimilate (yes, Borg style) all organics it could and destroy the rest. You make the argument that killing millions of humans doesn't seem very logical if the idea is to harvest them, but to a machine that is literally just a statistic. As long as it harvests one human before killing off the entire species, it probably sees that as acceptable.
Pretty much all of your other jabs at the sanity of this AI can be explained that way, including the choices he gives you. You say "why would he give you these choices if he 'cares' about what they mean?" Well, in one sense he does "care," but not in the way you or I would. He's not an emotional creature, so when he expresses desire for one or the other, it's because it registers a certain level of favorability on a *programming* level.
Finally, you question why reaching the 'god child's' chambers somehow changes things. For that there's a simple but very unsatisfactory answer. He was programmed that way.
Yeah, it is biased. We all have a bias whether we like it or not, and mine is from the perspective of someone who did not like the endings and found them to be nonsensical. The insanity idea can literally be replaced with many other explanations and still have the same effect. I just saw the concept on these forums and chose that one to illustrate my point. I think the clearest way to show what I was getting at is contained in your last sentence. You can replace "insanity" with "he was programmed to" and still have the same effect. The whole idea is that his motivations are in many ways downright contradictory and require narrative leaps that should not need to be made.
I interpret him more humanistically because BioWare went to great lengths in ME3 to show that we should treat AIs more humanistically (EDI and the geth). They also seemed to want to give the Catalyst humanistic desires and motives. From my perspective, there is a problem with presentation, or conception, or both.
#8
Posted 10 July 2012 - 01:27
Grimwick wrote...
ZajoE38 wrote...
Coming up with the Catalyst was a VERY bad idea. He makes sense, but that doesn't mean I like that idea. There shouldn't have been anything above the Reapers. He just complicated things and made them look like stupid pawns.
No he doesn't.
I don't think people realise the extent to which his appeal to probability invalidates the entire logic of his arguments.
I am interested in an elaboration on this.
#9
Posted 10 July 2012 - 01:29
Any0day wrote...
Essentially, at least from a programmer's perspective, it makes sense. The "god child" was tasked with the problem of "keeping peace" between Synthetics and Organics, and the only solution he could come to was to essentially assimilate (yes, Borg style) all organics it could and destroy the rest. You make the argument that killing millions of humans doesn't seem very logical if the idea is to harvest them, but to a machine that is literally just a statistic. As long as it harvests one human before killing off the entire species, it probably sees that as acceptable.
And that's exactly where he comes up with a fallacious argument. He was designed to stop a conflict. Not necessarily every conflict.
He suddenly decides at one point that one synthetic/organic conflict means that there will always be synthetic/organic conflict; then he commits an appeal to probability by saying that:
Because a conflict can happen, it will happen. Therefore we must stop it.
This is logically fallacious and undermines all of his subsequent arguments.
Therefore his resulting arguments/conclusions don't make sense on any level. It has nothing to do with opinions/perspectives.
It's the same kind of logic as thinking, for example: Spain may attack us somewhere really far in the future... so we nuke them now so they can't. Does that make sense? No.
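If you want the fallacy spelled out mechanically, here's a throwaway sketch (my own example, obviously not anything from the game's writing; the variable names are mine). One observed war becomes a 100% "probability," and every conclusion downstream inherits the broken premise:

# Appeal to probability, as a toy calculation.
observed_conflicts = [True]  # exactly one observed synthetic/organic war

# Fallacious step: promote a single observation to a universal law.
p_conflict = sum(observed_conflicts) / len(observed_conflicts)  # = 1.0

# Everything after this point is built on the bad premise.
if p_conflict == 1.0:
    print("Conflict 'will always' happen; pre-emptive harvesting 'required'.")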
Edited by Grimwick, 10 July 2012 - 01:30.
#10
Posted 10 July 2012 - 01:30
It does make sense; it's just that none of us are clever enough to figure it out...
#11
Posted 10 July 2012 - 01:31
LateNightSalami wrote...
Grimwick wrote...
ZajoE38 wrote...
Coming up with the Catalyst was a VERY bad idea. He makes sense, but that doesn't mean I like that idea. There shouldn't have been anything above the Reapers. He just complicated things and made them look like stupid pawns.
No he doesn't.
I don't think people realise the extent to which his appeal to probability invalidates the entire logic of his arguments.
I am interested in an elaboration on this.
See my earlier post to Any0day.
Edited by Grimwick, 10 July 2012 - 01:31.
#12
Posted 10 July 2012 - 01:41
Grimwick wrote...
And that's exactly where he comes up with a fallacious argument. He was designed to stop a conflict. Not necessarily every conflict.
He suddenly decides at one point that one synthetic/organic conflict means that there will always be synthetic/organic conflict; then he commits an appeal to probability by saying that:
Because a conflict can happen, it will happen. Therefore we must stop it.
This is logically fallacious and undermines all of his subsequent arguments.
Therefore his resulting arguments/conclusions don't make sense on any level. It has nothing to do with opinions/perspectives.
It's the same kind of logic as thinking, for example: Spain may attack us somewhere really far in the future... so we nuke them now so they can't. Does that make sense? No.
I do believe that this was my biggest problem with its "logic".
#13
Posted 10 July 2012 - 01:49
Any0day wrote...
Your article is very biased, so it's hard to take very seriously, but I did read it. I'm not trying to say I think the ending is the best thing in the world, but it certainly does make sense -- at least from my perspective.
I happen to be a computer programmer who specializes in AI (which is actually a lot more boring than you might imagine), and the problem with your article is that you are assigning "humanistic" values to a machine. AIs tend to be strictly logical, and that logic tends to come off as absurd to us. (funny but slightly off topic example: )
Essentially, at least from a programmer's perspective, it makes sense. The "god child" was tasked with the problem of "keeping peace" between Synthetics and Organics, and the only solution he could come to was to essentially assimilate (yes, Borg style) all organics it could and destroy the rest. You make the argument that killing millions of humans doesn't seem very logical if the idea is to harvest them, but to a machine that is literally just a statistic. As long as it harvests one human before killing off the entire species, it probably sees that as acceptable.
Pretty much all of your other jabs at the sanity of this AI can be explained that way, including the choices he gives you. You say "why would he give you these choices if he 'cares' about what they mean?" Well, in one sense he does "care," but not in the way you or I would. He's not an emotional creature, so when he expresses desire for one or the other, it's because it registers a certain level of favorability on a *programming* level.
Finally, you question why reaching the 'god child's' chambers somehow changes things. For that there's a simple but very unsatisfactory answer. He was programmed that way.
Unfortunately, the Catalyst is not presented as the kind of AI you are talking about; this particular one is something special, a deity-like being... it is plausible to expect an advanced AI such as this one to come up with better logic in its so-called solutions, without so many contradictions.
It is unlikely that the fate of the galaxy would simply be controlled by a machine that blindly follows orders; this machine must be very resourceful, powerful, and knowledgeable about the nature of life, and it certainly should not make stupid generalisations about rebellion and peace. If the Catalyst understands statistics, then it should know from the beginning that the harvesting was useless, because the organics just keep coming.
There is no such thing as peace when you have killed everyone off; peace in a vacuum is meaningless to anyone. It is not peace that the Catalyst is concerned about, it is the survival of certain people.
#14
Posted 10 July 2012 - 01:59
#15
Posted 10 July 2012 - 02:01
#16
Posted 10 July 2012 - 02:04
Xandurpein wrote...
The Creators first built a race of synthetics that threatened to destroy them. Then they created a new synthetic to stop that war, but instead they ended up being destroyed by the new creation. Maybe the Creators were just very lousy AI programmers...
That's the impression that I got.
#17
Posted 10 July 2012 - 02:08
#18
Posted 10 July 2012 - 02:08
Stornskar wrote...
Good read - and I agree with most of your points. Really, though, the problem with the Catalyst is that his creators were idiots and built him off a false premise. I reject the idea that synthetics will eventually overtake organics; additionally, these brilliant creators made an AI to broker peace between AI and organics (no bias there) and then supplied him with massive, all-powerful warships. And then they were surprised when the AI killed them? No thanks, I'd rather not deal with any product of that idiocy...
It still seems stupid: the geth repeatedly demonstrate that they have no emotions and think logically; they feel no hatred or malice towards organics and only fight against them to defend themselves.
The only time the geth mounted an aggressive campaign against organics was when they invaded the Citadel under the control of a Reaper whose goal is to prevent war between organics and AI. Well, good job there, Sovereign...
#19
Posted 10 July 2012 - 02:11
Grimwick wrote...
He suddenly decides at one point that one synthetic/organic conflict means that there will always be synthetic/organic conflict; then he commits an appeal to probability by saying that:
Because a conflict can happen, it will happen. Therefore we must stop it.
This is logically fallacious and undermines all of his subsequent arguments.
Actually, the AI said it was an escalation of severity, where every conflict required him to come up with a different, more radical solution. The solution that we're privy to is the one where all life in the galaxy gets harvested.
Now you could argue that under that line of logic he could have easily deemed all Synthetics the problem and gone on a crusade to destroy and harvest all Synthetics (instead of the opposite). The reason this wasn't an acceptable solution is because (his words) organics reach an apex of evolution by relying on Synthetics. He's essentially referring to the prophesied Singularity event (a real hypothesis in our world) in which all human life and machines are virtually indistinguishable from one another. This is also why he said a conflict would always arise: because at some point, in order to achieve that, AIs need to become self-aware.
The actual fallacy comes in a different form, and right here. The issue is, even if an AI did become self-aware, all sci-fi seems to depict them turning on humanity, either out of anger or some strange desire for self-preservation. The truth is, a real AI would behave more like Data from Star Trek. Self-preservation is a "life" characteristic born out of evolution over billions of years; an AI would never express this behaviour on its own, and even if it did, it would not be "genuine." This is not to say that a computer would kill a human with any type of remorse, but it wouldn't not kill a human either. It depends on what it's told to do, and ultimately, guess who tells it what to do? Since all sci-fi tends to do this, though, I sort of look past it.
#20
Posted 10 July 2012 - 02:12
KiwiQuiche wrote...
Of course the Catalyst makes no sense; it's an insane entity that thinks it's Space Jesus and ends up looking like a mentally stunted egomaniac.
That being said, interesting read OP.
YUP. You know what the 'problem' with your article is? It isn't mean enough.
This point that you mention right here:
"He is saying this in reference to synthetics exterminating organics…What?...How do you know that? Hey transparent pre-adolescent, how do you know that? Did the writers show you something that they did not show the player? Are you able to see into the future to know that all synthetic societies will actively seek out and hunt down organic ones? That seems odd because I just got done showing EDI what it means to be alive. This was after she tried to kill us all on luna when she was still a VI. I also just got done forming a peace between the geth and the quarians. This was only after the minor conflicts of Mass Effect 1 in which the geth tried to kill us all. So in a sense we have already seen exactly what the catalyst is talking about. Then we avoided it. In fact, as he controls the reapers, he witnessed us changing it on Rannoch but still maintains this stance (must be insane count: 6) which we have proven incorrect. "
SAYS IT ALL. And yes, the rest of what you say applies too. But whoever wrote the ending to ME3 must have suffered from short-term memory loss. 'You cannot comprehend.' Stated on Rannoch by a Reaper who told the truth. A fully developed mind has a hard time understanding an abortive writing attempt which references nothing, including itself.
But, really, at this point, BioWare KNEW they f*ck*d up and their response was not to actually fix the ending, but to double down on stupid. 'Artistic Integrity.' Which translates to, 'I don't know how to do my job so I'm going to just stick my fingers in my ears and pretend I did.'
#21
Posted 10 July 2012 - 02:24
Any0day wrote...
The actual fallacy comes in a different form, and right here. The issue is, even if an AI did become self-aware, all sci-fi seems to depict them turning on humanity, either out of anger or some strange desire for self-preservation. The truth is, a real AI would behave more like Data from Star Trek. Self-preservation is a "life" characteristic born out of evolution over billions of years; an AI would never express this behaviour on its own, and even if it did, it would not be "genuine." This is not to say that a computer would kill a human with any type of remorse, but it wouldn't not kill a human either. It depends on what it's told to do, and ultimately, guess who tells it what to do? Since all sci-fi tends to do this, though, I sort of look past it.
Good point, actually. Although a society composed entirely of synthetic beings, like the Geth, would most likely be influenced by the forces of evolution too, if there are even slight variations among the individual synthetics and they do reproduce themselves: those with more self-preservation would eventually overtake the others simply because they try harder to survive. But as long as these AIs are living alongside organics, they would simply do as they are told. The only realistic resolution to the "Morning War" would have been for the Geth to passively let themselves be exterminated.
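As a throwaway illustration (a toy simulation I just wrote; nothing rigorous, and all the numbers are made up), this is all it takes for self-preservation to get selected for once synthetics vary slightly and copy themselves:

import random

# Each replicator carries a heritable "self-preservation" trait in [0, 1].
population = [random.uniform(0.0, 1.0) for _ in range(200)]

for generation in range(50):
    # Survival chance scales with the trait; survivors copy themselves
    # twice, with slight mutation (the "slight variations" above).
    survivors = [t for t in population if random.random() < 0.5 + 0.5 * t]
    population = [min(1.0, max(0.0, t + random.gauss(0.0, 0.02)))
                  for t in survivors for _ in range(2)][:200]

# The mean drifts upward: those who "try harder to survive" take over.
print("mean self-preservation:", sum(population) / len(population))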
Edited by Xandurpein, 10 July 2012 - 02:26.
#22
Posted 10 July 2012 - 02:27
Any0day wrote...
Grimwick wrote...
He suddenly decides at one point that one synthetic/organic conflict means that there will always be synthetic/organic conflict; then he commits an appeal to probability by saying that:
Because a conflict can happen, it will happen. Therefore we must stop it.
This is logically fallacious and undermines all of his subsequent arguments.
Now you could argue that under that line of logic he could have easily deemed all Synthetics the problem and gone on a crusade to destroy and harvest all Synthetics (instead of the opposite). The reason this wasn't an acceptable solution is because (his words) organics reach an apex of evolution by relying on Synthetics. He's essentially referring to the prophesied Singularity event (a real hypothesis in our world) in which all human life and machines are virtually indistinguishable from one another. This is also why he said a conflict would always arise: because at some point, in order to achieve that, AIs need to become self-aware.
No, the fallacy is right there, in fact.
Premise/evidence: Self-aware AIs can cause conflict.
Conclusion: Self-aware AIs will always create conflict.
This is a fallacious argument: no matter whether one particular AI caused a conflict, he is over-extrapolating his data to every single circumstance... and then building a solution on top of that.
If you were to do that in real life, you would look like an idiot, and the SC is no different.
#23
Posted 10 July 2012 - 02:30
Any0day wrote...
Grimwick wrote...
He suddenly decides at one point that one synthetic/organic conflict means that there will always be synthetic/organic conflict; then he commits an appeal to probability by saying that:
Because a conflict can happen, it will happen. Therefore we must stop it.
This is logically fallacious and undermines all of his subsequent arguments.
The actual fallacy comes in a different form, and right here. The issue is, even if an AI did become self-aware, all sci-fi seems to depict them turning on humanity, either out of anger or some strange desire for self-preservation. The truth is, a real AI would behave more like Data from Star Trek. Self-preservation is a "life" characteristic born out of evolution over billions of years; an AI would never express this behaviour on its own, and even if it did, it would not be "genuine." This is not to say that a computer would kill a human with any type of remorse, but it wouldn't not kill a human either. It depends on what it's told to do, and ultimately, guess who tells it what to do? Since all sci-fi tends to do this, though, I sort of look past it.
Well, a truly self-aware AI, an unshackled AI, would have the same freedom of mental function as any organic. And synthetics are 'alive'...
#24
Posted 10 July 2012 - 02:35
#25
Posted 10 July 2012 - 02:37
Any0day wrote...
LateNightSalami wrote...
I wrote an article about this over at holdtheline.com. Here is the link:
http://www.holdtheli...-it-part-1.266/
It is kind of long, so if you do read it, thanks in advance. The tone is meant to be satirical/humorous, but I would also be lying if I didn't say that much of the tone came from my frustration with the game. Anyway, this is part 1 of 3 parts I am writing to clear my head. I find discussing Mass Effect difficult because it always ends up on the ending, and the ending just makes no sense even after the Extended Cut.
Your article is very biased, so it's hard to take very seriously, but I did read it. I'm not trying to say I think the ending is the best thing in the world, but it certainly does make sense -- at least from my perspective.
I happen to be a computer programmer who specializes in AI (which is actually a lot more boring than you might imagine), and the problem with your article is that you are assigning "humanistic" values to a machine. AIs tend to be strictly logical, and that logic tends to come off as absurd to us. (funny but slightly off topic example: )
Essentially, at least from a programmer's perspective, it makes sense. The "god child" was tasked with the problem of "keeping peace" between Synthetics and Organics, and the only solution he could come to was to essentially assimilate (yes, Borg style) all organics it could and destroy the rest. You make the argument that killing millions of humans doesn't seem very logical if the idea is to harvest them, but to a machine that is literally just a statistic. As long as it harvests one human before killing off the entire species, it probably sees that as acceptable.
Pretty much all of your other jabs at the sanity of this AI can be explained that way, including the choices he gives you. You say "why would he give you these choices if he 'cares' about what they mean?" Well, in one sense he does "care," but not in the way you or I would. He's not an emotional creature, so when he expresses desire for one or the other, it's because it registers a certain level of favorability on a *programming* level.
Finally, you question why reaching the 'god child's' chambers somehow changes things. For that there's a simple but very unsatisfactory answer. He was programmed that way.
I believe Legion would call the bolded "benign anthropomorphism".