
Why do so many people want to lie to the star child?


200 replies to this topic

#151
AntAras11
  • Members
  • 94 posts

AntAras11 wrote...

ashdrake1 wrote...

 
As per the lore in the games, the star child is right. Eventually an AI will wipe everything out. Because regardless of the lesson of the Geth, organics keep messing around with AI. See the moon AI, the Citadel AI, EDI, and Project Overlord. At some point we will make one that figures out how to self-replicate, and it can and will wipe the universe clean.



That's a huge leap of logic. Every AI-related incident you described only shows that AIs are capable of harm. How do you jump from that to "Eventually an AI will wipe everything out"?
Every time we encounter a hostile AI, they either operate under some logical fallacy or problematic data. If anything, the message we get following the geth-quarian storyline through all 3 games is the exact opposite of godchild's assumption.

The other big problem is motive. Why would a super-intelligent AI decide to wipe out all organic life (including snails)? The only answer I can come up with:
-Organic life HAS to be destroyed, it is in some way better for the universe.
-AI inevitably advances to a level of intelligence where they realize that fact.
-It can't be wrong AND inevitable at the same time.

If so, why bother? Let the synthetics do their thing. This also negates the Reapers' motivation.


edit: sry for the double post, pressed quote instead of edit :P

Edited by AntAras11, 20 March 2012 - 04:03.


#152
CaptainZaysh
  • Members
  • 2,603 posts

AntAras11 wrote...

The other big problem is motive. Why would a super-intelligent AI decide to wipe out all organic life (including snails)? The only answer I can come up with:
-Organic life HAS to be destroyed, it is in some way better for the universe.
-AI inevitably advances to a level of intelligence where they realize that fact.
-It can't be wrong AND inevitable at the same time.

If so, why bother? Let the synthetics do their thing. This also negates the Reapers' motivation.


http://en.wikipedia....eturns_argument

"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." (Eliezer Yudkowsky) [22]Superhuman intelligences may have goals inconsistent with human survival and prosperity. Berglas (2008) notes that there is no direct evolutionary motivation for an AI to be friendly to humans. In the same way that evolution has no inherent tendency to produce outcomes valued by humans, so too there is little reason to expect an arbitrary optimisation process to promote an outcome desired by mankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators (such as Nick Bostrom's whimsical example of an AI which was originally programmed with the goal of manufacturing paper clips, such that when it achieves superintelligence it decides to convert the entire planet into a paper clip manufacturing facility;[57][58][59] Anders Sandberg has also elaborated on this scenario, addressing various common counter-arguments.[60]) AI researcher Hugo de Garis suggests that artificial intelligences may simply eliminate the human race for access to scarce resources,[54][61] and humans would be powerless to stop them.[62]Bostrom (2002) discusses human extinction scenarios, and lists superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

Alternatively, AIs developed under evolutionary pressure to promote their own survival could out-compete humanity.[56] 
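To make the "paper clip" point quoted above concrete, here's a throwaway toy sketch (my own illustration, not from the games or from Bostrom's paper; the resource names and numbers are invented): an optimiser told only to maximise paperclips has no term in its objective for anything else, so "anything else" is just raw material to it.

# Toy sketch of a proxy objective with no term for anything the designers value.
# Hypothetical names and numbers; nothing here is Mass Effect lore.
stockpile = {"iron": 100, "shipyards": 20, "habitats": 5}

def make_paperclips(stockpile):
    """Greedily convert every resource into paperclips; only the clip count matters."""
    clips = 0
    for resource in list(stockpile):   # copy the keys so we can pop while iterating
        clips += stockpile.pop(resource)
    return clips

print(make_paperclips(stockpile))  # 125 paperclips
print(stockpile)                   # {} -- nothing the designers cared about is left

The point, as the quote says, isn't malice: the objective simply never mentions the things organics value, so nothing protects them.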



#153
ecarden
  • Members
  • 132 posts

CaptainZaysh wrote...

ecarden wrote...

Well, most murders are committed by people who know the victim, and they know you, and with your military training, if you struck first, they'd have no chance. THEY MUST KILL YOU TO BE SAFE. Oh, and everyone who would take revenge (or arrest them) for killing you. And everyone who would try to stop them. Or take revenge on them for killing all those people...

KILL EVERYONE! IT'S THE ONLY WAY TO BE SAFE!

ETA: If it's not clear, this is sarcasm. Don't kill everyone. Genocide isn't cool, um-kay?


That's called the slippery slope fallacy, ecarden.  If you can't tell the difference in threat profiles between an average Westerner who's served in his country's military, and a race of billions of heavily armed robots actively working to eclipse our military/industrial capabilities, then I'm not capable of explaining it to you.


That's not what they're doing.

It's not about us. It's about them.

And you're the one who was comfortable extrapolating down to the individual level.

#154
ecarden
  • Members
  • 132 posts

CaptainZaysh wrote...

AntAras11 wrote...

The other big problem is motive. Why would a super-intelligent AI decide to wipe out all organic life (including snails)? The only answer I can come up with:
-Organic life HAS to be destroyed, it is in some way better for the universe.
-AI inevitably advances to a level of intelligence where they realize that fact.
-It can't be wrong AND inevitable at the same time.

If so, why bother? Let the synthetics do their thing. This also negates the Reapers' motivation.


http://en.wikipedia....eturns_argument

"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." (Eliezer Yudkowsky) [22]Superhuman intelligences may have goals inconsistent with human survival and prosperity. Berglas (2008) notes that there is no direct evolutionary motivation for an AI to be friendly to humans. In the same way that evolution has no inherent tendency to produce outcomes valued by humans, so too there is little reason to expect an arbitrary optimisation process to promote an outcome desired by mankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators (such as Nick Bostrom's whimsical example of an AI which was originally programmed with the goal of manufacturing paper clips, such that when it achieves superintelligence it decides to convert the entire planet into a paper clip manufacturing facility;[57][58][59] Anders Sandberg has also elaborated on this scenario, addressing various common counter-arguments.[60]) AI researcher Hugo de Garis suggests that artificial intelligences may simply eliminate the human race for access to scarce resources,[54][61] and humans would be powerless to stop them.[62]Bostrom (2002) discusses human extinction scenarios, and lists superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

Alternatively, AIs developed under evolutionary pressure to promote their own survival could out-compete humanity.[56] 


Which is all fine, except for this: this is the Mass Effect universe, and we have actual, in-universe evidence, not extrapolation and guesswork.

#155
CaptainZaysh
  • Members
  • 2,603 posts

ecarden wrote...

That's not what they're doing.

It's not about us. It's about them.


Eclipsing our ability to defend ourselves is an inevitable byproduct of them eclipsing our intelligence.  Like I've said upstream, once they do that it's entirely up to them whether they keep us around.  If you don't have a problem with that, that's fine, but I hope you can understand that I might object to such a situation with good reason.

#156
ecarden
  • Members
  • 132 posts

CaptainZaysh wrote...

ecarden wrote...

That's not what they're doing.

It's not about us. It's about them.


Eclipsing our ability to defend ourselves is an inevitable byproduct of them eclipsing our intelligence.  Like I've said upstream, once they do that it's entirely up to them whether they keep us around.  If you don't have a problem with that, that's fine, but I hope you can understand that I might object to such a situation with good reason.


You argue that it's impossible for us to keep up. This is your assertion. In 300 years, the Geth haven't left us behind.

There's no evidence that they will. Only your paranoid assertions.

ETA: I notice you still haven't addressed the Turian question. After all, they already have the capacity to eliminate humanity. So whatever should we do about them?

ETA further: And we have the capability to destroy ourselves and have had (at this point in the Mass Effect timeline) for more than two centuries...whatever shall we do about ourselves?

Edited by ecarden, 20 March 2012 - 04:10.


#157
CaptainZaysh
  • Members
  • 2,603 posts

ecarden wrote...

You argue that it's impossible for us to keep up. This is your assertion. In 300 years, the Geth haven't left us behind.

There's no evidence that they will. Only your paranoid assertions.


Why do you think they're building that Dyson Sphere?

#158
Sisterofshane
  • Members
  • 1,756 posts

CaptainZaysh wrote...

Sisterofshane wrote...

The problem being that "Casper's" logic is CIRCULAR logic. In order to prove what the starkid says is an ABSOLUTE truth, the premise of an AI destroying ALL organic life would need to have occurred. Apparently it didn't, because we are all still standing here. So, can you justify the gruesome murder of countless organic beings to satiate the desire of a godlike figure to "save" us from an inevitable ending that has never occurred?


Yeah, precisely because, as you say, we are all still standing here. The ending has never occurred because the Catalyst has prevented it.


Hence his proof is faulty at best, and supposition. IT has NEVER occurred, and he cannot truly prove that it will EVER occur.

The only evidence we have, therefore, is evidence within the games themselves. The geth... have seemingly no interest in wiping out all organics! EDI... has seemingly no interest in wiping out all organics!

All of the other instances of synthetics attempting to wipe out organics would also seem to indicate that they fail to do so. Take the heretic geth with Saren, for instance. Or the "rogue" VI on Luna, which Shep is able to stop. Or Project Overlord. Even Javik makes comments about the Protheans winning the war against Synthetics (at least until the Reapers arrive). It seems that organics are not so fragile as Casper would have us believe.

So the only instance of any Synthetics being capable of AND willing to wipe out anyone would be the Reapers, which, according to the Catalyst, is THE ONLY solution he in his BILLIONS of years is able to deduce. I'm... not buying it.
It took less than a week for most people on the BSN to come up with an agreeable alternative -- why not use his power, then, to destroy any synthetic on the verge of becoming fully self-aware?

That, and it leaves some pretty major questions to be answered - such as how the first Reaper was created, if not just by this starchild? Why doesn't he "uplift" all species (as evidenced by Harbinger's comments)? Why does he risk the Reapers in war when it is clear that he is trying to preserve them in such form (unless he has "backups" of Sovereign and the other destroyed Reapers somewhere...)?

#159
ecarden
  • Members
  • 132 posts

CaptainZaysh wrote...

ecarden wrote...

You argue that it's impossible for us to keep up. This is your assertion. In 300 years, the Geth haven't left us behind.

There's no evidence that they will. Only your paranoid assertions.


Why do you think they're building that Dyson Sphere?


They explicitly tell you, so that they don't have to be apart, so they can be the best they can be, all together.

Which, note, may well no longer be relevant, given the potentially individualizing effect of--spoilers don't belong here--oops.

#160
CaptainZaysh
  • Members
  • 2,603 posts

ecarden wrote...

ETA: I notice you still haven't addressed the Turian question. After all, they already have the capacity to eliminate humanity. So whatever should we do about them?


The turian threat is mitigated by our standing military, trade and diplomatic relations.  The geth threat is mitigated only by our military, and if they evolve into a superintelligence, the military will no longer cut it.  The two are obviously not the same.

#161
AntAras11
  • Members
  • 94 posts

CaptainZaysh wrote...

AntAras11 wrote...

The other big problem is motive. Why would a super-intelligent AI decide to wipe out all organic life (including snails)? The only answer I can come up with:
-Organic life HAS to be destroyed, it is in some way better for the universe.
-AI inevitably advances to a level of intelligence where they realize that fact.
-It can't be wrong AND inevitable at the same time.

If so, why bother? Let the synthetics do their thing. This also negates the Reapers' motivation.


http://en.wikipedia....eturns_argument

"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." (Eliezer Yudkowsky) [22]Superhuman intelligences may have goals inconsistent with human survival and prosperity. Berglas (2008) notes that there is no direct evolutionary motivation for an AI to be friendly to humans. In the same way that evolution has no inherent tendency to produce outcomes valued by humans, so too there is little reason to expect an arbitrary optimisation process to promote an outcome desired by mankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators (such as Nick Bostrom's whimsical example of an AI which was originally programmed with the goal of manufacturing paper clips, such that when it achieves superintelligence it decides to convert the entire planet into a paper clip manufacturing facility;[57][58][59] Anders Sandberg has also elaborated on this scenario, addressing various common counter-arguments.[60]) AI researcher Hugo de Garis suggests that artificial intelligences may simply eliminate the human race for access to scarce resources,[54][61] and humans would be powerless to stop them.[62]Bostrom (2002) discusses human extinction scenarios, and lists superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

Alternatively, AIs developed under evolutionary pressure to promote their own survival could out-compete humanity.[56] 


I admit I'm not exactly proficient with the subject of technological singularity, but judging from the quote, it is presented as a possibility, not an inevitability.

"Superhuman intelligences may have goals inconsistent with human survival and prosperity"
"so too there is little reason to expect an arbitrary optimisation process to promote an outcome desired by mankind"
"When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind"

#162
mauro2222
  • Members
  • 4,236 posts
Jesus! Ignore him; no matter how flawed his logic and examples are, he still repeats everything like a parrot. I'm beginning to think he is the Star Child.

#163
CaptainZaysh
  • Members
  • 2,603 posts

Sisterofshane wrote...

It took less than a week for most people on the BSN to come up with an agreeable alternative -- why not use his power, then, to destroy any synthetic on the verge of becoming fully self-aware?


How would it locate and detect them?

If it waited for them to achieve superintelligence before acting, the synthetics could develop into a power the Reapers couldn't control.  Risky.

#164
DoctorEss
  • Members
  • 538 posts
Silly argument, OP. First off, to carry it to its ultimate conclusion, we should then wipe out all organic life, because sooner or later, an organic species will try to wipe out other organic species.

Also, to turn your own argument against you ("like a week"), the gambling AI you meet? You knew it for about 5 minutes.

It reacted like a human or other sentient life form upon being busted. Panic, attempt at violence to escape. Sounds like a regular person, not an organic-life-destroying monster.

At pure artificial intelligence levels, truly sapient, what do you have but another species? Just like us. We're machines, too, you know. Just organic ones. Same difference.

They might try to attack someone! They might not! Same choices we have. So no, it's not inevitable. At all.

#165
ashdrake1
  • Members
  • 152 posts

Sisterofshane wrote...

CaptainZaysh wrote...

Sisterofshane wrote...

The problem being that "Casper's" logic is CIRCULAR logic. In order to prove what the starkid says is an ABSOLUTE truth, the premise of an AI destroying ALL organic life would need to have occurred. Apparently it didn't, because we are all still standing here. So, can you justify the gruesome murder of countless organic beings to satiate the desire of a godlike figure to "save" us from an inevitable ending that has never occurred?


Yeah, precisely because, as you say, we are all still standing here. The ending has never occurred because the Catalyst has prevented it.


Hence his proof is faulty at best, and supposition. IT has NEVER occurred, and he cannot truly prove that it will EVER occur.

The only evidence we have, therefore, is evidence within the games themselves. The geth... have seemingly no interest in wiping out all organics! EDI... has seemingly no interest in wiping out all organics!

All of the other instances of synthetics attempting to wipe out organics would also seem to indicate that they fail to do so. Take the heretic geth with Saren, for instance. Or the "rogue" VI on Luna, which Shep is able to stop. Or Project Overlord. Even Javik makes comments about the Protheans winning the war against Synthetics (at least until the Reapers arrive). It seems that organics are not so fragile as Casper would have us believe.

So the only instance of any Synthetics being capable of AND willing to wipe out anyone would be the Reapers, which, according to the Catalyst, is THE ONLY solution he in his BILLIONS of years is able to deduce. I'm... not buying it.
It took less than a week for most people on the BSN to come up with an agreeable alternative -- why not use his power, then, to destroy any synthetic on the verge of becoming fully self-aware?

That, and it leaves some pretty major questions to be answered - such as how the first Reaper was created, if not just by this starchild? Why doesn't he "uplift" all species (as evidenced by Harbinger's comments)? Why does he risk the Reapers in war when it is clear that he is trying to preserve them in such form (unless he has "backups" of Sovereign and the other destroyed Reapers somewhere...)?


First off, until recently he has never really risked them in war. This is a recent development as far as that goes. He guided tech in the galaxy to be harvestable by the 50k-year mark. The Protheans threw a wrench into those gears.

Second point: advanced organics are reaped before they get to the point where they create the galaxy-killing AI. Again, it has guided our tech levels to be able to ensure this. Per the established universe, we have plenty of signs of where our meddling with AI will lead, and of us not giving a damn.

#166
nevar00
  • Members
  • 1,395 posts
This "kill everyone who might be a potential threat to you someday before they get you" argument might just be one of the dumbest things I have read on the internet in a long while. Congratulations.

In that case we should probably just nuke everyone, just in case they come after us first.

And if we're talking about the ME universe, we should probably kill the Salarians and Asari: they seem more intelligent than humans.  And the Turians, who have a better military.  And the Krogan, for being stronger.  Yeah, this sounds like a brilliant idea, and isn't completely ****ing insane.

Edited by nevar00, 20 March 2012 - 04:20.


#167
CaptainZaysh
  • Members
  • 2,603 posts

AntAras11 wrote...

I admit I'm not exactly proficient with the subject of technological singularity, but judging from the quote, it is presented as a possibility, not an inevitability.

"Superhuman intelligences may have goals inconsistent with human survival and prosperity"
"so too there is little reason to expect an arbitrary optimisation process to promote an outcome desired by mankind"
"When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind"


Yeah... you must admit the consequences of those maybes are rather dire. Now consider this: if there is a non-zero possibility of something occurring, then given a long enough period of time the chance of it actually happening is 100%.
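To put rough numbers on "given a long enough period of time" (a toy calculation, and it leans on an assumption the thread hasn't established: a fixed, independent chance p of the catastrophe in each cycle):

P(it happens at least once in n cycles) = 1 - (1 - p)^n, which tends to 1 as n grows, for any p > 0.

With p = 1% per cycle, for example, the chance already passes 99% after about 459 cycles. If p instead shrinks over time (say, because defences improve faster than the threat), the product of the (1 - p) terms need not go to zero, and the "100%" conclusion no longer follows.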

#168
hudakj
  • Members
  • 42 posts
The Star Child's logic is basically that we must murder or be murdered, and if we don't, synthetics will do everything they can to drive organics to extinction.

Using that logic, the Star Child fails to even acknowledge why these lengths must be taken to preserve organic life.

All but the most extremely renegade Shepard would ask questions such as "In the name of what? To preserve what? If organic life does continue to exist, what are we? Better than what we say they are? What gives us the right to live, then? What makes us worth surviving? That we are ruthless enough to strike first and the hardest?"

The Star Child could at least try to humor Shepard with a reason why synthetics must be destroyed. Are they considered inferior? Without souls? Heck, Javik made more convincing arguments in favor of the destruction of synthetics. The Star Child just dictates it as an absolute truth, with no attempt from Shepard to get it reassessed, even though the ME universe has overwhelming evidence that AIs and synthetics are capable of being not only alive, but sentient enough to be considered people.

It's very clear that there is no difference between the Reapers and the synthetics they claim would destroy all organics, except that the Reapers are worse, in that they go to the absolute extreme of murdering all organic species capable of creating synthetics.

But, as we see in the game, Shepard is silenced by the Star Child's apparently infallible wisdom and does whatever the Reaper AI tells him to "end" the Reaper invasion, offering no alternative ideas of his/her own.

Edited by hudakj, 20 March 2012 - 04:23.


#169
ecarden
  • Members
  • 132 posts

CaptainZaysh wrote...

ecarden wrote...

ETA: I notice you still haven't addressed the Turian question. After all, they already have the capacity to eliminate humanity. So whatever should we do about them?


The turian threat is mitigated by our standing military, trade and diplomatic relations.  The geth threat is mitigated only by our military, and if they evolve into a superintelligence, the military will no longer cut it.  The two are obviously not the same.


Not at all. It's known that the Turians could have wiped us out. They were convinced not to, but they certainly could have. It's not a matter of balance of power, or MAD, it's a matter of being allies. And not crazy.

As for the superintelligence... why wouldn't our military cut it? Barring the starchild, there's no magic here. Our fleets are able to destroy individual Reapers, which have massive advantages that there's no reason to believe a Geth Dyson Sphere would have.

#170
CaptainZaysh
  • Members
  • 2,603 posts

nevar00 wrote...

This "kill everyone who might be a potential threat to you someday before they get you" argument might just be one of the dumbest things I have read on the internet in a long while. Congratulations.

In that case we should probably just nuke everyone, just in case they come after us first.

And if we're talking about the ME universe, we should probably kill the Salarians and Asari: they seem more intelligent than humans.  And the Turians, who have a better military.  And the Krogan, for being stronger.  Yeah, this sounds like a brilliant idea, and isn't completely ****ing insane.


It only seems insane because you haven't understood it.

#171
CaptainZaysh
  • Members
  • 2,603 posts

ecarden wrote...

As for the superintelligence... why wouldn't our military cut it? Barring the starchild, there's no magic here. Our fleets are able to destroy individual Reapers, which have massive advantages that there's no reason to believe a Geth Dyson Sphere would have.


You just haven't understood the concept of an intelligence explosion.  I don't know what I can do except ask you to read up on it, since I am obviously not explaining it well enough.
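For what it's worth, here is the bare-bones version of the idea (a toy recursion of my own, not anything from the games): let c_n be capability at design generation n, and suppose each generation's improvement scales with how capable the designer already is:

c_{n+1} = c_n (1 + k c_n), for some constant k > 0.

Once k c_n passes 1, c_n roughly squares every step, i.e. it outruns any fixed exponential, while organic R&D keeps improving at a more or less constant rate. That runaway feedback is what "intelligence explosion" refers to; whether the geth are actually on such a curve is, of course, exactly what we're arguing about.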

#172
ecarden
  • Members
  • 132 posts

CaptainZaysh wrote...

AntAras11 wrote...

I admit I'm not exactly proficient with the subject of technological singularity, but judging from the quote, it is presented as a possibility, not an inevitability.

"Superhuman intelligences may have goals inconsistent with human survival and prosperity"
"so too there is little reason to expect an arbitrary optimisation process to promote an outcome desired by mankind"
"When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind"


Yeah... you must admit the consequences of those maybes are rather dire. Now consider this: if there is a non-zero possibility of something occurring, then given a long enough period of time the chance of it actually happening is 100%.


Yes, and there's a nonzero chance of nuclear war. Oh my god, we're all ****ed! It's the end of the world, last one out, turn off the lights please.

We must kill all nuclear scientists, destroy all nuclear weapons and assassinate anyone who attempts to figure out our sacred mysteries. There is no other way to be safe. WAIT, YES THERE IS! KILL EVERYONE!

Again. Sarcasm. Genocide is bad, um-kay?

#173
CaptainZaysh
  • Members
  • 2,603 posts

ecarden wrote...

Yes, and there's a nonzero chance of nuclear war. Oh my god, we're all ****ed! It's the end of the world, last one out, turn off the lights please.

We must kill all nuclear scientists, destroy all nuclear weapons and assassinate anyone who attempts to figure out our sacred mysteries. There is no other way to be safe. WAIT, YES THERE IS! KILL EVERYONE!

Again. Sarcasm. Genocide is bad, um-kay?


And yet that's not what I'm arguing.  It's almost as if there's some nuance you haven't grasped.

#174
ecarden
  • Members
  • 132 posts

CaptainZaysh wrote...

ecarden wrote...

As for the superintelligence... why wouldn't our military cut it? Barring the starchild, there's no magic here. Our fleets are able to destroy individual Reapers, which have massive advantages that there's no reason to believe a Geth Dyson Sphere would have.


You just haven't understood the concept of an intelligence explosion.  I don't know what I can do except ask you to read up on it, since I am obviously not explaining it well enough.


No, I understand. I just don't agree.

You'll note, I do you the courtesy of assuming you truly believe and have thought about the whack-a-doodle things you're saying. Extend the same courtesy to me, please.

There's no evidence that superintelligence would be able to magically turn on disabled systems to attack dreadnoughts on their way to turn it into a pile of flaming rubble.

Now, if we knew that the starchild was a superintelligence, then maybe this would be an argument, but we don't. And even the starchild (allegedly) needs the infrastructure the Reapers built up to affect anything.

#175
ashdrake1
  • Members
  • 152 posts

hudakj wrote...

The Star Child's logic is basically that we must murder or be murdered, and if we don't, synthetics will do everything they can to drive organics to extinction.

Using that logic, the Star Child fails to even acknowledge why these lengths must be taken to preserve organic life.

All but the most extremely renegade Shepard would ask questions such as "In the name of what? To preserve what? If organic life does continue to exist, what are we? Better than what we say they are? What gives us the right to live, then? What makes us worth surviving? That we are ruthless enough to strike first and the hardest?"

The Star Child could at least try to humor Shepard with a reason why synthetics must be destroyed. Are they considered inferior? Without souls? Heck, Javik made more convincing arguments in favor of the destruction of synthetics. The Star Child just dictates it as an absolute truth, with no attempt from Shepard to get it reassessed, even though the ME universe has overwhelming evidence that AIs and synthetics are capable of being not only alive, but sentient enough to be considered people.

It's very clear that there is no difference between the Reapers and the synthetics they claim would destroy all organics, except that the Reapers are worse, in that they go to the absolute extreme of murdering all organic species capable of creating synthetics.

But, as we see in the game, Shepard is silenced by the Star Child's apparently infallible wisdom and does whatever the Reaper AI tells him to "end" the Reaper invasion, offering no alternative ideas of his/her own.



This is not the starchild's logic. His logic is that we will be murdered by ourselves. To him it's a death-and-taxes thing. I don't know his motives for preserving organic life in the galaxy. That has nothing to do with my point. Again, I am not defending the ending, just the AI vs. organics bit.