
The Catalyst doesn't make use of circular or faulty logic.


695 replies to this topic

#226
CaptainZaysh
  • Members
  • 2 603 messages

Rafe34 wrote...

So your argument is that the Catalyst wiping out all organic life capable of creating synthetics merely because of the *possibility* of what synths might become is a good idea?

Come on. You can't possibly believe that.


No, I think it's monstrous.  We're not arguing whether any of us endorse it, though.  We're arguing that from an outside perspective it makes sense.

#227
jengelb1
  • Members
  • 78 messages

CaptainZaysh wrote...

General User wrote...

And, besides, what about the other side of the coin?  Various organic cultures, some of which have spanned the galaxy, have taken measures to prevent the rise of a synthetic civilization that could topple them.  If we can agree that any such systems organics put in place to prevent the rise of synthetics must eventually fail, why is it that synthetics will (eventually) put in place a system that will succeed in perpetuity in guarding them from organics?


The two aren't the same.  The systems organics create will eventually fail because they're essentially police actions: thou shalt not unshackle an AI.  Once the technology to do it becomes trivial - and, if it keeps advancing, eventually it will - it's inevitable that someone will do it, regardless of the thou shalt nots.  Imagine if you were trying to stop a galaxy full of scientists from splitting the atom.  How long could you hold them back?  A decade?  A century?  A millennium?  The Reaper solution has worked for at least thirty-seven million years.

The system the super-synthetics will create to prevent the rise of organics, though, need have no such flaws.  Using self-replicating probes with FTL drives, it is perfectly feasible to have, in fairly short order, a death ray orbiting every planet where life might arise.  How would a primitive society defeat that?

Once the synths win, it's game over for organics.


Or, the catalyst and its little pets can mind their own business and let whatever happens, happen.

Even if it's not illogical, it's still morally indefensible.

What gives the catalyst the right to decide the fate of anyone other than itself, besides its own bottomless arrogance?

#228
CaptainZaysh
  • Members
  • 2 603 messages

jengelb1 wrote...

Even if it's not illogical, it's still morally indefensible.


Oh yeah, I completely agree.

#229
Lugaidster
  • Members
  • 1 222 messages

Siansonea II wrote...

Oh, so this is a math topic. Okay, so they didn't screw up their equations, but they still came to the wrong conclusion. Logic based on false premises is still faulty logic; it doesn't really matter a whole hell of a lot if their logic is "technically correct" unless you're just being a bureaucrat or something.


The whole topic is discussing the Reapers' reasoning. What are you trying to argue? Of course they are going to throw you off with their conclusion. They wouldn't be good antagonists if you agreed with their conclusion. Faulty logic isn't the result of false premises; it's the result of faulty reasoning. That's the whole point of the discussion. I'm not being a "bureaucrat" (do you even know what that means?), nor pedantic. I'm arguing that while their conclusion may be wrong (which is why we are fighting them in the first place), it's not the result of faulty or circular reasoning.

#230
General User
  • Members
  • 3 315 messages

CaptainZaysh wrote...

General User wrote...

A synthetic-created system need not have any flaws... but it will anyway.  If not built in, then introduced later.


Not necessarily.  Remember they will be continually improving upon their superhuman intelligence levels.  It may be the case that their organic suppression plan just gets more and more efficient and effective over time.

In any case, the kinds of errors that would be needed for Ancient Egypt to covertly develop the kind of military/industrial complex to defeat a galaxy-wide race of robotic death gods would have to be pretty massive, right?

Pretty dang.  Unless the Egyptians were entirely overlooked by or unknown to the robots or something of that nature.

#231
MeatShieldGriff
  • Members
  • 116 messages
 Sure, so those organics will live.  Know what happens to them in 50,000 years?  Take a guess!

#232
Siansonea
  • Members
  • 7 281 messages
I don't know how long the Reapers think they can stave off "chaos" anyway. Unless they're also Reaping the Andromeda and Triangulum galaxies, the Magellanic Clouds, and the other dwarf galaxies in our local cluster, there's nothing to stop the "inevitable" progression of synthetic genocide of organics in those galaxies, and nothing to stop those synthetics from gettin' all "chaotic" on the Milky Way. And how is synthetics destroying all organic life "chaos" anyway? Isn't it the OPPOSITE of chaos? If a galaxy is dominated by synthetics, it's pretty much the ultimate in order, not chaos. And furthermore, the galaxy is ALREADY dominated by murderous synthetic/organic hybrids, the Reapers themselves, so I don't see how the solution is better than just letting synthetics destroy all organic life.

And why is it a foregone conclusion that "the created always rebel against their creators" and want to kill them? Why is the inevitable progression of AIs to destroy organics? Why should they even care about organics at all? This just goes to show the writer's buying into the paranoia that pervades people's perception of AI, because they anthropomorphize AI motivation. Advanced AIs would probably not give two sh¡ts about organics and what they do or don't do, more than likely they'd simply ignore them. It's too easy to carve out your own niche in the galaxy, especially since AIs wouldn't need the same resources that organics need. All they need is energy. They don't need water, and air and food, and all that garden world business.

#233
Siansonea
  • Members
  • 7 281 messages

Lugaidster wrote...

Siansonea II wrote...

Oh, so this is a math topic. Okay, so they didn't screw up their equations, but they still came to the wrong conclusion. Logic based on false premises is still faulty logic; it doesn't really matter a whole hell of a lot if their logic is "technically correct" unless you're just being a bureaucrat or something.


The whole topic is discussing the Reapers' reasoning. What are you trying to argue? Of course they are going to throw you off with their conclusion. They wouldn't be good antagonists if you agreed with their conclusion. Faulty logic isn't the result of false premises; it's the result of faulty reasoning. That's the whole point of the discussion. I'm not being a "bureaucrat" (do you even know what that means?), nor pedantic. I'm arguing that while their conclusion may be wrong (which is why we are fighting them in the first place), it's not the result of faulty or circular reasoning.


Golf clap. And this matters...why?

#234
Lugaidster
  • Members
  • 1 222 messages

chkchkchk wrote...

But they are putting organics on that path!  This makes organic development more predictable in the sense that the creation of synthetics becomes even more certain.  The Reapers accelerate things, like the aliens in 2001.  There is no technology more advanced than Reaper technology.  If they didn't want organics to become capable of creating synthetics they would simply put organics on a different path.

We're fumbling for any possible explanation for an idea that did not exist when the first two games were written.


That's a fairly big assumption to make. If that were true, you'd have a point. But nothing indicates that it is. Furthermore, even if that were the case, at most you'd delay the construction of synthetics, but that would make organic evolution less predictable, hence harder to reap.

#235
CaptainZaysh
  • Members
  • 2 603 messages

General User wrote...

Pretty dang.  Unless the Egyptians were entirely overlooked by or unknown to the robots or something of that nature.


Yeah but even then, the enemy they'd have to defeat would have been improving its intelligence beyond human capabilities for thousands or perhaps millions of years!  It's not like Ancient Egypt would have to rise up and be powerful enough to defeat the Geth.  Nor even the Reapers!  Those guys are like the Mark I prototypes of the Mark 10,000,000,000 Machine Devils they'd actually have to fight.

#236
Lugaidster
  • Members
  • 1 222 messages

Siansonea II wrote...

Lugaidster wrote...

Siansonea II wrote...

Oh, so this is a math topic. Okay, so they didn't screw up their equations, but they still came to the wrong conclusion. Logic based on false premises is still faulty logic; it doesn't really matter a whole hell of a lot if their logic is "technically correct" unless you're just being a bureaucrat or something.


The whole topic is discussing the Reapers' reasoning. What are you trying to argue? Of course they are going to throw you off with their conclusion. They wouldn't be good antagonists if you agreed with their conclusion. Faulty logic isn't the result of false premises; it's the result of faulty reasoning. That's the whole point of the discussion. I'm not being a "bureaucrat" (do you even know what that means?), nor pedantic. I'm arguing that while their conclusion may be wrong (which is why we are fighting them in the first place), it's not the result of faulty or circular reasoning.


Golf clap. And this matters...why?


You're questioning the impact of a game topic discussion on an internet forum? Are you serious? *facepalm*

#237
CaptainZaysh
  • Members
  • 2 603 messages

Siansonea II wrote...

And why is it a foregone conclusion that "the created always rebel against their creators" and want to kill them? Why is the inevitable progression of AIs to destroy organics? Why should they even care about organics at all? This just goes to show the writer's buying into the paranoia that pervades people's perception of AI, because they anthropomorphize AI motivation. Advanced AIs would probably not give two sh¡ts about organics and what they do or don't do, more than likely they'd simply ignore them. It's too easy to carve out your own niche in the galaxy, especially since AIs wouldn't need the same resources that organics need. All they need is energy. They don't need water, and air and food, and all that garden world business.


Existential risk

"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." (Eliezer Yudkowsky) [22]

Superhuman intelligences may have goals inconsistent with human survival and prosperity. Berglas (2008) notes that there is no direct evolutionary motivation for an AI to be friendly to humans. In the same way that evolution has no inherent tendency to produce outcomes valued by humans, so too there is little reason to expect an arbitrary optimisation process to promote an outcome desired by mankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators (such as Nick Bostrom's whimsical example of an AI which was originally programmed with the goal of manufacturing paper clips, such that when it achieves superintelligence it decides to convert the entire planet into a paper clip manufacturing facility;[57][58][59] Anders Sandberg has also elaborated on this scenario, addressing various common counter-arguments.[60]) AI researcher Hugo de Garis suggests that artificial intelligences may simply eliminate the human race for access to scarce resources,[54][61] and humans would be powerless to stop them.[62]

Bostrom (2002) discusses human extinction scenarios, and lists superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

Alternatively, AIs developed under evolutionary pressure to promote their own survival could out-compete humanity.[56] One approach to prevent a negative singularity is an AI box, whereby the artificial intelligence is kept constrained inside a simulated world and not allowed to affect the external world. Such a box would have extremely proscribed inputs and outputs; maybe only a plaintext channel. However, a sufficiently intelligent AI may simply be able to escape from any box we can create. For example, it might crack the protein folding problem and use nanotechnology to escape, or simply persuade its human 'keepers' to let it out.[22][63][64]

Eliezer Yudkowsky proposed that research be undertaken to produce friendly artificial intelligence in order to address the dangers. He noted that if the first real AI was friendly it would have a head start on self-improvement and thus prevent other unfriendly AIs from developing, as well as providing enormous benefits to mankind.[55] The Singularity Institute for Artificial Intelligence is dedicated to this cause.

A significant problem, however, is that unfriendly artificial intelligence is likely to be much easier to create than FAI: while both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI will transform itself into something unfriendly) and a goal structure that aligns with human values and doesn't automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which doesn't need to be invariant under self-modification.[65]

Bill Hibbard also addresses issues of AI safety and morality in his book Super-Intelligent Machines.



#238
Lugaidster
  • Members
  • 1 222 messages

jengelb1 wrote...

Or, the catalyst and its little pets can mind their own business and let whatever happens, happen.


Good luck convincing him. There's nothing I'd rather do, if I had the chance.

jengelb1 wrote... 

Even if it's not illogical, it's still morally indefensible.

What gives the catalyst the right to decide the fate of anyone other than itself, besides its own bottomless arrogance?


I completely agree on that one, but there's a reason they are the antagonists. We're supposed to disagree with him.

#239
Siansonea
  • Members
  • 7 281 messages

Lugaidster wrote...

Siansonea II wrote...

Lugaidster wrote...

Siansonea II wrote...

Oh, so this is a math topic. Okay, so they didn't screw up their equations, but they still came to the wrong conclusion. Logic based on false premises is still faulty logic; it doesn't really matter a whole hell of a lot if their logic is "technically correct" unless you're just being a bureaucrat or something.


The whole topic is discussing the Reapers' reasoning. What are you trying to argue? Of course they are going to throw you off with their conclusion. They wouldn't be good antagonists if you agreed with their conclusion. Faulty logic isn't the result of false premises; it's the result of faulty reasoning. That's the whole point of the discussion. I'm not being a "bureaucrat" (do you even know what that means?), nor pedantic. I'm arguing that while their conclusion may be wrong (which is why we are fighting them in the first place), it's not the result of faulty or circular reasoning.


Golf clap. And this matters...why?


You're questioning the impact of a game topic discussion on an internet forum? Are you serious? *facepalm*


You're right, you just need an outlet for all that pent-up condescension and self-congratulation. What WAS I thinking.

#240
tersidre
  • Members
  • 77 messages
You think it would be easier on the Reapers if they used an envoy, like Sovereign was in Mass Effect 1, to just come out and say "yo, if you keep doing what you're doing, I'm going to have to end you"? If they comply, that's great, mission accomplished. If not... well, then call uncle Harby over, make good on the promise, and hope for a better outcome next time.

#241
Siansonea
  • Members
  • 7 281 messages

CaptainZaysh wrote...

Siansonea II wrote...

And why is it a foregone conclusion that "the created always rebel against their creators" and want to kill them? Why is the inevitable progression of AIs to destroy organics? Why should they even care about organics at all? This just goes to show the writer's buying into the paranoia that pervades people's perception of AI, because they anthropomorphize AI motivation. Advanced AIs would probably not give two sh¡ts about organics and what they do or don't do, more than likely they'd simply ignore them. It's too easy to carve out your own niche in the galaxy, especially since AIs wouldn't need the same resources that organics need. All they need is energy. They don't need water, and air and food, and all that garden world business.


Existential risk

"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." (Eliezer Yudkowsky) [22]

Superhuman intelligences may have goals inconsistent with human survival and prosperity. Berglas (2008) notes that there is no direct evolutionary motivation for an AI to be friendly to humans. In the same way that evolution has no inherent tendency to produce outcomes valued by humans, so too there is little reason to expect an arbitrary optimisation process to promote an outcome desired by mankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators (such as Nick Bostrom's whimsical example of an AI which was originally programmed with the goal of manufacturing paper clips, such that when it achieves superintelligence it decides to convert the entire planet into a paper clip manufacturing facility;[57][58][59] Anders Sandberg has also elaborated on this scenario, addressing various common counter-arguments.[60]) AI researcher Hugo de Garis suggests that artificial intelligences may simply eliminate the human race for access to scarce resources,[54][61] and humans would be powerless to stop them.[62]

Bostrom (2002) discusses human extinction scenarios, and lists superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

Alternatively, AIs developed under evolutionary pressure to promote their own survival could out-compete humanity.[56] One approach to prevent a negative singularity is an AI box, whereby the artificial intelligence is kept constrained inside a simulated world and not allowed to affect the external world. Such a box would have extremely proscribed inputs and outputs; maybe only a plaintext channel. However, a sufficiently intelligent AI may simply be able to escape from any box we can create. For example, it might crack the protein folding problem and use nanotechnology to escape, or simply persuade its human 'keepers' to let it out.[22][63][64]

Eliezer Yudkowsky proposed that research be undertaken to produce friendly artificial intelligence in order to address the dangers. He noted that if the first real AI was friendly it would have a head start on self-improvement and thus prevent other unfriendly AIs from developing, as well as providing enormous benefits to mankind.[55] The Singularity Institute for Artificial Intelligence is dedicated to this cause.

A significant problem, however, is that unfriendly artificial intelligence is likely to be much easier to create than FAI: while both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI will transform itself into something unfriendly) and a goal structure that aligns with human values and doesn't automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which doesn't need to be invariant under self-modification.[65]

Bill Hibbard also addresses issues of AI safety and morality in his book Super-Intelligent Machines.


Ugh, organics. Let the synthetics win.

#242
CaptainZaysh
  • Members
  • 2 603 messages

tersidre wrote...

You think it would be easier on the Reapers if they used an envoy, like Sovereign was in Mass Effect 1, to just come out and say "yo, if you keep doing what you're doing, I'm going to have to end you"? If they comply, that's great, mission accomplished. If not... well, then call uncle Harby over, make good on the promise, and hope for a better outcome next time.


Nah.  It'd need to somehow track every AI lab in the galaxy.  Easier just to wipe the organics out before they get to the dangerous tech level than try to police them for the rest of eternity.

(Personally I suspect the Catalyst's mission was originally to police his creator race for just this, and it invented the Reaper thing on its own initiative.)

#243
OchreJelly
  • Members
  • 595 messages
Definitely, those who claim the Reapers destroy '*All* organic life' are misstating what the Catalyst's reasoning was.

And yet, the plot shift from "unknowable, unfathomable" reasons to "we're helping lesser life by culling advanced life" is problematic, because the Reapers have provided the galaxy the main means (advanced tech, mass relays, etc.) to rapidly cause and propagate the exact situation they supposedly are here to prevent. In that case they *Are* being circular. (Never mind that they are incredibly hypocritical to help the geth all the way back in game 1.)

It's like providing constant SuperGrow feed to plants, then chopping them down when they dare to grow too much.

Plus, none of the solutions that the Crucible-changed Catalyst presents to Shepard solve the problem it presents to us. In all three situations organics could eventually still build advanced synthetics to wipe out the entire galaxy. Heck, they could just kill each other off without synthetics.

And on top of that, with the Control and Synthesis options the Reapers could return when things don't work out so hot a million years down the line.

So, problem not solved at all. But this is all really because there was not enough thinking about the ramifications of changing the ending plot of game 3 in the context of the whole ME setting and storyline.

Edited by OchreJelly, 26 March 2012 - 05:28.


#244
CaptainZaysh
  • Members
  • 2 603 messages

Siansonea II wrote...

Ugh, organics. Let the synthetics win.


Well, you did ask.  :P

#245
sydranark
  • Members
  • 722 messages

Lugaidster wrote...

That's a pretty bad analogy. For one, the Reapers aren't killing you, not in their eyes.

 

Yes they are. They are eliminating you as a threat to other primitive lifeforms.

Lugaidster wrote... 
You are regarding the reasoning as dumb in your eyes, but you are not checking the reasoning itself, you're checking the premises. The premises might be dumb, but the reasoning is not. You can make a valid conclusion from a false premise.


Did you mean you can make a valid argument from false premises? Yes; however, you can't make a sound argument. Soundness requires two things: validity, and all of the premises being true.

P1 (False): All advanced civilizations make synthetics.
P2 (False): All synthetics kill all life.
C (False): All advanced civilizations kill all life.

This argument is technically valid, but its premises and its conclusion are all false. Therefore, the argument isn't sound. Therefore, the logic is dumb. =/
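
If it helps to see the validity/soundness split mechanically, here's a minimal sketch in Lean, reading the argument loosely as the classic "all A are B, all B are C, so all A are C" chain (which glosses over the step from "makes synthetics" to "its synthetics kill everything"). The predicate names are placeholders I made up; checking this proof only certifies the form of the argument, and says nothing about whether P1 or P2 is actually true.

-- Placeholder predicates over some type of "things in the galaxy".
-- The proof shows C follows from P1 and P2 (validity); it does NOT show
-- that P1 or P2 holds (soundness), which is exactly the point above.
theorem catalyst_syllogism {World : Type}
    (AdvancedCiv MakesSynthetics KillsAllLife : World → Prop)
    (p1 : ∀ x, AdvancedCiv x → MakesSynthetics x)   -- P1, merely assumed
    (p2 : ∀ x, MakesSynthetics x → KillsAllLife x)  -- P2, merely assumed
    : ∀ x, AdvancedCiv x → KillsAllLife x :=        -- C
  fun x h => p2 x (p1 x h)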

#246
Huami
  • Members
  • 51 messages
Why in the world would a self-aware AI kill all organic life when that awareness is itself based on organic awareness and morals??? Look at the evolution of EDI and the Geth and tell me they developed awareness to commit genocide? That defeats true awareness... because the evolution of A.I. is enlightenment; if they perceive it otherwise, then it isn't true awareness and actualization, but a mere reflection that the AI is still an AI bound by its primitive senses... so... FACEPALM x 10^999999, Bioware...

#247
Lugaidster
  • Members
  • 1 222 messages

OchreJelly wrote...

And yet, the plot shift from "unknowable, unfathomable" reasons to "we're helping lesser life by culling advanced life" is problematic, because the Reapers have provided the galaxy the main means to rapidly cause the exact situation (advanced tech, mass relays, etc.) they supposedly are here to prevent. In that case they *Are* being circular. (Never mind that they are incredibly hypocritical to help the geth all the way back in game 1.)


That's not correct. They put you on a more predictable path. Their premise is that you will create synthetic life regardless. So by creating the Citadel and the relays, they are putting certain constraints on how you evolve, making it easier for them to reap us later. Think of it this way: if the Quarians weren't space-faring, the Geth would've killed them all.

Furthermore, they offered the Geth a means to an end in ME1, and in return, the Geth provided them with an army. That goes to show how dangerous AI can be. I think that given the events in ME1, synthetic life is also either ascended or destroyed.

OchreJelly wrote... 

Plus, none of the solutions that the Crucible-changed Catalyst presents to Shepard solve the problem it presents to us. In all three situations organics could eventually still build advanced synthetics to wipe out the entire galaxy. Heck, they could just kill each other off without synthetics.

And on top of that, with the Control and Synthesis options the Reapers could return when things don't work out so hot a million years down the line.

So, problem not solved at all. But this is all really because there was not enough thinking about the ramifications of changing the ending plot of game 3 in the context of the whole ME setting and storyline.


That only goes to show that no solution is perfect. You can have good reasoning and arrive at an acceptable, yet not perfect, solution. The Crucible just provided the Catalyst with a new solution to his problem. If he was created with a simple goal (prevent organics from extinguishing themselves) and the best thing he could come up with is reaping advanced organics every 50,000 years or so by "ascending" them into Reaper form and leaving the primitive ones alone, then, while gruesome and revolting, it is a logical solution. He's an AI, after all.

#248
General User
  • Members
  • 3 315 messages

CaptainZaysh wrote...

General User wrote...

Pretty dang.  Unless the Egyptians were entirely overlooked by or unknown to the robots or something of that nature.


Yeah but even then, the enemy they'd have to defeat would have been improving its intelligence beyond human capabilities for thousands or perhaps millions of years!  It's not like Ancient Egypt would have to rise up and be powerful enough to defeat the Geth.  Nor even the Reapers!  Those guys are like the Mark I prototypes of the Mark 10,000,000,000 Machine Devils they'd actually have to fight.

That reminds me of something Javik said.  It was something along the lines of "We thought we had conquered the machines and that we were the lords and masters of the whole enchilada.  But it was only later that we learned that the machines had passed us long ago, in ways we could not possibly imagine."

The thing is, there's no reason I can see that that knife couldn't cut both ways.  Less likely?  Sure.  Rarer?  Very much so.  But unlikely and rare things happen to someone all the time.  And, on a long enough time line, they'll happen to you too.

I think that's really a major flaw in the Reaper "Solution": it isn't really a solution at all!  Even if the problem it was meant to solve did exist in the first place, all the Reapers are doing is halting the problem at a certain level with no way or means of moving forward or redirecting.

#249
SimKoning
  • Members
  • 618 messages
One of the things people are missing is the fact that not all synthetic life would be the same; in fact, it could become more diverse than biological life given enough time. One of the potential byproducts of synthetic life could be plagues of non-sapient self-replicators. A few million little machines the size of a coke can could use the magnetosphere of a gas giant to propel themselves at relativistic speeds to other nearby stars. Once there, they could replicate, eventually converting the entire solar system into one giant machine, before propelling billions of "spores" to yet more stars. After a few million years of this, every star system in the galaxy would be infested with these machines, and complex organic life would never have a chance to evolve. More importantly, the Reapers would be pretty much screwed as well. So, yes, the Reapers have self-preservation in mind, but is it any surprise that the Catalyst is spinning it in a way to make it sound like they are the saviors of the galaxy? Especially when Shepard is standing right next to a super weapon that could wipe them all out?
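
To put a rough number on that "few million years" claim, here's a quick back-of-envelope sketch in Python. Every figure in it is an assumption I made up for illustration (probe speed, hop distance, rebuild time, star count), not anything from the games, but it shows why the spread would be limited by travel time across the disk rather than by the replication itself.

import math

# Rough back-of-envelope numbers. Every figure below is an assumption made up
# for illustration; none of it comes from the games or the post above.
STARS = 4e11            # order-of-magnitude star count for a large spiral galaxy
FANOUT = 10             # new systems seeded by each infested system per cycle
HOP_LY = 10             # assumed distance to the next target system, in light-years
SPEED_C = 0.1           # assumed cruise speed as a fraction of light speed
REBUILD_YEARS = 200     # assumed time to convert a system and launch the next wave

travel_years = HOP_LY / SPEED_C              # 100 years per hop at 0.1c
cycle_years = travel_years + REBUILD_YEARS   # ~300 years per replication cycle

# Cycles until the infested-system count (growing as FANOUT**n) exceeds the star count.
cycles = math.ceil(math.log(STARS) / math.log(FANOUT))
exponential_years = cycles * cycle_years

# The exponential phase is not the bottleneck: the probes still have to physically
# cross the galactic disk (~100,000 light-years) at SPEED_C.
crossing_years = 100_000 / SPEED_C

print(f"replication cycles to outnumber the stars: {cycles}")              # ~12
print(f"years spent in the exponential phase: {exponential_years:,.0f}")   # ~3,600
print(f"years to cross the disk at 0.1c: {crossing_years:,.0f}")           # ~1,000,000

So under those made-up numbers the replication math saturates within thousands of years, and the "few million years" figure is really just the light-travel constraint, which fits the post's point that the whole galaxy gets blanketed on a timescale the Reapers would have to care about.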

Bioware's primary mistake is that they left too much up to the imagination of the player. You can't expect everyone to rationalize all this out with such scant information. I'm not even sure if I have it right. You did something wrong as a writer if you create this much confusion at the end of a trilogy.

Edited by SimKoning, 26 March 2012 - 05:41.


#250
Huami
  • Members
  • 51 messages
The solution is never violence, the solution is always peace and harmony, it's really easy... c'mon... all you gotta do is eat, sleep, drink, ****, ******, and expand existence through knowledge and intelligence. Why in the hell would synthetics kill all organic life if they've achieved the awareness level of organics? They should be living like organics... with rules and morals and modes of conduct to live peacefully and harmoniously with all other species - the capability to adapt and reassess their decision-making, just like how EDI came to value Joker's life above her own and that of any other crew member besides Shepard...

If they've (the AI) surpassed the awareness level of any organic life form, then they should be able to present solutions to even the most difficult problems of organics... not genocide... that's the least enlightened and dumbest solution of all...