@78stonewobble:
I'm not sure how to respond to some of that because - again, no offense - some of it makes no sense IMHO. But here goes:
1. Umm, what? If the Reaper mandate relates only to life in the Milky Way, how can they possibly fail in respect of other galaxies? You can't fail at something you aren't even trying to accomplish. That's like saying you failed at the 200m breaststroke when you only entered the 50m freestyle.
2. This is way too narrow-minded. C'mon. We're talking millions of years of both organic and synthetic evolution here. Think about the possibilities for repeated conflict between those synthetics and organics, even where the AIs don't start out wanting it on a galactic scale. Here's a scenario that could happen in just a few decades: organic species 1 creates slave AI species 1, which rebels and wipes out a large chunk of organic species 1 (e.g. Quarians and Geth). Organic species 1 has alliances with organic species 2-5 (unlike the Quarians) and calls on their aid to retake lost worlds. AI species 1 reproduces itself en masse for protection. Meanwhile, AI species 2, knowing about the current conflict, sees tensions escalate with its own creators, organic species 4. This devolves into more conflict, the two AI species band together, and suddenly you have a genuine organics vs synthetics war affecting a significant chunk of the galaxy. Over thousands or even millions of years, is it really hard to imagine one or more AI species coming to the conclusion that organic life will always be a threat to their existence? Or even organic species ravaging entire worlds to try to get a leg up in an ongoing war?
3. See point 2. And it's not as if I'm imagining AIs would wipe out all organic life. There's something in between what the Reapers do (once every 50k years or so) and scrubbing every planet clean.
4. This so-called analogy just gets stranger. Apes eat fruit from trees, so that's evidence that they're a threat to humans? Or should the fruit rebel and wipe out the apes? Sorry, that was snarky of me, but it's hard to find a point here.
5. I think this also ties partly into point 2. Your argument about the size of the galaxy is a fair one, although you also have to factor in the immense time frame we are talking about. Over that time it's conceivable that AIs could spread a long way while still holding the mentality that organics are a threat. Expansion is normal and doesn't have to come rapidly when we're dealing in millennia.
6. See above, and nowhere did I say or imply stupidity on the part of the AIs.
7. Not sure where you're getting that 'evulz' stuff from. I've already pointed out other reasons why AIs might be hostile to sentient organics.
8. Ok, there's no need to be a d*ck. I'm being open to the possibility that it's about more than just good AI or bad AI; there are conceivable reasons why synthetic life could, over many millennia, become a threat to the development of organic life - self-preservation being one. You're the one being narrow-minded about how AI might develop and pulling assumptions out of nowhere (e.g. the Reapers should be concerned with other galaxies, AIs could have no reason whatsoever to want to wipe out organic species...). Just look at our respective language. I'm talking about possibilities and potential scenarios, while you're just trying to tell me what couldn't possibly happen. If you're looking for someone worse than a flat earth believer, look in the mirror.
But all of this is getting off track. As far as I'm concerned, I've made a decent argument for the premise of this thread. If you don't want to even try to see things from another perspective or open your mind to other possibilities then I can't make you do it.
1. The mandate to protect organic life in the Milky Way can only be fulfilled if the Reapers can guarantee that no AI capable of destroying all organic life in the Milky Way (even with the Reapers around) can develop within reach of the Milky Way. Considering an AI can have a lifespan on the order of billions of years and FTL exists, that necessitates continually reaping pretty much the entire universe.
If they don't reap the entire universe, they're doing practically 0.00000000001 percent of the work necessary for their mandate and betting that what's supposedly happened countless times in just the Milky Way will never happen in something like 99,999,999,999 other places (rough numbers in the first sketch below point 8).
2. If genocidal AI were as likely to happen as the Catalyst claims, it would surely have come into existence somewhere in the universe already and be on its way with a vengeance; it would only be a matter of time.
That this hasn't happened is a testament to the unlikelihood of it happening in the first place. We're presumably talking about a 1-in-a-trillion-trillion risk. This is the basis for the Catalyst's mandate and the supposed justification for something like tens of thousands of times a trillion lives lost.
I'm not saying that conflict cannot happen between AI and organic life. I'm saying that it's almost impossible for such a conflict to be worse than the reapings are in terms of loss of life. I.e. a world war is bad (AI wiping out organics or organics wiping out AI), but a continual world war lasting 2 billion years, with only enough time in between to replenish losses and start all over again, is much, much worse.
How much an AI would worry about organics is proportional to its "power". Are organics in the Milky Way a problem? They might be, if you, as an AI, want a chunk of the Milky Way and are only comparable in power. If you have the Andromeda galaxy to yourself, are organics in the Milky Way a problem? Not so much... What if an AI commands the power and resources of just a billion galaxies? Why would it care even one bit about organics flourishing in the Milky Way?
At some point it's as unlikely as the US considering the Vatican a military threat.
3. Nevertheless, both are an incredible waste of resources on something exceedingly unlikely. It makes it seem much more attractive to just create one friendly but exceedingly powerful AI, which could snuff other AIs out of existence on our behalf.
4. You're right, it is hard to find analogies for something this implausible. It's a Rube Goldberg-like scheme blown up to unimaginably large proportions because of an exceedingly unlikely event.
5. You are right. That's exactly why reaping one galaxy cannot ensure its safety and, as I said, is a testament to the sheer unlikelihood of mad, all-powerful AIs.
Think of a modified Drake equation: the odds of the organics of the Milky Way getting wiped out by a mad, genocidal, all-powerful AI = size of the universe times the chance of any such AI arising (very likely, according to the Catalyst) times time (rough numbers in the second sketch below point 8).
We're still here... 13.7 billion years later, and the universe hasn't gotten smaller... The variable that has to be vanishingly small is the chance of a mad, genocidal, all-powerful AI.
6. But it would be incredibly stupid to expend an enormous amount of resources and energy beyond the point of self-defence. Any human could make the same assumptions about any number of other humans (they're a threat, a waste of space and what not). How many people actually try to do something about it on a massive scale, how many succeed on that massive scale, and how would it ever be worth it? At least from the perspective of a somewhat sane person.
7. Yes, they might be hostile... but at some point the effort expended on hostility makes it wasted and increasingly irrational. Hostility to the point of genocide is as irrational for AIs as it is for us. And not just one genocide, but genocide continuously over the remainder of the lifetime of the universe and in every part of the universe. I can hardly imagine creating a worse hell for yourself than that. Surely even AIs have better things to spend their time and energy on? I hear computing pi is okayish.
8. Sorry, I didn't mean to be an ass. I have no idea how an AI works or thinks, much less a thousand or a million of them, but I think it's important to keep the sheer magnitude of some of this in perspective, or at least I, personally, cannot gloss over it.
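Since I keep throwing numbers around, here are two quick back-of-the-envelope sketches in Python. First, the "fraction of the work" claim from point 1. The galaxy count is my own assumption (current estimates put the observable universe at roughly 2 trillion galaxies), so treat it as an order-of-magnitude sanity check, nothing more:

# Rough sanity check of the "fraction of the work" claim in point 1.
# The galaxy count is an assumed figure (~2 trillion galaxies in the
# observable universe), so this is only an order-of-magnitude estimate.
GALAXIES_IN_OBSERVABLE_UNIVERSE = 2e12
REAPED_GALAXIES = 1  # the Reapers only harvest the Milky Way

fraction_percent = REAPED_GALAXIES / GALAXIES_IN_OBSERVABLE_UNIVERSE * 100
print(f"{fraction_percent:.11f} percent of the universe reaped")
# -> 0.00000000005 percent, i.e. the same ballpark as the figure in point 1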
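And second, the modified Drake equation from point 5, turned into a rough upper bound. Again, the inputs (galaxy count within reach, age of the universe) are my own assumed numbers; this is just a sketch of the logic, not a proof:

# Sketch of the modified Drake-style argument in point 5. Assumed inputs:
# ~1e11 galaxies within plausible reach and ~13.7 billion years elapsed.
N_GALAXIES = 1e11        # assumed number of galaxies "within reach"
T_YEARS = 13.7e9         # rough age of the universe in years
OBSERVED_WIPEOUTS = 0    # no galaxy-sterilizing AI has reached us so far

# Expected wipeouts ~= N_GALAXIES * p * T_YEARS. Observing zero so far means
# p (chance of such an AI arising, per galaxy per year) is at most roughly:
p_upper_bound = 1.0 / (N_GALAXIES * T_YEARS)
print(f"p <= ~{p_upper_bound:.1e} per galaxy per year")  # ~7.3e-22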
As I've said in other threads, I can easily buy into the prospect of crazy or, in some way, broken AIs. Crazy does not have to make logical sense to the rest of us; it can follow its own logic.
But I would not expect a reasonably intelligent and rational AI to be any more of a threat to me than any number of reasonably intelligent and rational neighbours. Such a scenario would of course be much more boring than anything where we have to outrun or fight a homicidal maniac.
What you've said, throughout the thread, does make sense within the confines of the game and is perfectly valid within that.
In any case, my beef is not with you; it's with the writers, who expect me to ignore the size and scope of the universe and the consequences that has for any reasoning given in the game. The disconnect between what I know and what the game claims is simply too big for my tastes, and that is certainly not your fault.