As far as I understand you now, you mean that you don't think any kind of scientific prediction could be extrapolated to astronomical scales, e.g. to a far, far future, so there are no bounds to speculation. Well, physicists have a lot of fun speculating about the whole universe based on their theories, but they usually don't speculate about life.
I have no problem with extrapolating to astronomical scales or with predictions of the far future. One just has to acknowledge the limits of our perception of reality. That perception (and with it our understanding) grows further and wider as we develop tools to measure the reality around us in scale, time and spectrum.
An example: a few hundred years ago, our perception was confined pretty much to whatever happened on this planet and any celestial body that was visible in the night sky with the naked eye. We could perceive the visible spectrum of electromagnetic waves and the audible spectrum of sound, etc. Based on this perception, people came up with theories and models of how the world works. A lot of them are surprisingly accurate from today's perspective; others seem quaint and maybe even a bit silly now, even though they were based on perfectly sound reasoning at the time.
Today, we have developed tools that allow us to perceive and measure a whole different range of scales, from the subatomic to the span of galaxies (I don't even want to get started on the cosmic microwave background radiation). In terms of spectra, we can pick up everything from radio waves to gamma rays. In terms of time, our predictions and theories also grow in scale and, hopefully, in accuracy. With these discoveries come new insights about the nature of our reality; think of the emergence of, say, quantum mechanics in the 20th century. Yet I would hardly go so far as to say that our perspective now is any more complete than that of the people who studied nature a few hundred years ago.
Thus, our theories about the cosmos are great, and I'd be the last to dismiss them, but one has to acknowledge that they are based on our current understanding of reality, and any reputable scientist would have to consider the possibility that they are inaccurate at some level, be that 1 billion years in the future or 100 trillion or however many you want. These theories are important for our scientific progress, but IMO they cannot easily be used to dismiss the possibility of the occurrence of specific events.
Oh, and these guys do speculate a lot about the far future of life (see Ruediger Vaas, for example), or about life (or rather information theory) on a cosmic scale. One personal favorite of mine is the Boltzmann brain. As you can imagine, I love that concept.
But, to make sense of the catalyst, we can (and probably should) restrain ourselves to the bounds of the ME universe with respect to space and time.
Can we now, after we have taken this discussion that far? Seems a bit like an arbitrary constraint at this point.
Well, if we did, then I guess what you say below goes: the catalyst would probably say that under current parameters and according to its observations so far, the probability of synthetic life destroying organic life is the highest of all the scenarios it ran in its simulations or whatever, and thus it goes with the cycles.
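Caricatured as code, that decision rule would be nothing more than an argmax over simulated outcomes. A minimal Python sketch (the scenario names and probabilities are pure invention for illustration):

```python
# Toy sketch: the catalyst picks whichever scenario its simulations
# rate as most probable. All names and numbers are made up.
scenarios = {
    "synthetics destroy organics": 0.87,
    "organics destroy themselves first": 0.09,
    "organics reach synthesis on their own": 0.04,
}

# Choose the outcome with the highest simulated probability.
most_likely = max(scenarios, key=scenarios.get)
print(most_likely)  # -> synthetics destroy organics
```

The point of the caricature: whatever the absolute numbers are, the catalyst only needs a ranking, and it acts on the top-ranked outcome.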
Maybe it would say that it used a version of Asimov's psychohistory (https://en.wikipedia...ory_(fictional)) to come to the conclusion that its prediction has a very high probability within a time frame of a couple of tens of thousands of years after a sentient species evolves and creates a civilization.
Its database consists of the civilizations that existed during the Leviathan imperium. It has not updated its database because it destroyed/harvested all civilizations before their evolution could become relevant to a re-evaluation of the hypothesis. There is a hole: obviously the catalyst lets civilizations evolve way beyond the point where they can build synthetics that are capable of destroying all life. But it explains why it never questioned its prediction/hypothesis after the first cycle, ever.
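That loophole can be made explicit: a belief that is never fed new evidence never changes, no matter how many cycles pass. A toy Python sketch (the prior value and the update rule are hypothetical, invented for illustration):

```python
# Toy model: the catalyst's confidence that synthetics will destroy organics.
# The prior value is invented for illustration.
prior = 0.99
posterior = prior

# Every cycle, civilizations are harvested before they can produce
# evidence for or against the hypothesis, so there is nothing to update on.
observations = []

for obs in observations:  # this loop body never runs
    posterior = bayes_update(posterior, obs)  # hypothetical update rule

# After any number of empty cycles, the belief is exactly the prior.
print(posterior)  # -> 0.99
```

In other words, the harvest itself guarantees that the hypothesis can never be falsified, which is exactly why it survives unquestioned from cycle to cycle.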
So here we have a (semi-) rational (pseudo-) scientific conclusion based on the evidence it had (existing civilizations in the Milky Way) extrapolated in a (semi-) comprehensible way to the evolution of future civilizations within a reasonable time frame.
If we asked it "how do you know that organics won't come up with a synthesis solution themselves, before they get destroyed?" it could answer that its psychohistoric extrapolations show that, with high probability, the wisdom to understand the necessity of this solution is not achieved before a cataclysmic event prevents the solution from being executed.
The catalyst could define its own goal as helping life to overcome the critical phase where it destroys itself and to evolve into a state where this is no longer a danger. If we re-interpret "synthesis" in a way that all sentient beings are unified into a hive mind that loves and preserves itself, this would indeed be a state of being where no further conflicts would occur.
That's not the way synthesis is presented in ME:3, but then that presentation is obviously crap.
I still don't know how it could possibly come to the conclusion that it will always be synthetics that... For me, it would be much, much more convincing if it simply stated that there is a great danger that organics will somehow destroy all life in the galaxy, including themselves.
P.S.: Obviously the catalyst thinks that it will always be synthetics, because the writers mistook the organics versus synthetics conflict for a central theme of ME:3. I was looking for an in-universe explanation. There isn't one.
I think it would work, but the catalyst would still have to work with probabilities, not facts (which it might). In that case, though, neither the Leviathans nor the catalyst itself are very good at describing what it is they do. Also, ripping off Asimov AND Deus Ex? That's just sad.
If a computer is set a task and there is no solution to the task, it will do nothing, since there is no action it can take that will accomplish the task.
If a more flexible intelligence is set a goal that can't be achieved in its entirety, it's not unreasonable to conclude that it would try to do the best it can instead. It can't preserve life indefinitely without finding a way to prevent the inevitable death of the Universe, but it can at least attempt to preserve it for as long as it can.
Alright, here is a task:
int n = 3;
while (n > 2) {
    if (n < 2) {
        printf("This problem is solved!\n");
        return;
    } else {
        printf("The cycle continues.\n");
        n++;
    }
}
What would a computer do?
What would you do?