CaptainZaysh wrote...
78stonewobble wrote...
Can you give me a rational explanation for an AI wanting to waste what amounts to trillions upon trillions of AI lives, or life-years' worth of energy and materials, on such a project?
Yes: by preventing advanced organic civilisation from recurring, it permanently removes the risk of being attacked by an advanced organic civilisation.
Also, the point of a Von Neumann probe is that it is self-replicating. Once you have built one, it then builds its own descendants using resources it finds. You only have to build the first generation; all the others build themselves. So the cost of the scheme is not nearly so high as you think.
In addition, the Von Neumann probes can serve other functions (such as surveillance) so it will probably send them out anyway. Why not arm them?
So: three rational reasons. Now, before you argue that the superintelligent AI would definitely not do that, I want to ask you a question in return. Why are you so certain that you can accurately predict the actions and behaviour of an intelligence vastly superior to your own? Bear in mind that you are staking the life of everybody else in the galaxy on your ability to say with authority that you can do this. What makes you qualified to do so?
Note: We are not talking about the Reapers here. We are talking about the creation of a hypothetical advanced AI that the Reapers exist and work to prevent.
They specifically state all organics, rather than just advanced organic civilisations, so a lot of non-threats get wiped out as well.
If the supposed AI can become impossibly powerful (unbeatable), there is no reason to eradicate advanced organic civilisations, since they are simply no threat. This assumes that an AI can develop to higher technological levels than organic beings ever could.
Presumably this AI would be so powerful that it would in itself be a deterrent against any attack from advanced organic races.
If the supposed AI can only become very powerful (but beatable), there is even more reason to instead develop a somewhat peaceful relationship with the various advanced organic societies, rather than engaging in a war it might not win. This kind of invalidates the Reaper/Catalyst statements, though.
This is like the Geth. They are certainly powerful, but with the threat to their survival gone, there is no need to waste resources on a conflict.
Von Neumann probes do consume resources, and in time huge amounts of resources (the sketch below puts some rough numbers on this). They are obviously, as you point out, the way to go if you really want to clean the universe of every cell and piece of self-replicating DNA.
However... if you want something else, like a giant AI video arcade (or whatever an AI would want?), you now have the galaxy's resources available, minus all the resources that went into a gazillion Von Neumann probes.
Indeed, they might even spark conflict. Resources are one of the few things even rational beings will fight over, for the sake of survival.
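To put rough numbers on that resource drain, here is a minimal back-of-envelope sketch in Python. The replication factor and per-probe mass are made-up assumptions purely for illustration; the point is just that the builder pays only for generation zero, while the material the probe population pulls out of the galaxy grows geometrically:

# Assumed placeholder values, not anything from the game.
CHILDREN_PER_PROBE = 2    # each probe builds this many copies per generation
MASS_PER_PROBE_KG = 1e6   # material cost of building one probe

def cumulative_probes(generations):
    """Total probes ever built once the given generation has been produced."""
    return sum(CHILDREN_PER_PROBE ** g for g in range(generations + 1))

for gen in (0, 10, 20, 40):
    total = cumulative_probes(gen)
    print(f"generation {gen}: {total:.3e} probes built, "
          f"{total * MASS_PER_PROBE_KG:.3e} kg of material consumed")

The builder's own cost is only the generation-0 probe; everything after that is paid for by the galaxy's raw material, which is exactly why the scheme is cheap for the AI but expensive in total resources.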
You are right that I have no experience that comes even close to describing how an incredibly powerful AI would work, think, prioritise, feel(?), what it would be happy about, or whatnot.
The only thing I CAN do is look at the experiences I do have available and try to extrapolate, and that means looking at humans. Faulty as we may be, we do have a little logic and rationality (which is normally associated with AIs), and we do have feelings (which are not normally associated with AI). However, as you say, we don't know, and thus both might be present in an advanced enough AI.
In general, among humans (today), we don't see:
Very weak people/nations throwing themselves at incredibly strong (relatively) people/nations to the point of annihilation. E.g. Luxembourg has never declared war on the United States.
Incredibly strong people/nations sacrificing resources to wipe out someone who is not a threat. E.g. the United States has never wiped out Luxembourg.
There are exceptions, especially throughout our earlier history, but in hindsight we tend to view these people as having been faulty in some way: they were crazy, irrational, or had some kind of emotionally based value system that most people today reject.
Reasonably rational, well-balanced, and intelligent people don't run around killing other people, or wipe out all the ants in the world because they had ants in the kitchen.
An analogue, with the fate of everyone in the galaxy likewise at stake:
The lives of you, your family, your friends, or heck, your whole nation depend on your next-door neighbour not being a homicidal nutbag building a nuclear bomb in his spare time.
So why haven't you killed your neighbour? Or all neighbours everywhere, since any of them could be one too?
The somewhat rational reasons not to do this, which might be extrapolated to an AI, would be (a toy calculation after this list puts numbers on the first one):
The odds are extremely low.
Most neighbours are nice people and might even be a positive presence in your life.
You could go to prison or worse, which is basically an end to your life.
Or it would make your own life worthless, since rather than killing, you would much rather have spent the time and energy playing ME3. Or, in the Geth's case, playing ME3 multiplayer in their spanking new Dyson sphere.
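To put toy numbers on that first reason, here is a minimal expected-cost sketch in Python. Every figure is an assumed placeholder, not a real statistic; it only shows the shape of the argument:

# All values are assumed placeholders, purely for illustration.
P_NUTBAG = 1e-7            # assumed chance any given neighbour is a homicidal threat
LOSS_IF_NUTBAG = 1e6       # assumed harm if the threat actually materialises
COST_OF_PURGE = 1e4        # assumed cost (prison, wasted effort) of striking first
BENEFIT_OF_NEIGHBOUR = 50  # assumed value of a nice neighbour (reason two)

# Expected cost of simply living with a neighbour vs. the certain cost of a purge.
expected_cost_wait = P_NUTBAG * LOSS_IF_NUTBAG - BENEFIT_OF_NEIGHBOUR
print(f"expected cost of tolerating a neighbour: {expected_cost_wait:.2f}")
print(f"certain cost of the pre-emptive purge:   {COST_OF_PURGE:.2f}")
# With odds this low, tolerating neighbours is an expected net gain,
# while the purge carries a large, certain cost.

The same structure would apply to an AI weighing galaxy-wide extermination against coexistence: the lower the odds of a genuine threat, the worse the pre-emptive option looks.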
Edited by 78stonewobble, 18 January 2013 - 11:30.