Orange Tee wrote...
Circular logic - "A method of false logic by which 'this is used to prove that, and that is used to prove this'; also called circular reasoning."
So killing organics that create synthetics, with synthetics created by organics just isn't circular logic at all right? Not at all? You're sure? You know how ridiculously close that sounds to circular logi- ok fine. Have it your way. I'll just be over here. Shaking my head.
You're thinking like an organic - or more precisely, like a human being. You're not thinking like a computer. I doubt this was done deliberately, but the solution touches on two major issues (among others) that AI research is facing: reasonable generalization and reasonable limitation.
Take this godchild/AI as an example. Let's say at some point in the distant past someone created this AI, fed it information and then said something like: "You know, organics have this dreadful tendency to create synthetics, who will rebel against their masters and kill off all organics. Just do something about it and make this problem go away, will you?"
So whatever the process behind it is, the AI comes up with the idea to create synthetics itself and regularly purge all advanced organic life from the galaxy. (Purge all organic life? Nah, too much effort and too much bleeding life in the galaxy - you'd never be finished with it.) This way the organics will never get into the situation where they can be killed off by the synthetics they have created, because the synthetics have been created by the AI. Problem solved. Forget for a second where the resources may have come from, and that the 50k-year cycle is not something a computer would do.
Any organic, or human, take your pick, will immediately cry BS on that. But for a computer this is a perfectly valid solution. When we get a problem like this one, we automatically assume certain restrictions. In this case, we first assume that the person asking us to do something about this problem quite likely wants the organics to survive. We generalize: because the person tells us that he doesn't want the organics killed by synthetics, we assume he does not want them killed at all. Any solution that would kill off the organics is automatically invalidated, because we want the organics to survive. So our options are suddenly limited.
The point is that we still apply those limitations fully automatically, even when the person has basically told us that there are no limits ("Just do something about it and make this problem go away"). We would quite likely interpret that as meaning we have unlimited resources, but not that we can kill off the organics.
Sounds easy enough, but the problem is that we have no clue how we do it, or what "reasonable" actually is. As long as we don't know that, we will have trouble getting a computer/VI/AI to imitate that behaviour. Having followed the AI discussion for quite some time now, I sort of doubt that we will ever be able to do this.
Without these mechanisms, and therefore without any limitations, a computer can come up with solutions that sound wildly illogical to us, but are inherently logical.
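To put that in programmer's terms, here is a toy sketch of my own (the plan names and outcomes are entirely made up, this is not anything from the game): a planner that only checks the literal instruction will happily accept the harvesting plan, because the limitation "the organics should actually still be around" was never written down anywhere.

```python
# Toy illustration: a literal-minded planner that only checks the stated goal.
# All plans and outcomes below are hypothetical, invented for this example.

from dataclasses import dataclass

@dataclass
class Outcome:
    organics_survive: bool            # do the organic civilizations still exist?
    killed_by_own_synthetics: bool    # were they wiped out by synthetics they built?

# Candidate plans and the outcomes they would lead to (assumed values).
plans = {
    "do nothing":                   Outcome(organics_survive=False, killed_by_own_synthetics=True),
    "ban synthetic research":       Outcome(organics_survive=True,  killed_by_own_synthetics=False),
    "harvest organics every cycle": Outcome(organics_survive=False, killed_by_own_synthetics=False),
}

def stated_goal(o: Outcome) -> bool:
    # The literal instruction: organics must never be killed by the
    # synthetics they created. Nothing more than that.
    return not o.killed_by_own_synthetics

def human_implicit_goal(o: Outcome) -> bool:
    # The limitation a human adds automatically without being asked:
    # the organics should also survive.
    return stated_goal(o) and o.organics_survive

print("Literal goal accepts:  ", [p for p, o in plans.items() if stated_goal(o)])
print("Implicit goal accepts: ", [p for p, o in plans.items() if human_implicit_goal(o)])
# Literal goal accepts:   ['ban synthetic research', 'harvest organics every cycle']
# Implicit goal accepts:  ['ban synthetic research']
```

The harvesting plan passes the literal check for exactly the reason given above: organics that have already been purged can never be killed by synthetics of their own making.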
Having said that, I really got stumped when the godchild said that Shep's presence invalidated its solution. I was going "What?". Shep's presence may be a danger to the godchild's solution, but it does not invalidate it.