Data and EDI were both smart enough to find and override any kill switches, though.
I tend to look at these things the other way round. The atomic arms race gave us a pretty good shot at wiping humanity out entirely, and yet when the Cuban Missile Crisis rolled around, the people in power at the time suddenly scrambled to actually get along and work something out, because they realized that humanity was kind of cool and worth keeping. Yet people still get killed today around the world because someone had a medieval ideology and a simple weapon.
Technology generally provides us with tools for solving problems, and our biggest problems are people being stupid, not technology going wrong. I guess AI messes with that, because you could have a stupid person effectively being the technology going wrong, but...
...this is why I hate hypotheticals.
You're such an optimist. I wouldn't have guessed.
I think you're right in that it won't ever be a problem of cataclysmic proportions. Even if we do ever manage to somehow create a fully sentient, self-aware, adaptive form of artificial intelligence (which I don't think we will), it probably won't actually be as fixated on us as fiction usually depicts.




