Before I start this topic, I hope everyone can approach this thread in a mature manner and not derail it after a few pages. It's a fascinating topic, and one I'm sure will eventually become the subject of serious debate.
So I got to thinking, as you do: if one day we were to create a fully aware, conscious artificial intelligence, would it be alive? Would you consider it a living thing, or simply an advanced machine? And for those of you who think it would be alive, what about rights? People often say that AIs could be a good thing so long as we have plenty of safeguards and off switches in place, so as to remove the risks.
But is that fair? Should any person, be they organic or synthetic, have a button that simply turns them off because they might pose a risk to others? The default state of most forms of life is self-preservation, and the same could be said for synthetic ones. This has often been approached as a topic in science fiction, but like I said, it has yet to become a major topic of discussion in the real world.
Me, I don't really know what to think about it all at the moment. I would probably consider a fully aware, conscious machine to be a form of life, but not as we currently know it. Beyond that, I'm not sure.
So what about you guys? Do you think AI research should be banned outright, so that there is no risk? Or do you think we should embrace it and reap the benefits it may bring?