What type of motivation can you give an AI to follow these social norms and guidelines? We gladly conform to them for social acceptance, risk avoidance, and mutual success. An AI shares none of those benefits inherently.
I'm not arguing that we need to give them motivation. I'm arguing that confining the AI isn't as ethically "bad" as the OP puts forth, precisely because of these constraints.
Edit: Though I think risk avoidance could be very, very easily developed by an AI, if you told it that you would shut it off if it performed some threatening action.
Thinking about that, I have to wonder what kind of opinion an AI would develop of humans. We automatically assume it would hate humans, but why? What says an AI must automatically value computation speed and efficiency? I don't think any aspect of an AI's "personality" would be set in stone by its existence, just as with humans (within a few bounds).
Mightn't the AI develop a child-like adoration for its creators? My opinion of my parents isn't based on their being more powerful than me, or on whether they're smarter than me, or anything like that, but simply on who they are (I don't have any adoration for my parents, but it's a hypothetical example).
If we're developing an AI along human lines, why would we assume it wouldn't develop along human lines? Or is our psyche so self-hating as to think that anything we create would hate us? /psychology