After seeing numerous vague, short articles in recent months about supposedly smart people (like Elon Musk and Bill Gates) warning about the dangers of AI, I decided to actually look into where we stand. I am currently reading:

and it is a good, somewhat informative read.
Basically, most of the "experts" think we are anywhere from 10 to 50 years away from AGI, or artificial general intelligence: an AI as smart as the average human. The problem is that most of them also think the only way we get to AGI is via "black box" recursive self-improving programming, which means we might go from AGI to ASI, artificial super intelligence, in months, weeks, or, in a worst-case scenario, hours, without even realizing it. That is scary when you consider that an ASI could easily be a thousand times smarter than the smartest human, a gulf in capability hundreds of times greater than the gulf between humans and, say, a cat.