"... it is hard for me to look up from the evolutionary computer models I use to develop AI, to think about how the innocent virtual creatures on my screen might become the monsters of the future. Might I become 'the destroyer of worlds,' as Oppenheimer lamented after spearheading the construction of the first nuclear bomb?"
Writing in The Conversation, Hintze describes his research and the top four fears he has about artificial intelligence and what AI will become:
"As researchers, and as a society, we have not yet come up with a clear idea of what we want AI to do or become. In part, of course, this is because we don’t yet know what it’s capable of. But we do need to decide what the desired outcome of advanced AI is."
And, he writes, we must find the answer to this question: "Why should a superintelligence keep us around?"
For the sake of humans, let's hope there's a good answer to that question.
[RELATED: Futurist Steve Brown presenting at a SecureWorld conference on AI and the Rise of the Robots]