Yesterday, CircleBack’s chief scientist Dr. Tim Oates was featured on TechCrunch in a discussion of why we shouldn’t fear artificial intelligence. Dr. Oates, who holds a PhD in computer science with a focus on machine learning and artificial intelligence, explained at length the problems with the celebrity intellectual “outcry” against AI, namely the assumption that artificial intelligence could step outside its initial programming and, in turn, become the “demon out of the box” that Elon Musk has warned about.
For Dr. Oates, and for everyone at CircleBack, the problem with this “doomsday” AI scenario is that four highly improbable things would have to occur in unison for AI to be as catastrophic as the anti-AI movement would have us believe. We would have to see:
- AI develop an “I,” a sense of self distinct from others.
- AI possessing the intellectual capacity to step outside the boundaries of its intended purpose and programming to form radically new goals for itself (the “I”).
- AI choosing, from a possibly enormous set of effective plans for achieving those goals, one that involves widespread death and destruction.
- AI having access to resources on a global scale to carry out the plan.
Any one of these, Dr. Oates argues, is highly unlikely. But for them to occur simultaneously? Nearly impossible.
“An AI’s ‘thinking’ occurs in one of two ways,” Dr. Oates explains, “either narrowly (i.e., about one thing) or generally. Narrow AIs like Deep Blue are incredible at, say, playing chess but can be beaten by toddlers at checkers, while general AIs, like the ones developed for the AAAI general game playing challenge, can learn a wide variety of things, but very poorly.”
This is the problem with this doomsday thinking, Dr. Oates argues: we have no reason to believe that an AI could ever gain a “superintelligence” outside its programming. And if it did?
“Would this superhuman intelligence inherently go nuclear, or would it likely just slack off a little at work or, in extreme cases, compose rap music in Latin?” Dr. Oates writes in his article. “In a world filled with a nearly infinite number of things a thinking entity can do to placate itself, it’s unlikely ‘destruction of humanity’ will top any AI’s list.”
Read the whole article at TechCrunch, and join us in welcoming a new age of thinking on artificial intelligence that isn’t strangled by Hollywood fantasy!