Elon Musk, founder and CEO of SpaceX, and the famed theoretical physicist Stephen Hawking have warned that artificial intelligence is "our biggest existential threat"—that unchecked development of AI could potentially annihilate humanity. Plenty of movies have explored this scenario, including The Terminator, Blade Runner, I, Robot, and the more recent Ex Machina. The question is whether computers, which can perform calculations and assemble things with far greater speed and accuracy than humans, are also capable of developing greater intelligence than humans.
