Elon Musk, founder and CEO of SpaceX, has called artificial intelligence "our biggest existential threat," and famed theoretical physicist Stephen Hawking likewise warned that unchecked development of AI could spell the end of humanity. Plenty of movies have been made about such an outcome—The Terminator, Blade Runner, and I, Robot, as well as the more recent Ex Machina. The question is whether computers, which already perform calculations and assemble things with far greater speed and accuracy than humans, are also capable of developing greater intelligence than humans.
Read more about the four hierarchical types of AI and how likely they are to achieve superintelligence (login may be required for full text).
About Lori Cameron
Lori Cameron is a senior writer for the IEEE Computer Society and currently writes regular features for Computer magazine, Computing Edge, and the Computing Now and Magazine Roundup websites. Contact her at firstname.lastname@example.org. Follow her on LinkedIn.