Guest Editor's Introduction • Tiejun Huang • May 2016
Translations by Osvaldo Perez and Tiejun Huang
The IEEE Computer Society’s 2022 Report, released through the Computing Now site in 2014, presents insights from tech leaders on what our world might look like in 2022. Among its findings, the report predicts an integrated network of smart devices, which it calls “seamless intelligence,” that will be able to directly interface with our brain waves. That vision raises obvious follow-up questions: what computing technology is needed to make it a reality, and how can we create machines with general intelligence similar to that of humans?
When it comes to high-speed calculation, logical reasoning, and precise memorization, modern computers are clearly more powerful than humans, but they still rely on humans for the knowledge they use. AI is no exception. In traditional AI, the “intelligence” is explicitly expressed as knowledge bases and rules that the computer can process. In cutting-edge AI, the machine “learns” knowledge from a very large database using a learning model designed by humans. This pairing of a human-designed intelligence model with big data captured from the world is widely viewed as an effective paradigm for creating more powerful intelligence. The new paradigm is igniting AI fervor worldwide, but many challenges remain.
The May 2016 issue of Computing Now features six articles from the IEEE Computer Society Digital Library (CSDL) that were published in the past year. Although many other articles addressed specific technical issues in this field, I selected these articles because they offer broad overviews or inspire discussion on the exciting topic of brain-like computing.
Brain-like computing is not a well-defined term. For many people, “brain-like” means “does what the brain can do” (traditional AI) or “does as the brain does” (machine-learning AI). But, in fact, we don’t always know what the brain can do or how it works, because we don’t fully understand ourselves.
Unlike traditional and machine-learning approaches, which attempt to copy brain function, neuromorphic computing mimics the brain’s structure. The approach, which builds an electronic brain from neuromorphic devices, rests on two premises: that parsing the brain’s physiological structure (neurons, synapses, neural circuits, and the functional regions of the cortex) is easier than unveiling the principles of intelligence, and that a similar structure could generate similar functions. Whether we call it brain-like, brain-inspired, or brain-mimicking, this field could fundamentally change current computing models and even our understanding of intelligence.
In this Issue
This month’s theme begins with two articles from the December 2015 “Rebooting Computing” issue of Computer, investigating the possibility of fundamentally changing the current computing model through neuromorphic computing. In “Architecting for Causal Intelligence at Nanoscale,” Santosh Khasanvis and his colleagues outline the advantages of randomness in conjunction with new neuromorphic programming methods. In “Ohmic Weave: Memristor-Based Threshold Gate Networks,” David J. Mountain and his colleagues describe how today’s Turing-derived machines could incorporate neuromorphic capabilities.
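A threshold gate of the kind Mountain and his colleagues consider computes a weighted sum of its binary inputs and fires when the sum reaches a threshold; in their design, a memristor crossbar supplies the programmable weights in hardware. The following is a minimal software model of the gate itself, with illustrative names and parameters, not the circuit from the article:

```python
def threshold_gate(inputs, weights, threshold):
    """Fire (output 1) when the weighted input sum reaches the threshold.

    In a memristor implementation, the weighted sum would be realized as
    summed currents through programmable conductances; here it is plain
    arithmetic. All parameter choices are illustrative.
    """
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

# With unit weights and a threshold of 2, a 3-input gate votes by majority.
def majority(a, b, c):
    return threshold_gate([a, b, c], [1, 1, 1], 2)
```

A single gate of this form already covers useful logic: adjusting only the weights and threshold yields AND, OR, or majority behavior without changing the structure.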
The EU-funded Human Brain Project now offers access to two recently completed, complementary neuromorphic machines for modeling neural microcircuits and applying brain-like principles in machine learning and cognitive computing: Karlheinz Meier’s BrainScaleS system, based on physical neuromorphic devices, and Steve Furber’s SpiNNaker system, built on the Advanced RISC Machine (ARM) architecture. In a follow-up to two previous papers on the SpiNNaker hardware build, Andrew D. Brown and his colleagues describe the (rather unusual) low-level foundation software developed to support the machine’s operation in “SpiNNaker — Programming Model.”
Machine learning based on neural networks (especially deep learning) is showing success across a broad range of applications. In “A High-Throughput Neural Network Accelerator,” Tianshi Chen and his colleagues describe how they designed an accelerator architecture for large-scale neural networks by minimizing memory transfers and performing them as efficiently as possible.
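The key idea behind minimizing memory transfers is to stage data in small on-chip buffers so that each weight crosses the off-chip memory boundary as few times as possible. The following is a toy sketch of row tiling for a single layer, a generic illustration of the principle rather than the buffering scheme from the article:

```python
import numpy as np

def layer_tiled(W, x, tile=4):
    """Compute y = W @ x one tile of rows at a time.

    Processing a tile of weight rows against the cached input vector
    models how an accelerator can hold a small weight block and the
    inputs on chip, so each weight is fetched from off-chip memory only
    once. The tile size and layout here are illustrative.
    """
    n_out = W.shape[0]
    y = np.zeros(n_out)
    for r in range(0, n_out, tile):
        y[r:r + tile] = W[r:r + tile] @ x  # one tile of partial results
    return y
```

The result is identical to the untiled product; only the order of memory accesses changes, which is exactly what such accelerators exploit.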
"HFirst: A Temporal Approach to Object Recognition,” by Garrick Orchard and his colleagues, introduces a spiking hierarchical model for object recognition that utilizes the precise timing information inherently present in the output of biologically inspired asynchronous address event representation (AER) vision sensors. This approach to visual computation represents a major paradigm shift from conventional clocked systems and could find application in other sensory modalities and computational tasks.
In the final article, we turn our attention to machine intelligence in robots. For a robot, the body is its means of discovering the real world, with all its uncertain and unpredictable elements, which makes embodiment important to the emergence of intelligence. In “A Robot Learns How to Entice an Insect,” Ji-Hwan Son and Hyo-Sung Ahn demonstrate how a robot, using a camera to recognize a biological insect and its heading angle, can spread a specific odor to entice the insect along a given trajectory.
Meier and Furber introduced the BrainScaleS and SpiNNaker systems at the Human Brain Project’s first Neuromorphic Computing Application Workshop in March 2016; video of their presentations is available online.
The Hierarchical Temporal Memory (HTM) principle, which Jeff Hawkins proposed in his well-known book On Intelligence, remains one of the most promising models of how the brain solves problems. Hawkins cofounded an innovative company called Numenta to build a suite of software operating on HTM principles. Available through the NuPIC open-source community, the software shows that a computing approach based on biological learning principles could make possible a new generation of capabilities. The company describes its foundations and exciting work in a pair of videos.
If you’re preparing a paper on brain-like computing, you’re also welcome to submit it to the special issue on Neuromorphic Computing and Cognitive Systems, to appear in IEEE Transactions on Cognitive and Developmental Systems.
From changing the classic computing model to revolutionizing robotics, brain-like computing is taking center stage in computing. We hope this issue of Computing Now inspires you to consider AI an essential part of the future of computing technology. Please share your insights, ideas, and experiences below.
T. Huang, “Brain-Like Computing,” Computing Now, vol. 9, no. 5, May 2016, IEEE Computer Society [online]; https://www.computer.org/web/computingnow/archive/brain-like-computing-may-2016.
Tiejun Huang is a professor with the School of Electrical Engineering and Computer Science and the chair of the Department of Computer Science at Peking University, China. He was awarded the Distinguished Professor of the Chang Jiang Scholars Program by the Ministry of Education, as well as the Distinguished Young Scholar award by the Natural Science Foundation of China. Huang is a member of the Computing Now advisory board, serving as the regional representative for China and overseeing the Chinese translations. Contact him at email@example.com.