Cyborg Intelligence: Recent Progress and Future Directions

Zhaohui Wu, Zhejiang University
Yongdi Zhou, East China Normal University
Zhongzhi Shi, Chinese Academy of Sciences
Changshui Zhang, Tsinghua University
Guanglin Li, Chinese Academy of Sciences
Xiaoxiang Zheng, Zhejiang University
Nenggan Zheng, Zhejiang University
Gang Pan, Zhejiang University


Abstract—Combining biological and artificial intelligence is a promising way to construct a novel intelligent modality, which has been proposed as cyborg intelligence. Its hierarchical conceptual framework is based on the interaction and combination of comparable components of biological cognitive units and intelligent computing units. The authors extend their previous conceptual framework and focus on sensorimotor circuits to explore the representation and integration of sensation. They then present a cognitive computing model for brain-computer integration and design efficient machine-learning algorithms for neural signal decoding. They also propose biological reconstruction methods for sensorimotor circuits that not only restore but enhance functionality with AI. They develop a series of demonstration systems to validate the conceptual framework of cyborg intelligence, which shows promise for theoretical research, engineering techniques, and clinical applications. Finally, the authors summarize the latest research trends and challenges, which they believe will open new scientific frontiers in cyborg intelligence.

Keywords—cyborg intelligence; computational framework; multimodal sensory representation; cognitive model; machine learning; biological functionality reconstruction; brain-machine integration


The interconnection and tight integration of biological information processing units and man-made computing components provide extensive information exchange between biological organisms and computing devices, which has spawned cyborg intelligence.1 Artificial and biological intelligence are beginning to share common territory in providing sensation, perception, cognition, and locomotion. Many remarkable results have been achieved in various areas, including animals as sensors2 and actuators,3 mind-controlled machines,4–6 neurochips,7,8 intelligent prostheses,9 and neural rehabilitation.10,11 Recently, the open Cyborg Olympics were held to drive the realistic combination of biological and artificial intelligence and help disabled people better acclimate to daily life.12

Typical and effective approaches to implementing engineering systems and exploring research problems in cyborg intelligence are based on brain-computer (or neural-computer) integration methods. Using these methods, computers can record neural activity at multiple levels and scales, decode the brain's representation of various functions, and precisely control artificial or biological actuators. In recent decades, there have been continuous scientific breakthroughs along the directed information pathway from the brain to computers. Meanwhile, beyond ordinary sensory feedback such as visual, auditory, tactile, and olfactory input, computers can now encode neural feedback as optical or electrical stimuli to modulate neural circuits directly, forming the directed information pathway from the computer to the brain. Together, these bidirectional information pathways make it possible to investigate the key problems in cyborg intelligence.

Although the related research challenges touch on the most fundamental problems in AI, physiology, and psychology, we focus here on the computational architecture of cyborg intelligence, especially the sensorimotor processes in brain-computer integration systems. We first proposed a conceptual framework and constructed a cognitive computational model for brain-computer integration systems. To understand neural representation in the brain, we also explored the encoding and decoding principles underlying the sensorimotor loop, and then implemented novel AI algorithms on the computing side to enhance the sensation and motor-control functions of the overall brain-computer integration system. Building on these theoretical and technological results, we implemented rat cyborgs, monkey mind control, and a rehabilitation demonstration as evaluation systems. Extensive experiments show that the concept and computational architecture of cyborg intelligence are promising for enhancing, repairing, or extending the intelligent capacity of both biological and computing units.

Conceptual Framework of Cyborg Intelligence: A Revisit

From a systems perspective, a critical problem in cyborg intelligence research is how to merge the brain with the computer at various scales. On the basis of the similarity between the brain's functional partitions and their computing counterparts, we presented a hierarchical conceptual framework for cyborg intelligence.1,13,14 The biological and computing parts are interconnected through information exchange and cooperate to generate sensation, perception, learning, memory, emotion, and other cognitive functions. We argue that this involves two key aspects: first, cooperation between biological and AI units yields the functional units of cyborg intelligence, and second, the final form and paradigm of cyborg intelligence is determined not only by this interconnection and cooperation but also by the merging of biological and AI units.

For the sensorimotor process, our previous work abstracted the biological component of cyborg intelligence into three layers: perception and behavior, decision making, and memory and intention (see Figure 1). We likewise divided the AI functional units into three corresponding layers: sensor and actuator, task planning, and knowledge base and goal. We also defined two basic interaction and cooperation operations: homogeneous interaction (homoraction) and heterogeneous interaction (heteraction). The former represents information exchange and function calls occurring within a single biological or computing component, whereas the latter denotes operations between the functional units of the biological and computing parts. Homoraction thus models the relationships between units within the same part; a brain-computer integration system containing only a single part reduces to a biological body or a computing device with homoraction alone. Consequently, verifying the existence of heteraction is necessary for a system to count as cyborg intelligent.


Figure 1. Hierarchical conceptual framework for cyborg intelligence.1 Three layers are abstracted to describe the interconnection between the biological organisms and computing machines.
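
To make the two operations concrete, the following minimal Python sketch (our illustration, not an implementation from the cited work) models units as tagged agents and classifies each interaction as homoraction or heteraction by whether its endpoints belong to the same part; all class and function names are hypothetical:

from dataclasses import dataclass

@dataclass
class Unit:
    name: str   # e.g., "decision making" or "task planning"
    part: str   # "biological" or "computing"
    layer: str  # e.g., "perception/behavior" or "sensor/actuator"

@dataclass
class Interaction:
    src: Unit
    dst: Unit
    payload: object = None

    @property
    def kind(self) -> str:
        # Homoraction: both endpoints in the same part; heteraction:
        # across the biological and computing parts.
        return "homoraction" if self.src.part == self.dst.part else "heteraction"

def is_cyborg_system(interactions) -> bool:
    # A system with only one part has homoraction alone; at least
    # one heteraction is required for cyborg intelligence.
    return any(i.kind == "heteraction" for i in interactions)

# A decoded motor intention passed from the biological decision
# layer to the machine task-planning layer is a heteraction.
decision = Unit("decision making", "biological", "decision")
planner = Unit("task planning", "computing", "planning")
link = Interaction(decision, planner, payload="decoded intention")
assert link.kind == "heteraction" and is_cyborg_system([link])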

We believe that learning and memory units are fundamental for problem solving in the computational framework of cyborg intelligent systems. Biological learning paradigms (such as classical conditioning, operant learning, and insight learning) and learning rules (such as sequential and recursive patterns) are essential to generate adaptive behaviors, whereas AI algorithms enable computing devices to perform intelligent tasks. Cyborg intelligent systems interact with the environment to achieve better behavioral performance, which requires the biological and AI units to adapt to each other within the system and adjust their actions according to changing environmental situations, thus gaining enhanced learning, memory, and problem-solving capabilities.15,16 Neural plasticity and machine learning cooperate and merge to realize the cyborg intelligent system's learning capacity, usually in a semisupervised or unsupervised manner.

Cyborg Intelligence: Research Progress

Here, we summarize our team's recent advances in cyborg intelligence. We denote these advances by numbers in the conceptual framework of cyborg intelligence in Figure 2, roughly indicating their positions in the framework. The neural mechanisms of the primary somatosensory cortex and primary motor cortex are denoted as "1" in the figure and cross-modal sensation fusion as "2." The novel cognitive model for the machine parts is labeled "3." To bridge the biological and machine components, we present robust neural decoding algorithms (4) and encoding approaches (5). Furthermore, to enhance the capability and performance of the complete cyborg system, we explored both biological (6) and artificial (7) reconstruction methods, promoting the deep integration of neural circuits and computing components. On the basis of this computational architecture, we present three demonstration systems: sensation-augmented rat cyborgs (8), monkey hand-gesture decoding (9), and a mind-controlled rock-paper-scissors game (10).


Figure 2. Illustration of research progress within the framework of cyborg intelligence.

Multimodal Sensory Information Fusion and Representation

We explored the neural mechanisms underlying multimodal sensory associations by investigating how the large-scale sensorimotor network integrates multimodal sensory information and generates precise motor-control output (1 and 2 in Figure 2). We investigated causal relationships between this complicated network and sensation/perception using two behavioral paradigms (tactile-visual and tactile-tactile). We found evidence of a processing sequence in tactile-visual cross-modal associations and working memory, running from activation of the primary somatosensory cortex to activation of the posterior parietal cortex,17 and of a cooperative relationship between these two cortical areas in tactile-tactile unimodal working memory.18 Specific classes of neurons in the monkey dorsolateral prefrontal cortex (that is, sensation-coupled and motor-coupled neurons) were identified as playing a critical role in tactile-visual integration and decision making during working memory tasks.19 Additionally, comparative behavioral research coupled with functional magnetic resonance imaging (fMRI) of humans and monkeys suggested that the inferior frontal gyrus is likely responsible for performance in complicated rule learning.20

Cognitive Computational Model in Brain-Machine Collaboration

Cooperation between a computer and the brain extends across environment sensing, motivation deliberation, and motor planning. We proposed a cognitive computation model, awareness-belief-goal-planning, for brain-computer integration systems (3 in Figure 2).21,22 In this model, we describe the cooperative relationships and activities in cyborg intelligent systems using a multiagent methodology. The basic cognitive cycle is defined as perception-motivation-motor planning. Environmental situations detected by sensors, or decoded from neural signals of sensation circuits in biological organisms, are extracted to represent the current context state, producing awareness in the cognitive model. Motivation is created according to the cooperation needs between biological organisms and computing devices, and it in turn generates motor-planning decisions. As the behavior output by the agents and machines changes the outside world, feedback information is registered for entry into the next cognitive cycle.
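
The following Python sketch is a minimal illustration of one such awareness-belief-goal-planning cycle under simplifying assumptions; all function names and data structures are hypothetical rather than taken from the model's actual implementation:

def select_goal(beliefs):
    # Motivation: pick the highest-priority cooperation need.
    return max(beliefs.get("needs", ["idle"]), key=str)

def make_plan(goal, beliefs):
    # Motor planning: map the goal to a concrete command sequence.
    return [("move", goal)]

def cognitive_cycle(sense, decode, act, beliefs):
    # Perception: fuse artificial sensor readings with signals
    # decoded from biological sensation circuits (awareness).
    awareness = {**sense(), **decode()}
    beliefs.update(awareness)
    # Motivation and planning.
    goal = select_goal(beliefs)
    plan = make_plan(goal, beliefs)
    # Action changes the outside world; feedback is registered
    # for the next cognitive cycle.
    feedback = act(plan)
    beliefs.update(feedback)
    return beliefs

# Usage with stub sensing, decoding, and actuation callables.
beliefs = {"needs": ["reach-target"]}
cognitive_cycle(lambda: {"obstacle": False},
                lambda: {"intent": "reach"},
                lambda plan: {"executed": plan},
                beliefs)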

Machine Learning-Based Methods for Brain Information Codecs

The direct interconnection between biological organisms and computer systems relies on decoding and encoding information pathways to close interconnected loops at multiple levels and scales. From the biological parts to the computers, we proposed a multitask sparse learning model with a general iterative shrinkage and thresholding algorithm to decode neural signals in large-scale sensorimotor networks.23 This machine-learning approach (4 in Figure 2) exploits feature sharing among tasks and a carefully designed nonconvex regularization to address the technical challenges of neural decoding, such as individual differences, temporal variation, and high noise levels. The proposed method also demonstrated good generalization performance in computer vision and bioinformatics applications.
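
As a rough illustration of this family of methods, the sketch below implements a simplified, convex variant: multitask regression with a row-sparse (L2,1) regularizer solved by iterative shrinkage and thresholding. Our model uses nonconvex regularization, so this stands in for the idea of feature sharing across tasks rather than reproducing the published algorithm; all sizes and parameters here are synthetic assumptions.

import numpy as np

def soft_threshold_rows(W, tau):
    # Row-wise (L2,1) shrinkage: each feature (row) is kept or
    # dropped jointly across tasks, encouraging feature sharing.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W * np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))

def multitask_ista(X, Y, lam=1.0, iters=300):
    # X: (n_samples, n_features) neural features; Y: (n_samples,
    # n_tasks) per-task targets. Proximal gradient descent on
    # 0.5*||XW - Y||^2 + lam * sum over rows of ||W_row||_2.
    lr = 1.0 / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant
    W = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(iters):
        W = soft_threshold_rows(W - lr * (X.T @ (X @ W - Y)), lr * lam)
    return W

# Synthetic usage: 200 trials, 50 channels, 3 related decoding tasks
# sharing the same 5 informative channels.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
W_true = np.zeros((50, 3))
W_true[:5] = rng.standard_normal((5, 3))
Y = X @ W_true + 0.1 * rng.standard_normal((200, 3))
W_hat = multitask_ista(X, Y, lam=5.0)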

On the converse pathway, from computer to brain, microelectrical and optical stimulation protocols were developed to deliver virtual tactile sensation and virtual reward.24–26 Computers configure detailed parameters that encode this virtual sensation and reward information and deliver virtual feedback (for example, virtual thermal sensation) directly to the neural circuits (5 in Figure 2). The virtual reward encoding method makes it possible to develop various operant learning paradigms for constructing a series of brain-computer integration systems.16,25,27
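
As an illustration only, a stimulation command might be encoded as a small parameter set such as the following hypothetical Python structure; the fields, value ranges, and reward mapping are our assumptions, not the published protocol:

from dataclasses import dataclass

@dataclass(frozen=True)
class StimTrain:
    channel: int           # target electrode (e.g., in the reward area)
    amplitude_ua: float    # pulse amplitude, microamperes
    pulse_width_us: float  # per-phase pulse width, microseconds
    frequency_hz: float    # pulse rate within the train
    duration_ms: float     # total train length

def encode_reward(strength: float) -> StimTrain:
    # Map a normalized reward strength in [0, 1] to a stimulus
    # train; stronger virtual rewards get faster, longer trains.
    s = min(max(strength, 0.0), 1.0)
    return StimTrain(channel=1, amplitude_ua=50.0,
                     pulse_width_us=200.0,
                     frequency_hz=50.0 + 50.0 * s,
                     duration_ms=100.0 + 400.0 * s)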

Motor Functionality Reconstruction

We developed a novel surgical method, Targeted Nerve Functional Replacement (TNFR), to reconstruct neural and myoelectric signals associated with various upper-limb functions, including sensation, perception, and fine motor skills (6 in Figure 2).28 In the cyborg intelligence architecture, the sensors act as sensation devices that transmit encoded commands to a set of actuators and deliver feedback to the afferent neural circuits. This mechanism can serve as a replacement for biological actuators and enhance motor function (7 in Figure 2). For instance, an electrical stimulation array placed on the skin surface can induce various somatic sensations, including touch, pressure, warmth, and wetness, aiding the provision of sensory feedback.29 Furthermore, artificial arms and fingers can be controlled not only by surface electromyogram signals but also by neural signals.30,31 From a broader perspective, richer biological signals combined with efficient interaction methods could further enhance the control performance of extended artificial devices.
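
To illustrate the kind of pipeline that drives such devices, the following sketch extracts classic time-domain surface-EMG features (mean absolute value, zero crossings, waveform length) and trains a linear classifier; the window length, feature set, classifier choice, and synthetic data are our assumptions, not the cited systems' design:

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def emg_features(window):
    # window: (n_samples, n_channels) raw sEMG segment.
    mav = np.mean(np.abs(window), axis=0)                         # mean absolute value
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)    # zero crossings
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)          # waveform length
    return np.concatenate([mav, zc, wl])

# Synthetic usage: 100 windows of 200 samples x 8 channels,
# 25 windows for each of 4 gestures.
rng = np.random.default_rng(1)
X = np.stack([emg_features(rng.standard_normal((200, 8)) * (1 + g))
              for g in range(4) for _ in range(25)])
y = np.repeat(np.arange(4), 25)
clf = LinearDiscriminantAnalysis().fit(X, y)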

Sensation-Augmented Rat Cyborgs

As typical brain-computer integration systems of the "animal as actuator" type, rat cyborgs, or ratbots (8 in Figure 2), were developed to validate how animals can be enhanced by AI. Ratbots are based on a biological rat platform with electrodes implanted in specific brain areas, such as the somatosensory cortex and reward area.26 These electrodes are connected to a backpack fixed on the rat that delivers electrical stimuli to the rat's brain. For vision-enhanced ratbots, a minicamera is connected to the backpack to capture movement and the surrounding environment. A computer analyzes the video stream and generates stimulation parameters that are then sent wirelessly to the backpack stimulator to control the rat's navigation behavior by manipulating virtual sensation or reward. As Figure 3 shows, vision-enhanced ratbots can precisely find objects of human interest, such as human faces and arrow signs, identified by object-detection algorithms.25


Figure 3. Vision-augmented rat cyborg. The (a) ratbot can (b) recognize image indicators and navigate complicated environments guided by commands from its computing components.
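
A single step of this vision-to-stimulation loop might look like the following hypothetical Python sketch; the detector interface, position thresholds, and stimulation-site names are illustrative assumptions:

def ratbot_step(frame, detector, stimulator):
    # Detect "human-interesting" objects (e.g., faces, arrow signs)
    # in the current camera frame.
    detections = detector(frame)
    if not detections:
        return
    # Steer toward the most confident detection: stimulate the left
    # or right somatosensory cue site as a virtual touch, and the
    # reward site once the target is roughly centered.
    x_center = max(detections, key=lambda d: d["score"])["x"]
    if x_center < 0.4:
        stimulator.send(site="left_cue")
    elif x_center > 0.6:
        stimulator.send(site="right_cue")
    else:
        stimulator.send(site="reward")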

We also explored an automatic training method that uses computers alone to learn the training procedures and complete ratbot training.32 Furthermore, with a speech-recognition interface, ratbots can be given speech understanding capacity and navigate according to spoken human commands.27

Monkey Hand-Gesture Decoding

To verify the brain-to-computer neural information path, we implemented a "mind-reader" demonstration system in which a monkey brain controlled four gestures of a robotic hand: grabbing, hooking, holding, and pinching (9 in Figure 2; see also Figure 4). We implanted two 96-channel microelectrode arrays into the premotor and primary motor cortices to record the spiking activity of neuron populations. We identified the spatiotemporal representation of the different gestures and extracted signal features using a fuzzy k-nearest-neighbor algorithm that mapped the spikes to specific gestures with high accuracy.


Figure 4. Robotic hand control with neural decoding. We used motor control commands decoded directly from neural spikes of the primary motor cortex to manipulate the robotic hand to demonstrate four gestures.
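
For illustration, the sketch below shows a distance-weighted variant of fuzzy k-nearest-neighbor classification applied to firing-rate vectors; it is a crisp-label simplification of the fuzzy k-NN family rather than the exact decoder used in our experiments, and the synthetic data shapes are assumptions:

import numpy as np

def fuzzy_knn_predict(X_train, y_train, x, k=5, m=2.0, n_classes=4):
    # Distances from the query firing-rate vector to all training vectors.
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]
    # Fuzzy weights: closer neighbors contribute more (fuzzifier m).
    w = 1.0 / np.maximum(d[nn], 1e-12) ** (2.0 / (m - 1.0))
    # Class membership: distance-weighted votes of the k neighbors.
    u = np.zeros(n_classes)
    for j, idx in enumerate(nn):
        u[y_train[idx]] += w[j]
    return np.argmax(u / u.sum())

# Synthetic usage: 4 gestures, 40 trials each, 30-unit rate vectors.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(g, 1.0, (40, 30)) for g in range(4)])
y = np.repeat(np.arange(4), 40)
print(fuzzy_knn_predict(X, y, rng.normal(2, 1.0, 30)))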

Furthermore, we designed a center-out behavioral paradigm to investigate the neurons responsible for kinematic position, acceleration, and velocity. A dual sequential Monte Carlo adaptive point-process filtering algorithm exploited the time-variant neural tuning function from spiking events.33 Although the short-time tuning of individual neurons varied, it only marginally affected the stable performance of the brain-to-computer system, validating continuous trajectory control of a robotic arm by the monkey brain.
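
As a simplified stand-in for such point-process filtering, the following sketch decodes 2D velocity from binned spike counts with a basic sequential Monte Carlo (particle) filter under a log-linear Poisson tuning model; the model form, sizes, and noise levels are assumptions, not the published dual sequential algorithm:

import numpy as np

def decode_velocity(spikes, B, b0, n_particles=500, q=0.05):
    # spikes: (T, n_neurons) binned counts; B: (n_neurons, 2) tuning
    # weights; b0: (n_neurons,) baseline log firing rates.
    rng = np.random.default_rng(3)
    particles = np.zeros((n_particles, 2))
    estimates = []
    for y in spikes:
        # Propagate with a random-walk prior on velocity.
        particles += rng.normal(0.0, q, particles.shape)
        # Poisson log likelihood of the observed spike counts.
        log_rate = b0 + particles @ B.T            # (n_particles, n_neurons)
        loglik = (y * log_rate - np.exp(log_rate)).sum(axis=1)
        w = np.exp(loglik - loglik.max())
        w /= w.sum()
        estimates.append(w @ particles)            # posterior-mean velocity
        # Resample to avoid weight degeneracy.
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return np.array(estimates)

# Usage with synthetic tuning of 20 neurons over 100 time bins.
rng = np.random.default_rng(4)
B = rng.normal(0, 1, (20, 2))
b0 = np.full(20, np.log(5.0))
spikes = rng.poisson(5.0, (100, 20))
v_hat = decode_velocity(spikes, B, b0)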

Mind-Controlled Rock-Paper-Scissors Game

We also evaluated the brain-to-computer neural information path in a human mind-control setting (10 in Figure 2). We implemented an electrocorticogram (ECoG)-based brain-machine integration system to control a prosthetic hand, using brain signals from human participants who underwent invasive ECoG monitoring for seizure localization. After analyzing the spatiotemporal patterns of the ECoG signals, we extracted the power spectrum of the high-gamma frequency components (80 to 120 Hz) to decode rock-paper-scissors hand movements and then controlled the prosthetic hand to perform the gestures (see Figure 5).


Figure 5. Rock-paper-scissors game controlled by human electrocorticogram (ECoG) signals. (a) Hand gestures and their corresponding ECoG signals. (b) Live demonstration.
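
The core decoding step can be sketched as band-pass filtering to the high-gamma band, computing log band power per channel, and classifying the gesture; in the sketch below, the filter order, sampling rate, trial window, and classifier choice are assumptions rather than the system's actual parameters:

import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def high_gamma_power(ecog, fs=1000.0, band=(80.0, 120.0)):
    # ecog: (n_trials, n_samples, n_channels) raw recordings.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecog, axis=1)
    # Log band power per trial and channel.
    return np.log(np.mean(filtered ** 2, axis=1) + 1e-12)

# Synthetic usage: 60 trials (20 per gesture), 1 s at 1 kHz, 16 channels.
rng = np.random.default_rng(5)
ecog = rng.standard_normal((60, 1000, 16))
y = np.repeat(np.arange(3), 20)  # rock, paper, scissors
clf = LinearDiscriminantAnalysis().fit(high_gamma_power(ecog), y)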

By developing the demonstration systems discussed here, we verified the basic concept and a simplified computational architecture of cyborg intelligence. Integrating biological and machine intelligence into a unified mode of intelligence is a challenging but promising goal. We have identified three future directions for cyborg intelligence.

The first direction is to develop multimodal sensation integration mechanisms and computational models of the large-scale association cortex. For neural representation in large-scale sensorimotor networks, the causal and sequential relationships among association cortical areas remain mostly unexplored. How cross-modal sensations are integrated over temporal sequences in the association cortex, and what their exact role is in motor planning, are two of the most important scientific problems for sensorimotor research in cyborg intelligent systems. Network-based source imaging using electroencephalogram and fMRI promises efficient tools to help researchers collect finer evidence and identify network dynamics. From a holistic view of sensorimotor function, it is essential to build a complete circuit model that explains the information-processing pathway from cross-modal sensation integration to the association motor cortex. Such research would inspire new mechanisms and computing architectures for cognitive modeling. In terms of decoding algorithms, there is an emerging research trend of using prior knowledge of neuroscience principles to formulate neural information analytics. Furthermore, statistical machine-learning methods can leverage the spatiotemporal structure of brain representations to decode neural signals stably and accurately.

Another concern is the creation of a biologically plausible cognitive model. We presented the awareness-belief-goal-planning model to describe the coordination, cooperation, and merging relationships between biological and computing counterparts. This model was inspired by the multiagent framework and based on symbolic reasoning. A biologically plausible architecture combining neural networks with logical reasoning would revolutionize the cognitive model, especially in cyborg intelligent systems. For example, we need novel feature representation and integration mechanisms to generate awareness and other internal mental states, which requires end-to-end processing of both artificial sensor inputs and neural-spiking activity. Deliberation mechanisms and policy-driven reasoning approaches should account for the neural network implementation and generalize to various cognitive tasks in complicated environments.

Finally, co-adaptation and co-learning between biological and computing components play a critical role in merging toward cyborg intelligence. Neural circuits exhibit remarkable plasticity at various levels, from the firing rates of single neurons to the dynamic structure of large-scale networks. Apart from exploring robust biological reconstruction of physiological functions, it is important to develop machine-learning algorithms that extract neural representation patterns and build correct associations with cognitive goals. These mapping associations might expand the knowledge frontier on how neural circuits and computing components can be unified into a single cyborg intelligent component.

Acknowledgments

This work was supported by the National Key Basic Research Program of China (973 Program 2013CB329500). Direct correspondence and questions to Gang Pan at gpan@zju.edu.cn.

References



Zhaohui Wu is a professor in the College of Computer Science and Technology at Zhejiang University. Contact him at wzh@zju.edu.cn.
Yongdi Zhou is a professor in the Key Laboratory of Brain Functional Genomics at East China Normal University. Contact him at ydzhou.icn@gmail.com.
Zhongzhi Shi is a professor in the Key Laboratory of Intelligent Information Processing in the Institute of Computing Technology at the Chinese Academy of Sciences. Contact him at shizz@ics.ict.ac.cn.
Changshui Zhang is a professor in the Department of Automation at Tsinghua University. Contact him at zcs@mail.tsinghua.edu.cn.
Guanglin Li is a professor in the Shenzhen Institutes of Advanced Studies at the Chinese Academy of Sciences. Contact him at gl.li@siat.ac.cn.
Xiaoxiang Zheng is a professor in the Qiushi Academy for Advanced Studies at Zhejiang University and the Department of Biomedical Engineering at Zhejiang University. Contact her at zxx667@gmail.com.
Nenggan Zheng is an associate professor in the Qiushi Academy for Advanced Studies at Zhejiang University. Contact him at zng@cs.zju.edu.cn.
Gang Pan is a professor in the College of Computer Science and Technology at Zhejiang University. Contact him at gpan@zju.edu.cn.