Abstract—This special issue introduces the state of the art, open research challenges, new solutions, and applications for intelligence in cloud computing. Specifically, we have selected three high-quality papers for this special issue, covering resource management, a machine learning-based framework, and a blockchain-based mechanism that enables intelligent cloud computing.
Keywords—artificial intelligence; GPU; tensor processing units; intelligence in the cloud
Artificial intelligence (AI), since its birth in the 1950s, has been regarded as the key to our civilization's brightest future. In pursuit of this vision, various machine learning approaches, e.g., deep learning, supervised learning, unsupervised learning, and reinforcement learning, have been proposed. The arrival of the big data era has renewed the call for advanced machine learning technologies that can extract knowledge from large data pools. With its rich resource provisioning, cloud computing is widely regarded as an ideal platform for resource-intensive machine learning, enabling intelligence in the cloud. Integrating intelligence into the cloud is without doubt a promising development trend for both cloud computing and AI.
In terms of hardware support for intelligence in the cloud, many companies have designed specialized AI chips, especially for neural networks, because powerful computation is key to AI. These chips are widely deployed across clouds, from core datacenters to the edge. GPUs and tensor processing units (TPUs) are the two most powerful classes of AI chips. In 2017, NVIDIA released the Tesla V100 GPU with the new Volta architecture, which notably incorporates Tensor Cores into its streaming multiprocessors alongside other general improvements. In the same year, Google announced TPU 2.0 (Cloud TPU), aiming to connect tensor-specific computation into larger systems such as the Google Compute Engine. For mobile and edge devices, Apple released the Apple Neural Engine, a module of its system-on-chip (SoC), to process AI tasks; Qualcomm and Huawei have also designed their own AI modules in their SoCs. In terms of software support, many deep learning frameworks have been deployed to the cloud, such as TensorFlow (Google), Caffe2 (Facebook), CNTK (Microsoft), MXNet (Amazon), and Deeplearning4j. Deeplearning4j can be integrated with Hadoop and Spark and is designed for business environments running on distributed GPUs and CPUs.
On the other hand, AI techniques have also been widely applied to resource management in the cloud. For example, reinforcement learning has been used to improve job scheduling for Spark Streaming. Knowledge-Defined Networking has been proposed as a new paradigm that accommodates and exploits Software-Defined Networking, network analytics, and AI for datacenter networking by extracting knowledge from network logs.
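To make the reinforcement learning idea concrete, the following is a toy sketch (not drawn from any of the featured articles) of how an agent can learn a job-routing policy: a tabular Q-learning scheduler decides which of two hypothetical executors should run each job, rewarded by negative processing time. The job-size buckets, executor speeds, and all parameter values are invented for illustration.

```python
# Toy sketch: tabular Q-learning for routing jobs to executors.
# All job sizes, processing times, and hyperparameters are invented.
import random

random.seed(42)

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
STATES = ("small", "big")        # size bucket of the incoming job
ACTIONS = (0, 1)                 # hypothetical executor 0 or executor 1

# Invented processing times: executor 0 favors small jobs, executor 1 big jobs.
PROC_TIME = {("small", 0): 1, ("small", 1): 3,
             ("big", 0): 5, ("big", 1): 2}

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def greedy(state):
    # Pick the action with the highest learned value.
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def choose(state):
    # Epsilon-greedy exploration: occasionally try a random executor.
    return random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)

def train(steps=10000):
    state = random.choice(STATES)
    for _ in range(steps):
        action = choose(state)
        reward = -PROC_TIME[(state, action)]   # faster completion = higher reward
        next_state = random.choice(STATES)     # the next job arrives
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        # Standard Q-learning update rule.
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

train()
print(greedy("small"), greedy("big"))  # learned routing policy per job size
```

After training, the greedy policy routes small jobs to the faster executor 0 and big jobs to executor 1. Real systems face far larger state spaces (queue depths, cluster load, data locality) and typically use function approximation rather than a table, but the learning loop has the same shape.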
This special issue introduces the state of the art, open research challenges, new solutions, and applications for intelligence in cloud computing. Specifically, we have selected three high-quality papers for this special issue, covering resource management, a machine learning-based framework, and a blockchain-based mechanism that enables intelligent cloud computing.
We are still at an early stage of integrating intelligence into cloud computing. The articles selected for this special issue offer a snapshot of recent developments in this area and help our readers identify further challenges to be addressed by both the research community and industry. Finally, we would like to acknowledge the great support from Mazin Yousif, the current Editor-in-Chief of IEEE Cloud Computing magazine, Beverly Lindeen, Managing Editor of IEEE Cloud Computing magazine, and other IEEE Computer Society publication staff.