IEEE Transactions on Multi-Scale Computing Systems
From the April-June 2016 issue
Wearables, Implants, and Internet of Things: The Technology Needs in the Evolving Landscape
By Sandip Ray, Jongsun Park, and Swarup Bhunia
The proliferation of wearable and implantable computing devices in recent years, together with the emergence of the Internet of Things, has ushered in an era of computing characterized by explosive growth and diversification of computing platforms. Unfortunately, the traditional research silos in computer science and engineering appear inadequate for enabling and sustaining the requirements of this new computing era. This paper examines some of these key requirements, explains why current computing abstractions and research silos are insufficient, and identifies several research challenges. The challenges cut across several computing disciplines, including programming languages, computer architecture, physical design, security, algorithms, and analytics.
Editorials and Announcements
- We're pleased to announce that Partha Pratim Pande, professor at Washington State University, has accepted the position of inaugural Editor-in-Chief.
- Editorial (Jan-March 2016)
- Introduction to IEEE Transactions on Multiscale Computing Systems (TMSCS) (Jan-March 2015)
- Welcome Message (Jan-March 2015)
- Emerging Memory Technologies—Modeling, Design, and Applications for Multi-Scale Computing (July-Sept 2015)
- Wearables, Implants, and Internet of Things (April-June 2015)
Call for Papers
Special Issue on Accelerated Computing
Extended Submission Deadline: August 15, 2016.
Accelerated computing is a computing model in which calculations are carried out on specialized hardware (known as accelerators) in tandem with traditional CPUs to achieve faster, lower-power, and even more reliable execution. Accelerators are highly specialized hardware components that can execute a specific functionality very efficiently. Functionality that maps well to an accelerator is offloaded to it, while the remainder of the code continues to run on the CPU. Accelerators come in several forms. Fixed-function hardware accelerators, such as the LTE, GNSS, and GSM modules found on modern smartphones, are essentially ASICs that accelerate a particular computing functionality and provide the best power and performance possible for that functionality. However, they are not programmable, which limits their usage. There is a large variety of programmable accelerators, from vector processors (e.g., Intel SSE and PowerPC AltiVec) to General-Purpose Graphics Processing Units (GPGPUs). Programmable accelerators trade some power efficiency for easier adoption and higher reusability. Field-Programmable Gate Arrays (FPGAs), while primarily used for fast prototyping, are increasingly being used as programmable, reconfigurable accelerators. Accelerators by design have abundant computing resources and therefore naturally provide computational redundancy, which can be used to achieve fault tolerance through redundant computation.
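The offload pattern described above can be sketched in a few lines. This is a minimal illustration only, not tied to any particular vendor API: the `Accelerator` class and its `run_kernel` method are hypothetical stand-ins for a real offload interface (e.g., a GPU or FPGA runtime), used here just to show the division of labor between accelerator and CPU.

```python
# Minimal sketch of the CPU + accelerator offload pattern.
# The Accelerator class is a hypothetical stand-in for a real
# offload API; it models a unit that runs one kernel efficiently.

class Accelerator:
    """Models a specialized unit that executes one kernel very efficiently."""
    def run_kernel(self, data):
        # The compute-intensive, regular part of the workload is offloaded.
        return [x * x for x in data]

def process(data, accel):
    # Offload the part that maps well to the accelerator...
    squared = accel.run_kernel(data)
    # ...while the irregular, control-heavy remainder stays on the CPU.
    return sum(v for v in squared if v % 2 == 0)

result = process(range(10), Accelerator())
```

In a real system the boundary in `process` is exactly where data transfer and synchronization costs arise, which is one reason the communication models discussed in this call matter.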
While accelerated computing has been the topic of a great deal of extremely influential research, many issues still need further investigation to increase its effectiveness and efficiency. Novel accelerator architectures are needed for emerging applications such as deep neural networks and brain simulations. New programming models are needed to effectively and dynamically balance the workload among the computational resources within the accelerator, and also between the accelerator and the CPU. These programming models should keep programming natural and easy; otherwise, code becomes hard to understand, debug, and maintain. Several models of communication between the CPU and accelerators have been developed; however, determining the right model for a given application remains an open question. Given a computing fabric with several CPUs and several accelerators, how to specify, partition, and execute an application to achieve efficient computing is still an open challenge. This special issue aims at collating new research along all the dimensions of accelerated computing.
Special Issue on System Support for Intermittent Computing
Submission Deadline: October 1, 2016.
Low-power computing devices that harvest radio waves, vibration, light, and other ambient sources are key enablers of emerging applications, including infrastructure sensing, medical implants, and the Internet of Things. A key challenge for energy-harvesting systems is that they operate only intermittently, as energy is available. Systems may power off hundreds of times per second, and when power fails, software, peripherals, and memory are disrupted. The intermittent execution model presents system designers with fundamentally new design challenges that must be solved to make energy-harvesting computers viable. Today’s circuits, architectures, software and compilers, programming languages, and even programmers all assume that energy is continuously available. Intermittence invalidates this assumption, demanding that we rethink all layers of the system stack. This issue invites submissions solving problems faced by intermittent systems, with an emphasis on cross-cutting work with contributions in multiple areas.
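One common coping strategy in this space is checkpointing: committing volatile state to nonvolatile memory so that progress survives power failures. The sketch below is an illustrative assumption, not a specific system from this call: a plain dict stands in for nonvolatile memory (FRAM or flash on a real device), and a `PowerFailure` exception models an arbitrary power loss mid-computation.

```python
# Sketch of checkpoint-based intermittent execution. On a real
# energy-harvesting device the checkpoint would live in nonvolatile
# memory (e.g., FRAM); here a dict stands in for that storage, and
# PowerFailure models power loss at an arbitrary point.

nonvolatile = {"i": 0, "total": 0}   # state that survives "power failures"

class PowerFailure(Exception):
    pass

def step(i, fail_at=None):
    if i == fail_at:
        raise PowerFailure()         # power lost before this step commits
    return i

def run(n, fail_at=None):
    # Resume from the last committed checkpoint rather than from scratch.
    i = nonvolatile["i"]
    total = nonvolatile["total"]
    while i < n:
        total += step(i, fail_at)
        i += 1
        # Commit progress after each step, so a failure loses at most one step.
        nonvolatile["i"], nonvolatile["total"] = i, total
    return total

try:
    run(10, fail_at=6)               # first boot: power fails at i == 6
except PowerFailure:
    pass
result = run(10)                     # after "reboot": resumes at i == 6
```

The per-step commit is deliberately conservative; real systems must also make the commit itself atomic and keep peripheral state consistent across failures, which is exactly the kind of cross-layer problem this call targets.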
Special Issue on Cognitive Computing with Emerging Technology
Submission Deadline: October 15, 2016.
Over the last several decades, Dennard scaling and Moore’s law have dramatically improved the capabilities of Von Neumann-style computing systems, in which “memory” delivers instructions and data to a dedicated “processing unit”. However, as the scaling limitations of 2-D ICs become more apparent, there is growing interest in innovations that will ensure that future computing systems continue to be exponentially more capable than the systems of today.
In particular, cognitive computing systems inspired by facets of the human brain, such as unsupervised, autonomous, and continuous learning, are emerging as a promising alternative. Research in this area often involves cross-disciplinary exploration at multiple scales, combining new materials and devices with novel architectural concepts and integration schemes. Targeting the broad device, circuit, and architecture communities, as well as the nanotechnology research community, this special issue seeks papers on innovative new concepts for such systems. High-risk, high-reward ideas that rethink system design at multiple scales will be preferred over incremental research. While many of these systems will rely on non-Von Neumann architectures, the call does not preclude massively parallel systems with conventional hardware components, where novel integration and/or packaging could enable new capabilities such as the high degree of connectivity and collective function reminiscent of the neocortex and other natural systems.
Special Issue on Emerging Technologies and Architectures for Manycore Computing
Submission Deadline: December 1, 2016.
The pursuit of Moore's Law is slowing, and the exploration of alternative devices to replace the CMOS transistor and the traditional architectures at the heart of data processing is underway. Moreover, the emergence of stringent application constraints, particularly those linked to energy consumption, requires new system architectural strategies (e.g., manycore) and real-time operational adaptability. Such complex systems require new and powerful design and programming methods to ensure optimal and reliable operation. This special issue aims at collating new research along all the dimensions of emerging technologies and architectures for manycore computing.
General Call for Papers
TMSCS is financially cosponsored by:
TMSCS is technically cosponsored by: