IEEE Transactions on Multi-Scale Computing Systems
From the October-December 2015 issue
An Ultra-Low Power, "Always-On" Camera Front-End for Posture Detection in Body Worn Cameras Using Restricted Boltzmann Machines
By Soham Jayesh Desai, Mohammed Shoaib, and Arijit Raychowdhury
The Internet of Things (IoT) has triggered rapid advances in sensors, surveillance devices, wearables, and body area networks with advanced Human-Computer Interfaces (HCI). One such application area is the adoption of Body Worn Cameras (BWCs) by law enforcement officials. The need to be "always-on" puts heavy constraints on battery usage in these camera front-ends, thus limiting their widespread adoption. Further, the increasing number of such cameras is expected to create a data deluge, which requires large processing, transmission, and storage capabilities. Instead of continuously capturing and streaming or storing videos, it is prudent to provide "smartness" to the camera front-end. This requires hardware-assisted image recognition and template matching in the front-end, capable of making judicious decisions on when to trigger video capture or streaming. Neural networks based on Restricted Boltzmann Machines (RBMs) have been shown to provide high accuracy for image recognition and are well suited for low-power and reconfigurable systems. In this paper we propose an RBM-based "always-on" camera front-end capable of detecting human posture. Aggressive behavior of a person in the field of view is used as a wake-up signal for further data collection and classification. The proposed system has been implemented on a Xilinx Virtex 7 XC7VX485T platform. A minimum dynamic power of 19.18 mW was measured at the target recognition accuracy while meeting real-time constraints. The hardware-software co-design illustrates the trade-offs in the design with respect to accuracy, resource utilization, processing time, and power. The results demonstrate the possibility of a true "always-on" body-worn camera system in the IoT environment.
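The core idea in the abstract — an RBM layer turning a camera frame into features that drive a wake-up decision — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the image size, layer width, random weights, and the linear read-out are all placeholder assumptions standing in for the trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical dimensions: a 16x16 binary silhouette frame (256 visible
# units) mapped to 64 hidden features by one RBM layer.
n_visible, n_hidden = 256, 64
W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))  # illustrative weights
b_hidden = np.zeros(n_hidden)                         # hidden biases

def rbm_hidden_probs(v):
    """Forward pass of a binary RBM: P(h_j = 1 | v) = sigmoid(v @ W + b)."""
    return sigmoid(v @ W + b_hidden)

# A random binary "frame" standing in for a thresholded camera image.
frame = rng.integers(0, 2, size=n_visible).astype(float)
h = rbm_hidden_probs(frame)

# A wake-up decision could threshold a score computed from the hidden
# features, e.g. a linear read-out trained to flag aggressive postures
# (the read-out weights here are placeholders, not a trained model).
readout = rng.normal(0.0, 0.1, size=n_hidden)
score = sigmoid(h @ readout)
wake_up = bool(score > 0.5)
```

Because the forward pass is just a matrix-vector product followed by a sigmoid, it maps naturally onto the multiply-accumulate fabric of an FPGA such as the Virtex 7 used in the paper.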
Editorials and Announcements
- We're pleased to announce that Partha Pratim Pande, professor at Washington State University, has accepted the position of inaugural Editor-in-Chief.
- Introduction to IEEE Transactions on Multiscale Computing Systems (TMSCS) (Jan-March 2015)
- Welcome Message (Jan-March 2015)
- Emerging Memory Technologies—Modeling, Design, and Applications for Multi-Scale Computing (July-Sept 2015)
- Wearables, Implants, and Internet of Things (April-June 2015)
Call for Papers
Special Issue on Design and Applications of Neuromorphic Computing Systems
Submission Deadline Extended: February 8, 2016.
As artificial intelligence technology becomes pervasive in society and ubiquitous in our lives, the desire for embedded-everywhere and human-centric computational intelligence calls for a new computing paradigm. However, applications of machine learning and neural networks involve large, noisy, incomplete, natural data sets that do not lend themselves to convenient solutions from current systems. Neuromorphic systems, inspired by the working mechanisms of the human brain, possess a massively parallel architecture with closely coupled memory and computing. This special issue focuses on computing methodologies and systems across multiple technology scales that accelerate the development of neuromorphic hardware and its adoption for machine learning applications.
Special Issue on Accelerated Computing
Submission Deadline: June 15, 2016.
Accelerated computing is a computing model in which calculations are carried out on specialized hardware (an accelerator) in tandem with traditional CPUs to achieve faster, lower-power, and even more reliable execution. Accelerators are highly specialized hardware components that execute a specific functionality very efficiently: that functionality runs on the accelerator, while the remainder of the code continues to run on the CPU. Accelerators come in several forms. Hardware accelerators, e.g., the LTE, GNSS, and GSM modules found on modern smartphones, are essentially ASICs that accelerate a particular computing functionality and provide the best possible power and performance for it. However, they are not programmable, which limits their usage. There is a large variety of programmable accelerators, from vector processors (e.g., Intel SSE and PowerPC AltiVec) to GPGPUs (General-Purpose Graphics Processing Units). Programmable accelerators trade off some power efficiency for easier adoption and higher reusability. Field Programmable Gate Arrays (FPGAs), while primarily used for fast prototyping, are also being used as programmable, reconfigurable accelerators. Accelerators by design have abundant computing resources and therefore naturally provide computational redundancy, which can be used to provide fault tolerance through redundant computation.
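The work-split described above — the hot kernel runs on the accelerator while control flow stays on the CPU — can be sketched with a toy dispatcher. This is an illustration of the pattern only: plain Python loops stand in for the host CPU path and NumPy's vectorized kernel stands in for a specialized accelerator; the function names are made up, not a real offload runtime API.

```python
import numpy as np

def kernel_cpu(a, b):
    """Reference scalar implementation (host CPU path)."""
    return sum(x * y for x, y in zip(a, b))

def kernel_accel(a, b):
    """Offloaded implementation (stand-in for an accelerator kernel)."""
    return float(np.dot(np.asarray(a), np.asarray(b)))

def dot(a, b, use_accelerator=True):
    # The dispatcher decides where the kernel executes; everything around
    # it (validation, setup, result handling) remains on the CPU.
    if len(a) != len(b):
        raise ValueError("length mismatch")
    return kernel_accel(a, b) if use_accelerator else kernel_cpu(a, b)

a, b = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
# Both paths must agree on the result; only cost and energy differ.
assert dot(a, b, use_accelerator=False) == dot(a, b, use_accelerator=True) == 32.0
```

Keeping the two paths semantically identical is also what enables the fault-tolerance-through-redundancy idea mentioned above: the same kernel can be executed on both resources and the results compared.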
While accelerated computing has been the subject of much extremely influential research, many issues still require further investigation to increase its effectiveness and efficiency. Novel accelerator architectures are needed for emerging applications, e.g., deep neural networks and brain simulations. New programming models are needed to effectively and dynamically balance the workload among the computational resources within the accelerator, and also between the accelerator and the CPU. Of course, these programming models should keep programming natural and easy; otherwise, code becomes hard to understand, debug, and maintain. Several models of communication between the CPU and accelerators have been developed; however, the right model for a given application must still be determined. Given a computing fabric with several CPUs and several accelerators, how to specify an application, and how to partition and execute it to achieve efficient computing, remain open challenges. This special issue aims at collating new research along all the dimensions of accelerated computing.