IEEE Transactions on Multi-Scale Computing Systems

Expand your horizons with Colloquium, a monthly survey of abstracts from all CS transactions! It replaces OnlinePlus in January 2017.


From the October-December 2016 issue

Design of Resistive Synaptic Array for Implementing On-Chip Sparse Learning

By Pai-Yu Chen, Ligang Gao, and Shimeng Yu

The resistive cross-point array architecture has been proposed for on-chip implementation of the weighted sum and weight update operations in neuro-inspired learning algorithms. However, several limiting factors potentially hamper the learning accuracy, including the nonlinearity and device variations in weight update, and the read noise, limited ON/OFF weight ratio, and array parasitics in weighted sum. With unsupervised sparse coding as a case study algorithm, this paper employs device-algorithm co-design methodologies to quantify and mitigate the impact of these non-ideal properties on the accuracy. Our analysis shows that the realistic properties in weight update are tolerable, while those in weighted sum are detrimental to the accuracy. With calibration of realistic synaptic behaviors from experimental data, our study shows that the recognition accuracy on MNIST handwritten digits degrades from ∼96 to ∼30 percent. The strategies to mitigate this accuracy loss include 1) redundant cells to alleviate the impact of device variations; 2) a dummy column to eliminate the off-state current; and 3) a selector and larger wire width to reduce IR drop along interconnects. The selector also reduces the leakage power in weight update. With the properties improved by these strategies, the accuracy increases back to ∼95 percent, enabling reliable integration of realistic synaptic devices in neuromorphic systems.
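The dummy-column strategy mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' code; it assumes an idealized crossbar where each normalized weight maps linearly to a conductance between an off-state level G_OFF and an on-state level G_ON (the values below, with an ON/OFF ratio of 10, are illustrative). Because every cell conducts at least G_OFF, each column current carries an off-state offset proportional to the input sum; subtracting the current of a dummy column whose cells all sit at G_OFF recovers the intended weighted sum:

```python
# Hypothetical sketch of weighted sum on a resistive crossbar with a
# dummy column canceling the off-state current. G_ON, G_OFF, and the
# linear weight-to-conductance mapping are illustrative assumptions.

G_ON, G_OFF = 1.0, 0.1  # assumed conductance levels (ON/OFF ratio = 10)

def weight_to_conductance(w):
    """Map a normalized weight in [0, 1] to a conductance in [G_OFF, G_ON]."""
    return G_OFF + w * (G_ON - G_OFF)

def column_current(voltages, conductances):
    """Column current as the analog weighted sum: I = sum(V_i * G_i)."""
    return sum(v * g for v, g in zip(voltages, conductances))

def weighted_sum_with_dummy(voltages, weights):
    """Subtract a dummy column (all cells at G_OFF) to remove the
    off-state offset, recovering sum(V_i * w_i) up to a known scale."""
    col = column_current(voltages, [weight_to_conductance(w) for w in weights])
    dummy = column_current(voltages, [G_OFF] * len(weights))
    return (col - dummy) / (G_ON - G_OFF)

# The raw column current includes G_OFF * sum(V_i); after the dummy
# subtraction the ideal dot product is recovered exactly.
v = [0.2, 0.5, 0.3]
w = [1.0, 0.0, 0.5]
ideal = sum(vi * wi for vi, wi in zip(v, w))  # 0.35
print(abs(weighted_sum_with_dummy(v, w) - ideal) < 1e-9)  # True
```

In a real array the cancellation is only approximate, since read noise, device variations, and IR drop along the interconnects (the other non-idealities the paper quantifies) perturb both the data and dummy columns.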

View the PDF of this article | View this issue in the digital library


Editorials and Announcements

Announcements

  • We're pleased to announce that Partha Pratim Pande, professor at Washington State University, has accepted the position of inaugural Editor-in-Chief.

Editorials


Guest Editorials


Call for Papers

Special Issue on Advances in Parallel Graph Processing: Algorithms, Architectures and Application Frameworks

Submission Deadline: March 1, 2017. View PDF.

In the sphere of modern data science and applications, graph algorithms have achieved a pivotal place in advancing the state of scientific discovery and knowledge. Nearly three centuries of ideas have made graph theory and its applications a mature area in the computational sciences. Yet, today we find ourselves at a crossroads between theory and application. Spurred by the digital revolution, data from a diverse range of high-throughput channels and devices, from across internet-scale applications, are starting to mark a new era in data-driven computing and discovery. Building robust graph models and implementing scalable graph application frameworks in the context of this new era are proving to be significant challenges. Concomitant to the digital revolution, we have also experienced an explosion in computing architectures, with a broad range of multicores, manycores, heterogeneous platforms, and hardware accelerators (CPUs, GPUs) being actively developed and deployed within servers and multinode clusters. Recent advances have started to show that, in more than one way, these two fields (graph theory and architectures) are capable of benefiting from, and in fact spurring, new research directions in one another.

This special issue invites original research papers and authoritative position/survey papers that showcase cutting-edge research at the intersection of graph algorithms, graph applications and advanced architectures.

General Call for Papers

View PDF.


Access Recently Published TMSCS Articles

Sign up for the Transactions Connection Newsletter.


TMSCS is financially cosponsored by:

IEEE Computer Society
IEEE Communications Society
IEEE Nanotechnology Council

 

TMSCS is technically cosponsored by:

IEEE Council on Electronic Design Automation