From the January-March 2014 issue

Test Versus Security: Past and Present

By Jean Da Rolt, Amitabh Das, Giorgio Di Natale, Marie-Lise Flottes, Bruno Rouzeyre, and Ingrid Verbauwhede

Cryptographic circuits need to be protected against side-channel attacks, which target their physical attributes while the cryptographic algorithm is in execution. There can be various side-channels, such as power, timing, electromagnetic radiation, fault response, and so on. One such important side-channel is the design-for-testability (DfT) infrastructure present for effective and timely testing of VLSI circuits. The attacker can extract secret information stored on the chip by scanning out test responses against some chosen plaintext inputs. The purpose of this paper is to first present a detailed survey on the state-of-the-art in scan-based side-channel attacks on symmetric and public-key cryptographic hardware implementations, both in the absence and presence of advanced DfT structures, such as test compression and X-masking, which may make the attack difficult. Then, the existing scan attack countermeasures are evaluated for determining their security against known scan attacks. In addition, JTAG vulnerability and security countermeasures are also analyzed as part of the external test interface. A comparative area-timing-security analysis of existing countermeasures at various abstraction levels is presented in order to help an embedded security designer make an informed choice for the intended application.

View the PDF of this article | View this issue in the digital library


Editorials and Announcements

Announcements

  • A Welcome Letter from Thomas M. Conte (PDF)
  • We are pleased to announce that Fabrizio Lombardi, a professor at Northeastern University, Boston, has been appointed as the inaugural EIC for the IEEE Transactions on Emerging Topics in Computing, effective immediately. Dr. Lombardi is an IEEE Fellow, a member of the Computer Society Board of Governors, and a past EIC and Associate EIC of the IEEE Transactions on Computers.

Editorials


Call for Papers

Special Issue on Emerging Topics in the Design of High Performance Internet Routers

Submission deadline: December 1, 2014. View PDF.

Internet traffic is growing rapidly, driven by popular applications such as real-time entertainment and P2P file sharing. These applications transfer large amounts of data across the Internet, so to maintain good quality of service, Internet routers must address issues such as link speed, data throughput, and packet forwarding rate. Routers consult the destination address of each received packet and perform IP lookups in their routing tables to determine the packet's next hop; high-performance routers therefore require high-speed IP address lookup to achieve wire-speed packet forwarding. Using the information in its routing table or routing algorithm, a router then directs the packet to the next network on its journey. Routers perform the "traffic directing" functions on the Internet: a data packet is typically forwarded from one router to another through the networks that constitute the internetwork until it reaches its destination node. When multiple routers are used in interconnected networks, they exchange information about destination addresses using a dynamic routing protocol, and each router builds a table listing the preferred routes between any two systems on the interconnected networks. A router has interfaces for different physical types of network connections and contains firmware for different networking communication protocol standards; each network interface uses this specialized software to forward data packets from one protocol transmission system to another. The focus of this special issue will be on routing algorithms, routing table design, routing protocol specification, security strategies, and IPv6 deployment.
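
As a rough illustration of the IP lookup step mentioned above (not part of the call itself), the following minimal Python sketch performs a longest-prefix-match lookup against a toy routing table; the table entries and next-hop names are invented for the example.

```python
import ipaddress

# Toy routing table: prefix -> next hop (illustrative values only)
ROUTING_TABLE = {
    ipaddress.ip_network("10.0.0.0/8"): "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"): "eth2",   # default route
}

def next_hop(dst: str) -> str:
    """Return the next hop for dst using longest-prefix match."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in ROUTING_TABLE if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # most specific prefix wins
    return ROUTING_TABLE[best]

print(next_hop("10.1.2.3"))   # eth1 (matches 10.1.0.0/16, more specific than 10.0.0.0/8)
print(next_hop("192.0.2.1"))  # eth2 (falls through to the default route)
```

Production routers use specialized data structures (e.g., tries or TCAMs) rather than a linear scan, but the matching rule is the same.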

Special Issue on Parallel Programming and Architecture Support for Many-core Embedded Systems

Submission deadline: December 1, 2014. View PDF.

Embedded system designs have evolved over time from fairly simple single-core, single-memory designs to designs with many small homogeneous processing units connected by an on-chip network on the same silicon die. The number of cores integrated in a single chip is expected to increase rapidly in the coming years, moving from multi-core to many-core architectures. This requires a global rethinking of software and hardware design approaches. The purpose of this special issue is to solicit papers discussing the latest advancements in embedded many-core system design, with a focus on parallel programming and architecture support issues. It is intended to provide an opportunity to exchange the most recent research ideas and results, initiating constructive discussion between international researchers from industry and academia.

Special Issue on Circuit and System Design Methodologies for Emerging Technologies

Submission deadline: February 1, 2015. View PDF.

The demand for ever smaller, portable, low-power, and high-performance electronic systems has been the primary driver for CMOS technology scaling. As CMOS scaling approaches physical limits, it has been fraught with challenges that have required the introduction of new processes and materials. High-κ oxide and metal-gate stacks were introduced to mitigate oxide leakage. Thin-body, undoped channels were introduced to mitigate subthreshold leakage. 3D transistors such as FinFETs and trigates were introduced to improve ON current while maintaining layout efficiency. While these incremental adjustments have allowed CMOS technology to continue scaling, a number of alternative devices have been proposed to replace CMOS transistors, such as graphene transistors (GFETs), tunnel transistors, graphene nanoribbon tunnel transistors, quantum dots, and single-electron devices (SETs). Newer memory technologies such as resistive RAMs, memristors, and STT-RAMs similarly promise to revolutionize the design landscape. However, for these alternative technologies to become practical, design methodologies that allow efficient modeling, design-space exploration, and trade-off analysis are crucial. This is the driving motivation for this special issue.

Special Issue on Emerging Security Trends for Deeply-Embedded Computing Systems

Submission deadline: February 1, 2015. View PDF.

Unlike traditional embedded systems, today's emerging computing systems are embedded in every aspect of human life. These deeply-embedded computing systems often perform extremely sensitive tasks, and in some cases, such as health-care IT, these tasks are life-saving. Thus, in addition to the security threats facing traditional embedded systems, emerging deeply-embedded computing systems exhibit a larger attack surface and are prone to more serious, even life-threatening, malicious attacks. This calls for revisiting traditional security mechanisms, not only because of the new facets of threats and the more adverse effects of breaches, but also because of the resource limitations of these often battery-powered and extremely constrained computing systems. As such, new trends for providing security for deeply embedded systems are emerging, many of which abandon the use of conventional cryptographic computations or adopt lightweight cryptosystems feasible for these computing platforms. There is great potential for applying these emerging security approaches to sensitive applications such as health-care IT for implantable medical devices, big data analytics and machine learning in deeply embedded systems, smart buildings, and smart fabrics. The focus of this special issue will be on novel security methods for deeply-embedded computing systems, emerging cryptographic solutions applicable to extremely constrained applications such as green cryptography, and advancements in feasible security measures for evolving interdisciplinary research trends such as computing for health-care IT, cyber-physical embedded systems, big data, and smart buildings/fabrics.

Special Issue on Advances in Mobile Cloud Computing

Submission deadline: March 1, 2015. View PDF.

There has been a phenomenal burst of research activity in mobile cloud computing, which extends cloud computing functions, services, and results to the world of future mobile communications applications, and brings the paradigm of cloud computing and virtualization to mobile networks. Mobile applications demand greater resources and improved interactivity for a better user experience. Resources in cloud computing platforms such as Amazon, Google AppEngine, and Microsoft Azure are a natural fit to remedy the lack of local resources in mobile devices. The availability of cloud computing resources on a pay-as-you-go basis, advances in network virtualization and software-defined networks, and the emergence of advanced wireless networks such as cloud-based radio access networks (C-RANs) create a new space of rich research problems. The objective of this special section is to cover the most recent research and development on technologies for mobile cloud computing, and to offer a venue for industry and academia to showcase their recent progress and potential research directions in mobile cloud computing technologies.

Special Issue on Emerging Trends in Education

Submission deadline: March 1, 2015. View PDF.

Technological advancements, such as those seen in cloud computing, mobile devices, and big, open, and linked data, to name just a few, bring with them great opportunities for broadening the reach of the educational experience and enriching it. For instance, virtual learning environments are becoming commonplace in the communication between students and teachers, who can use a plethora of web-based tools and applications to publish assignments and submit them for grading, perform automatic assessment, and so on. At the same time, mobile computing is contributing to expanding the reach of learning content and frameworks, which are becoming accessible in an always-on and ready-to-go fashion. By leveraging smartphones and tablets, new pedagogical tools are being implemented that exploit their innovative interaction capabilities and rich sets of sensors to create immersive and interactive experiences not previously possible. Furthermore, the massive volume of information produced by these tools and environments opens up greater possibilities, including the sharing, analysis, and visualization of education data patterns and trends. Although there are many different visions for the future of education, great effort will be needed to achieve a profound integration between technologies that are already well-established and those that are considered emerging. By building on a solid scientific and methodological foundation where theory and practice converge, this special issue aims to present both the current trends that characterize the learning and teaching domains of today and the expected evolution that will shape the education of tomorrow.

Special Issue on Big Data Benchmarks, Performance Optimization, and Emerging Hardware

Submission deadline: June 1, 2015. View PDF.

Big data are emerging as a strategic asset of nations and organizations, and there is a driving need to generate value from them. However, the sheer volume of big data requires significant storage capacity, transmission bandwidth, computation, and power consumption. Systems of unprecedented scale are expected to resolve the problems posed by the variety and daunting volume of big data. Nevertheless, without big data benchmarks, it is very difficult for big data owners to decide which system best meets their specific requirements. They also face challenges in how to optimize these systems for specific or even comprehensive workloads. Meanwhile, researchers are working on innovative data management systems, hardware architectures, and operating systems to improve performance in dealing with big data. The focus of this special issue will be on architecture and system support for big data systems.

Special Issue on Methods and Techniques for Processing Streaming Big Data in Datacentre Clouds

Submission deadline: June 1, 2015. View PDF.

The Internet of Things (IoT) is a part of the Future Internet and comprises many billions of Internet-connected objects (ICOs), or 'things', that can sense, communicate, compute, and potentially actuate, and that have intelligence, multi-modal interfaces, and physical/virtual identities and attributes. ICOs can include sensors, RFIDs, social media, actuators (such as machines/equipment fitted with sensors), lab instruments (e.g., a high-energy physics synchrotron), and smart consumer appliances (smart TVs, smartphones, etc.). The IoT vision has recently given rise to IoT big data applications that are capable of producing billions of data streams and tens of years of historical data to support timely decision making. Some of the emerging IoT big data applications, e.g., smart energy grids, syndromic bio-surveillance, environmental monitoring, emergency situation awareness, digital agriculture, and smart manufacturing, need to process and manage massive, streaming, and multi-dimensional data from geographically distributed data sources.

Despite recent technological advances in data-intensive computing paradigms (e.g., the MapReduce paradigm, workflow technologies, stream processing engines, and distributed machine learning frameworks) and datacentre clouds, large-scale, reliable, system-level software for IoT big data applications is yet to become commonplace. As new and diverse IoT applications begin to emerge, there is a need for optimized techniques to distribute the processing of the streaming data produced by such applications across multiple datacentres that combine multiple, independent, and geographically distributed software and hardware resources. However, the capability of existing data-intensive computing paradigms is limited in many important respects: (i) they can only process data on compute and storage resources within a centralised local area network, e.g., a single cluster within a datacentre, which leads to unsatisfactory quality of service (QoS) in terms of timeliness of decision making, resource availability, data availability, etc., as application demands increase; (ii) they do not provide mechanisms to seamlessly integrate data spread across multiple distributed heterogeneous data sources (ICOs); (iii) they lack support for rapid formulation of intuitive queries over streaming data based on general-purpose concepts, vocabularies, and data discovery; and (iv) they do not provide decision-making support for selecting optimal data mining and machine learning algorithms, data application programming frameworks, and NoSQL database systems based on the nature of the big data (volume, variety, and velocity). Furthermore, adoption of existing datacentre cloud platforms for hosting IoT applications is yet to be realised due to the lack of techniques and software frameworks that can guarantee QoS under uncertain big data application behaviours (data arrival rate, number of data sources, decision-making urgency, etc.), unpredictable datacentre resource conditions (failures, availability, malfunction, etc.), and capacity demands (bandwidth, memory, storage, and CPU cycles). It is clear that existing data-intensive computing paradigms and related datacentre cloud resource provisioning techniques either fall short of the IoT big data challenge or do not exist.

Special Issue on Approximate and Stochastic Computing Circuits, Systems and Algorithms

Submission deadline: September 1, 2015. View PDF.

The last decade has seen renewed interest in non-traditional computing paradigms. Several (re-)emerging paradigms aim to leverage the error resiliency of many systems by relaxing the strict requirement of exactness in computing. This special issue of TETC focuses on two specific lines of research, known as approximate and stochastic computing.

Approximate computing is driven by considerations of energy efficiency. Applications such as multimedia, recognition, and data mining are inherently error-tolerant and do not require perfect accuracy in computation. The results of signal processing algorithms used in image and video processing are ultimately left to human perception. Therefore, strict exactness may not be required and an imprecise result may suffice. In these applications, approximate circuits aim to improve energy-efficiency by maximally exploiting the tolerable loss of accuracy and trading it for energy and area savings.
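
As a purely illustrative sketch (not taken from the call), the following Python function approximates addition by ignoring the k least-significant bits of its operands, one simple way accuracy is traded for energy and area in approximate arithmetic; the function name and parameter values are hypothetical.

```python
def approx_add(a: int, b: int, k: int = 4) -> int:
    """Approximate adder: ignore the k least-significant bits of each operand.

    Dropping low-order bits shortens the carry chain real hardware must
    implement, trading a bounded error (less than 2**(k+1)) for savings.
    """
    mask = ~((1 << k) - 1)          # clear the k low-order bits
    return (a & mask) + (b & mask)

exact = 1000 + 755
approx = approx_add(1000, 755)      # differs from the exact sum by less than 32
print(exact, approx, exact - approx)
```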

Stochastic computing is a paradigm that achieves fault tolerance and area savings through randomness. Information is represented by random binary bit streams, where the signal value is encoded by the probability of obtaining a one versus a zero. The approach is applicable to data-intensive applications such as signal processing, where small fluctuations can be tolerated but large errors are catastrophic. In such contexts, it offers savings in computational resources and provides tolerance to errors, and this fault tolerance scales gracefully to high error rates. The focus of this special issue will be on the novel design and analysis of approximate and stochastic computing circuits, systems, algorithms, and applications.
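
To make the encoding concrete, here is a small Python sketch (an illustration only, with invented stream lengths and values) of stochastic multiplication: two values in [0, 1] are encoded as random bit streams whose probability of a 1 equals the value, and a bitwise AND of the streams yields a stream whose fraction of 1s approximates the product.

```python
import random

def to_stream(p: float, n: int) -> list[int]:
    """Encode a value p in [0, 1] as an n-bit stochastic bit stream."""
    return [1 if random.random() < p else 0 for _ in range(n)]

def from_stream(bits: list[int]) -> float:
    """Decode a stream back to a value: the fraction of 1s."""
    return sum(bits) / len(bits)

n = 10_000
x, y = 0.8, 0.5
sx, sy = to_stream(x, n), to_stream(y, n)

# Multiplication of independent streams is a bitwise AND:
# P(both bits are 1) = P(x-bit is 1) * P(y-bit is 1) = x * y.
product_stream = [a & b for a, b in zip(sx, sy)]
print(from_stream(product_stream))   # close to 0.4; any single flipped bit shifts the result by only 1/n
```

The final comment hints at why the representation is fault-tolerant: each bit carries equal, tiny weight, so isolated errors perturb the decoded value only slightly.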

Special Issue/Section on Low-Power Image Recognition

Submission deadline: September 1, 2015. View PDF.

Digital images have become an integral part of everyday life. It is estimated that 10 million images are uploaded to social networks each hour and 100 hours of video are uploaded for sharing each minute. Sophisticated image/video processing has fundamentally changed how people interact. For example, automatic classification or tagging can mediate how photographs are disseminated to friends. Many of today's images are captured using smartphones, and the cameras in smartphones can be used for a wide range of imaging applications, from high-fidelity location estimation to posture analysis. Image processing is computationally intensive and can consume significant amounts of energy on mobile systems. This special issue focuses on the intersection of image recognition and energy conservation. Papers should describe energy-efficient systems that perform object detection and recognition in images.

General Call for Papers: IEEE Transactions on Emerging Topics in Computing

Submit your manuscript at www.computer.org/tetc. TETC aggressively seeks proposals for Special Sections and Issues focusing on emerging topics. TETC is an open access journal, which allows for wider dissemination of information. Prospective Guest Editors should contact the TETC EIC Fabrizio Lombardi at lombardi@ece.neu.edu for further details.

View complete call for papers.


Access recently published TETC articles

Subscribe to the RSS feed of the latest TETC content added to the digital library.

Sign up for the Transactions Connection newsletter.


 

IEEE Transactions on Emerging Topics in Computing (TETC) is now accepting manuscript submissions. To submit your manuscript, please use the ScholarOne Manuscripts submission site.