From the January-March 2014 issue
Test Versus Security: Past and Present
By Jean Da Rolt, Amitabh Das, Giorgio Di Natale, Marie-Lise Flottes, Bruno Rouzeyre, and Ingrid Verbauwhede
Cryptographic circuits need to be protected against side-channel attacks, which target their physical attributes while the cryptographic algorithm is in execution. Such side-channels include power consumption, timing, electromagnetic radiation, and fault responses. One important side-channel is the design-for-testability (DfT) infrastructure added for effective and timely testing of VLSI circuits: an attacker can extract secret information stored on the chip by scanning out test responses for chosen plaintext inputs. This paper first presents a detailed survey of the state of the art in scan-based side-channel attacks on symmetric and public-key cryptographic hardware implementations, both with and without advanced DfT structures, such as test compression and X-masking, which may make the attack more difficult. Existing scan-attack countermeasures are then evaluated to determine their security against known scan attacks. In addition, JTAG vulnerabilities and security countermeasures are analyzed as part of the external test interface. A comparative area-timing-security analysis of existing countermeasures at various abstraction levels is presented to help an embedded-security designer make an informed choice for the intended application.
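As a toy illustration of the attack principle (not any specific attack from the survey), suppose a round register holds sbox(plaintext XOR key) and the attacker can switch to test mode and scan that register out; inverting the public S-box then reveals the key byte. The S-box, key, and function names below are invented for illustration:

```python
# Toy model of a scan-based side-channel attack. In a real chip the round
# register is a bank of scan flip-flops; here it is just a function.

SBOX = [(x * 7 + 3) % 256 for x in range(256)]   # toy bijective S-box
INV_SBOX = {v: i for i, v in enumerate(SBOX)}

SECRET_KEY = 0x5A                                 # hidden from the attacker

def round_register(plaintext):
    """Value latched after one toy cipher round: sbox(pt ^ key)."""
    return SBOX[plaintext ^ SECRET_KEY]

def scan_out(plaintext):
    """Model of shifting the round register out via the scan chain."""
    return round_register(plaintext)

def attack(plaintext):
    """Invert the known S-box on the scanned value to recover the key byte."""
    observed = scan_out(plaintext)
    return INV_SBOX[observed] ^ plaintext

assert attack(0x3C) == SECRET_KEY                 # key recovered from one scan
```

Real attacks must first locate the round-register bits inside a long scan chain and cope with compression or masking, which is exactly what the surveyed countermeasures target.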
Editorials and Announcements
- A Welcome Letter from Thomas M. Conte (PDF)
- We are pleased to announce that Fabrizio Lombardi, a professor at Northeastern University, Boston, has been appointed as the inaugural EIC of the IEEE Transactions on Emerging Topics in Computing, effective immediately. Dr. Lombardi is an IEEE Fellow, a member of the Computer Society Board of Governors, and a past EIC and Associate EIC of the IEEE Transactions on Computers.
Call for Papers
Special Issue on Emerging Mobile and Ubiquitous Systems
Submission deadline: July 1, 2014. View PDF.
Over the last few years there has been renewed interest in the area of mobile and ubiquitous systems. The worldwide expansion of IP networks has made Machine-to-Machine (M2M) communication far easier and has reduced the power and time needed for machines to exchange information. The Internet of Things is becoming a non-deterministic, open network in which self-organized objects are interoperable and able to act independently depending on context, circumstances, or environment. It leverages the ability to collect and analyze the digital traces left by interactions with widely deployed smart things, in order to discover knowledge about human life, environment interaction, and social connections and behavior. This special issue also includes emerging systems and applications that combine mobile/ubiquitous computing with cloud computing, social networks, data mining, cyber-physical systems, service computing, and related areas.
Special Issue on Advances in Neuromorphic and Analog VLSI Computing
Submission deadline: August 1, 2014. View PDF.
Over the last few years there has been a renewed interest in the area of neuromorphic and analog VLSI computing. As an alternative to digital computation and digital signal processing, neuromorphic and analog VLSI processors exploit computational primitives inherent in the device physics, similar to principles that have been observed in neurobiology. As a result, very high computational densities and energy efficiencies can potentially be achieved using massively parallel architectures. This is particularly true for sensory signal processing and recognition systems where precise computing is not mission critical. On the other end of the spectrum, massively parallel neuromorphic computing systems are enabling near real-time simulations of biological systems ranging from a single neuron to the functional level at the scale of a mammalian brain. As such, there is tremendous potential for applying neuromorphic and analog VLSI computing techniques to mobile devices, biomedical systems, unattended sensors for defense and security systems, and cognitive computing systems. The focus of this special issue will be on novel neuromorphic and analog VLSI computing algorithms, non-traditional neuromorphic and analog VLSI circuits, algorithm and circuit co-design, and emerging applications.
Special Issue on Advances in Semantic Computing
Submission deadline: September 1, 2014. View PDF.
Semantic Computing (SC) is an emerging field that addresses computing technologies which allow users to search, create and manipulate computational resources (including data, documents, tools, people, devices, etc.) based on semantics ("meaning", "intention"). Semantic Computing includes the computing technologies (e.g., artificial intelligence, natural language, software engineering, data and knowledge engineering, computer systems, signal processing, etc.), and their interactions, that may be used to extract or process computational content and descriptions. While some areas of Semantic Computing have appeared as isolated pieces in individual disciplines, Semantic Computing glues these pieces together into an integrated theme with synergetic interactions. It addresses not only the analysis and transformation of signals (e.g., pixels, words) into useful information, but also how such information can be accessed and used to synthesize new signals.
Special Issue on Cyber Security
Submission deadline: September 1, 2014. View PDF.
Cyber security is receiving a very high level of attention from researchers, decision makers, policy makers, and the general public. The value of digital information is growing dramatically, and physical systems coupled with computing devices (so-called cyber-physical systems) carry out functions that are fundamental to our society. Protecting these emerging critical digital infrastructures is an increasingly relevant objective from a military and political point of view. For this reason, the IEEE Transactions on Emerging Topics in Computing (TETC) seeks original manuscripts for a Special Issue on Emerging Topics in Cyber Security, scheduled to appear in the first issue of 2015. TETC is the newest Transactions of the IEEE Computer Society, and it uses an Open Access model exclusively.
Papers may present advances in the theory, design, implementation, analysis, verification, or empirical evaluation and measurement of cyber security systems, to deal with emerging computing technologies and applications. Given the peculiar nature of TETC, we are seeking in particular papers that are more "far-reaching" than is usual for journal submissions, as long as they show promise for opening up new areas of study, or questioning long-held beliefs and tenets of the cyber security field.
Special Issue on Emerging Topics in the Design of High Performance Internet Routers
Submission deadline: September 1, 2014. View PDF.
Internet traffic is growing rapidly, driven by popular applications such as real-time entertainment and P2P file sharing. These applications push a great deal of data through the Internet, so to maintain good quality of service, routers must address issues such as link speed, data throughput, and packet-forwarding rate. A router consults the destination address of each packet it receives and performs an IP lookup in its routing table to determine the next hop; high-performance routers require high-speed IP address lookup to achieve wire-speed packet forwarding. Routers thus perform the "traffic directing" function of the Internet: a data packet is typically forwarded from one router to another through the networks that constitute the internetwork until it reaches its destination node. When multiple routers are used in interconnected networks, they exchange information about destination addresses using a dynamic routing protocol, and each router builds a table listing the preferred routes between any two systems on the interconnected networks. A router has interfaces for different physical types of network connections and contains firmware for different networking protocol standards; each network interface uses this specialized software to forward data packets from one protocol transmission system to another. The focus of this special issue will be on routing algorithms, routing table design, routing protocol specification, security strategies, and IPv6 deployment.
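The IP lookup described above selects the most specific matching prefix (longest-prefix match). A minimal sketch using Python's standard ipaddress module follows; the routing-table entries and next-hop names are invented, and a real router would use a trie or TCAM rather than this linear scan:

```python
import ipaddress

# Illustrative routing table: prefix -> next hop (names are made up).
ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"): "hop-A",
    ipaddress.ip_network("10.1.0.0/16"): "hop-B",
    ipaddress.ip_network("0.0.0.0/0"): "default-hop",
}

def next_hop(dst):
    """Longest-prefix match: among matching prefixes, pick the longest."""
    addr = ipaddress.ip_address(dst)
    best = None
    for net, hop in ROUTES.items():
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, hop)
    return best[1]

assert next_hop("10.1.2.3") == "hop-B"       # the /16 beats the /8
assert next_hop("10.9.9.9") == "hop-A"
assert next_hop("8.8.8.8") == "default-hop"  # falls through to 0.0.0.0/0
```

The linear scan is O(table size) per packet; the trie, hash, and TCAM lookup structures this special issue targets exist precisely to bring that cost down to wire speed.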
Special Issue on Reproducible Research Methodologies
Submission deadline: September 1, 2014. View PDF.
Computer science and engineering research fields increasingly rely on numerous ad hoc methods to explore research breakthroughs, particularly during empirical and statistical analysis, modeling, optimization, and simulation of complex computer systems. These ad hoc methods are used for a variety of reasons, including problem complexity and size, speed of advancement and return on investment, cost of designing prototypes, and minimal access to state-of-the-art fabrication. However, the lack of a common experimental methodology, and of simple, unified mechanisms, tools, and repositories to preserve and exchange the whole experimental setup, including all past research artifacts, makes it excessively challenging or even impossible to accurately reproduce experimental results for evaluation and future advancement.
The special issue on reproducible research methodologies is an invitation to papers covering topics in hardware and software analysis, modeling, optimization, run-time adaptation, simulation and co-design. Papers are solicited to address scientific (and possibly interdisciplinary) methods to design research environments, experimental methodologies, and gold standards for trustable and reproducible research methodologies in computer science and engineering disciplines. Further, papers on public frameworks and repositories to preserve and exchange research artifacts and experimental setups are also encouraged.
Special Issue on Parallel Programming and Architecture Support for Many-core Embedded Systems
Submission deadline: December 1, 2014. View PDF.
Embedded system designs have evolved over time from fairly simple single-core, single-memory designs to many small homogeneous processing units connected by an on-chip network on the same silicon. The number of cores integrated in a single chip is expected to rise rapidly in the coming years, moving from multi-core to many-core architectures. This requires a global rethinking of software and hardware design approaches. The purpose of this special issue is to solicit papers discussing the latest advancements in embedded many-core system designs, with a focus on parallel programming and architecture support issues. It is intended to provide an opportunity to exchange the most recent research ideas and results, initiating constructive discussion between researchers from industry and academia.
Special Issue on Circuit and System Design Methodologies for Emerging Technologies
Submission deadline: February 1, 2015. View PDF.
The demand for ever smaller, portable, low-power, and high-performance electronic systems has been the primary driver for CMOS technology scaling. As CMOS scaling approaches physical limits, it has been fraught with challenges that have required the introduction of new processes and materials. High-κ oxide and metal-gate stacks were introduced to mitigate gate-oxide leakage. Thin-body, undoped channels were introduced to mitigate subthreshold leakage. 3D transistors such as FinFETs and trigates were introduced to improve ON current while maintaining layout efficiency. While these incremental adjustments have allowed CMOS technology to scale, a number of alternative devices have been proposed to replace CMOS transistors, such as graphene transistors (GFETs), tunnel transistors, graphene nanoribbon tunnel transistors, quantum dots, and single-electron devices (SETs). Newer memory technologies such as resistive RAMs, memristors, and STT-RAMs similarly promise to revolutionize the design landscape. However, for these alternative technologies to become practical, design methodologies that allow efficient modeling, design-space exploration, and trade-off analysis are crucial. This is the driving motivation for this special issue.
Special Issue on Emerging Security Trends for Deeply-Embedded Computing Systems
Submission deadline: February 1, 2015. View PDF.
Unlike traditional embedded systems, today's emerging computing systems are embedded in every aspect of human life. These deeply-embedded computing systems often perform extremely sensitive tasks, and in some cases, such as health-care IT, these are life-saving. Thus, in addition to the security threats faced by traditional embedded systems, emerging deeply-embedded computing systems present a larger attack surface and are prone to more serious, even life-threatening, malicious attacks. This calls for revisiting traditional security mechanisms, not only because of the new facets of threats and the more adverse effects of breaches, but also because of the resource limitations of these often battery-powered and extremely constrained computing systems. As such, new trends for securing deeply-embedded systems are emerging, many of which abandon heavyweight cryptographic computations in favor of lightweight crypto-systems feasible for these computing platforms. Indeed, there is great potential for applying these emerging security approaches to sensitive applications such as health-care IT for implantable medical devices, big data analytics and machine learning in deeply-embedded systems, smart buildings, and smart fabrics. The focus of this special issue will be on novel security methods for deeply-embedded computing systems, emerging cryptographic solutions applicable to extremely constrained applications such as green cryptography, and advancements in feasible security measures for evolving interdisciplinary research trends such as computing for health-care IT, cyber-physical embedded systems, big data, and smart buildings/fabrics.
Special Issue on Advances in Mobile Cloud Computing
Submission deadline: March 1, 2015. View PDF.
There is a phenomenal burst of research activity in mobile cloud computing, which extends cloud computing functions, services, and results to the world of future mobile communications applications, and the paradigm of cloud computing and virtualization to mobile networks. Mobile applications demand greater resources and improved interactivity for a better user experience. Resources in cloud computing platforms such as Amazon, Google App Engine, and Microsoft Azure are a natural fit to remedy the lack of local resources in mobile devices. The availability of cloud computing resources on a pay-as-you-go basis, the advances in network virtualization and software-defined networks, and the emergence of advanced wireless networks such as cloud-based radio access networks (C-RANs) create a new space of rich research problems. The objective of this special section is to cover the most recent research and development on technologies for mobile cloud computing, and to offer a venue for industry and academia to showcase their recent progress and potential research directions.
Special Issue on Methods and Techniques for Processing Streaming Big Data in Datacentre Clouds
Submission deadline: June 1, 2015. View PDF.
The Internet of Things (IoT) is a part of the Future Internet and comprises many billions of Internet-connected objects (ICOs), or "things", which can sense, communicate, compute, and potentially actuate, and which have intelligence, multi-modal interfaces, and physical/virtual identities and attributes. ICOs include sensors, RFIDs, social media, actuators (such as machines/equipment fitted with sensors), lab instruments (e.g., a high-energy physics synchrotron), and smart consumer appliances (smart TVs, smartphones, etc.). The IoT vision has recently given rise to IoT big data applications capable of producing billions of data streams, plus tens of years of historical data, to support timely decision making. Some emerging IoT big data applications, e.g., smart energy grids, syndromic bio-surveillance, environmental monitoring, emergency situation awareness, digital agriculture, and smart manufacturing, need to process and manage massive, streaming, and multi-dimensional (from multiple sources) data from geographically distributed data sources.
Despite recent technological advances in data-intensive computing paradigms (e.g., the MapReduce paradigm, workflow technologies, stream processing engines, distributed machine learning frameworks) and datacentre clouds, large-scale, reliable system-level software for IoT big data applications is yet to become commonplace. As new, diverse IoT applications emerge, there is a need for optimized techniques to distribute the processing of the streaming data they produce across multiple datacentres that combine multiple, independent, and geographically distributed software and hardware resources. However, existing data-intensive computing paradigms are limited in several important respects: (i) they can only process data on compute and storage resources within a centralised local area network, e.g., a single cluster within a datacentre, which leads to unsatisfactory Quality of Service (QoS) in terms of timeliness of decision making, resource availability, and data availability as application demands increase; (ii) they do not provide mechanisms to seamlessly integrate data spread across multiple distributed, heterogeneous data sources (ICOs); (iii) they lack support for rapid formulation of intuitive queries over streaming data based on general-purpose concepts, vocabularies, and data discovery; and (iv) they do not provide decision-making support for selecting optimal data mining and machine learning algorithms, data application programming frameworks, and NoSQL database systems based on the nature of the big data (volume, variety, and velocity). Furthermore, adoption of existing datacentre cloud platforms for hosting IoT applications is yet to be realised, due to a lack of techniques and software frameworks that can guarantee QoS under uncertain big data application behaviours (data arrival rate, number of data sources, decision-making urgency, etc.), unpredictable datacentre resource conditions (failures, availability, malfunction, etc.), and capacity demands (bandwidth, memory, storage, and CPU cycles). It is clear that existing data-intensive computing paradigms and related datacentre cloud resource provisioning techniques fall short of the IoT big data challenge or do not exist.
Special Issue on Approximate and Stochastic Computing Circuits, Systems and Algorithms
Submission deadline: September 1, 2015. View PDF.
The last decade has seen renewed interest in non-traditional computing paradigms. Several (re-)emerging paradigms aim to leverage the error resiliency of many systems by relaxing the strict requirement of exactness in computing. This special issue of TETC focuses on two specific lines of research, known as approximate and stochastic computing.
Approximate computing is driven by considerations of energy efficiency. Applications such as multimedia, recognition, and data mining are inherently error-tolerant and do not require perfect accuracy in computation. The results of signal processing algorithms used in image and video processing are ultimately left to human perception. Therefore, strict exactness may not be required and an imprecise result may suffice. In these applications, approximate circuits aim to improve energy-efficiency by maximally exploiting the tolerable loss of accuracy and trading it for energy and area savings.
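A minimal sketch of this accuracy-for-energy trade is a truncation adder that ignores the k least-significant bits of each operand, saving k full-adder cells at the price of a small, bounded error. The choice of k and the error metric below are illustrative, not from any specific design:

```python
def approx_add(a, b, k=4):
    """Truncation adder: treat the low k bits of each operand as zero.
    In hardware this removes k full-adder cells from the carry chain."""
    mask = ~((1 << k) - 1)
    return (a & mask) + (b & mask)

# Each operand loses at most 2**k - 1, so the total error is bounded.
exact = 1000 + 2023
approx = approx_add(1000, 2023)      # 992 + 2016 = 3008
rel_err = (exact - approx) / exact   # about 0.5% for this input
```

For error-tolerant workloads such as image filtering, a sub-percent error of this kind is typically invisible to human perception while the shortened carry chain saves both area and energy.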
Stochastic computing is a paradigm that achieves fault-tolerance and area savings through randomness. Information is represented by random binary bit streams, where the signal value is encoded by the probability of obtaining a one versus a zero. The approach is applicable for data intensive applications such as signal processing where small fluctuations can be tolerated but large errors are catastrophic. In such contexts, it offers savings in computational resources and provides tolerance to errors. This fault tolerance scales gracefully to high error rates. The focus of this special issue will be on the novel design and analysis of approximate and stochastic computing circuits, systems, algorithms and applications.
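The bit-stream encoding described above can be sketched in a few lines: each value in [0, 1] becomes the probability of a '1' in a random stream, and a single AND gate multiplies two independent streams. The stream length and seed below are illustrative choices:

```python
import random

def encode(value, length, rng):
    """Unipolar stochastic encoding: P(bit == 1) = value."""
    return [1 if rng.random() < value else 0 for _ in range(length)]

def decode(stream):
    """Estimate the encoded value as the fraction of 1s in the stream."""
    return sum(stream) / len(stream)

rng = random.Random(42)
N = 100_000
a = encode(0.5, N, rng)
b = encode(0.4, N, rng)
product = [x & y for x, y in zip(a, b)]   # one AND gate per bit pair
# decode(product) is close to 0.5 * 0.4 = 0.2. Flipping a few bits only
# perturbs the estimate slightly, which is the source of the graceful
# fault tolerance mentioned above.
```

The cost of this tiny multiplier is precision: the estimate's accuracy grows only with the square root of the stream length, which is why the paradigm suits applications that tolerate small fluctuations.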
General Call for Papers: IEEE Transactions on Emerging Topics in Computing
Submit your manuscript at www.computer.org/tetc. TETC aggressively seeks proposals for Special Sections and Issues focusing on emerging topics. TETC is an open access journal, which allows for wider dissemination of information. Prospective Guest Editors should contact the TETC EIC Fabrizio Lombardi at email@example.com for further details.
Access recently published TETC articles
Subscribe to the RSS Feed of latest TETC Content Added to the Digital Library.
Sign up for the Transactions Connection Newsletter.