IEEE Transactions on Emerging Topics in Computing

Covering aspects of computer science, computing technology, and computing applications not currently covered by other IEEE Computer Society Transactions

From the January-March 2015 issue

MMCD: Cooperative Downloading for Highway VANETs

By Kaoru Ota, Mianxiong Dong, Shan Chang, and Hongzi Zhu

Advances in low-power wireless communications and microelectronics are having a great impact on transportation systems, and the pervasive deployment of roadside units (RSUs) promises to provide drive-thru Internet to vehicular users anytime and anywhere. Downloading data packets from an RSU, however, is not always reliable because of the high mobility of vehicles and the high contention among vehicular users. Using intervehicle communication, cooperative downloading can maximize the amount of data packets downloaded per user request. In this paper, we focus on effective data downloading for real-time applications (e.g., video streaming and online gaming) where each user request is prioritized by its delivery deadline. We propose a cooperative downloading algorithm, namely max-throughput and min-delay cooperative downloading (MMCD), which minimizes the average delivery delay of each user request while maximizing the amount of data packets downloaded from the RSU. The performance of MMCD is evaluated through extensive simulations, and the results demonstrate that our algorithm reduces mean delivery delay while achieving downloading throughput as high as that of a state-of-the-art method, even when vehicles compete intensely for access to the RSU in a conventional highway scenario.
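The abstract does not spell out MMCD's scheduling rule, so the following is only a loose, hypothetical sketch of the core idea of deadline-prioritized downloading: among vehicles in range with pending requests, the RSU serves the request whose delivery deadline is most urgent and still achievable. All names (`pick_next_request`, the request fields) are illustrative, not taken from the paper.

```python
def pick_next_request(requests, now):
    """Return the pending request with the earliest still-meetable
    deadline, or None. Each request is a dict with a 'deadline' (time by
    which delivery must finish) and 'remaining' (packets left to send)."""
    feasible = [r for r in requests
                if r["remaining"] > 0 and r["deadline"] > now]
    if not feasible:
        return None
    # Earliest-deadline-first among feasible requests.
    return min(feasible, key=lambda r: r["deadline"])

reqs = [
    {"id": "v1", "deadline": 5.0, "remaining": 3},
    {"id": "v2", "deadline": 2.0, "remaining": 1},
    {"id": "v3", "deadline": 9.0, "remaining": 4},
]
print(pick_next_request(reqs, now=0.0)["id"])  # v2: most urgent deadline
```

The actual algorithm additionally coordinates relaying through neighboring vehicles to keep RSU throughput high; this sketch only illustrates the deadline-prioritization half of the objective.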

View the PDF of this article | View this issue in the digital library

Editorials and Announcements


  • Beginning in 2015, IEEE Transactions on Emerging Topics in Computing has moved to our hybrid open access publishing model. Authors can now choose either traditional manuscript submission or open access (author-pays OA) manuscript submission. Learn more.

  • EICs Undergoing Reappointment for 2016-2017 Terms: IEEE Computer Society publications have editors in chief who are currently standing for reappointment to a second two-year term. The Publications Board invites comments on the tenures of the individual editors. Please click here for more details.

  • A Welcome Letter from Thomas M. Conte (PDF)

  • We are pleased to announce that Fabrizio Lombardi, a professor at Northeastern University, Boston, has been appointed as the inaugural EIC of IEEE Transactions on Emerging Topics in Computing, effective immediately. Dr. Lombardi is an IEEE Fellow, a member of the Computer Society Board of Governors, and a past EIC and Associate EIC of IEEE Transactions on Computers.


Reviewers list

Author Index

Call for Papers

Technical Tracks

View PDF.

IEEE Transactions on Emerging Topics in Computing (TETC) seeks original manuscripts for submission under its Technical Tracks. Within a track, the technical content of a submitted manuscript must be of an emerging nature and fall within the scope and competencies of the Computer Society. Manuscripts not meeting these requirements will be administratively rejected. The topics of interest for the Technical Tracks are as follows:

  • Enterprise Computing Systems
  • Computational Networks
  • Hardware and Embedded System Security
  • Educational Computing
  • High Performance Computing
  • Next Generation Wireless Computing Systems

Submitted articles must describe original research that has not been published and is not currently under review by other journals or conferences. Extended conference papers should be identified during the submission process and must contain substantial novel technical content; all submitted manuscripts will be screened with a similarity-checking tool. As an author, you are responsible for understanding and adhering to our submission guidelines, available at the IEEE Computer Society web site; please read them thoroughly before submitting your manuscript.

Please submit your paper to Manuscript Central and select the "Technical Track" option in the "Manuscript Type" drop-down menu.

Please address all other correspondence regarding this Call for Papers to Fabrizio Lombardi, EIC of IEEE TETC.

Special Issue on Big Data Benchmarks, Performance Optimization, and Emerging Hardware

Submission deadline: June 1, 2015. View PDF.

Big data are emerging as a strategic asset of nations and organizations, and there is a pressing need to generate value from them. However, the sheer volume of big data demands significant storage capacity, transmission bandwidth, computation, and power. Systems of unprecedented scale are expected to resolve the problems posed by the variety and daunting volume of big data. Nevertheless, without big data benchmarks it is very difficult for big data owners to decide which system best meets their specific requirements, and they also face the challenge of optimizing systems for specific or even comprehensive workloads. Meanwhile, researchers are working on innovative data management systems, hardware architectures, and operating systems to improve performance in dealing with big data. The focus of this special issue is architecture and system support for big data systems.

Special Issue on Methods and Techniques for Processing Streaming Big Data in Datacentre Clouds

Submission deadline: June 1, 2015. View PDF.

The Internet of Things (IoT) is part of the Future Internet and comprises many billions of Internet-connected objects (ICOs), or 'things', which can sense, communicate, compute, and potentially actuate, and which can have intelligence, multi-modal interfaces, and physical/virtual identities and attributes. ICOs include sensors, RFIDs, social media, actuators (such as machines/equipment fitted with sensors), lab instruments (e.g., a high-energy physics synchrotron), and smart consumer appliances (smart TVs, smartphones, etc.). The IoT vision has recently given rise to IoT big data applications capable of producing billions of data streams, plus tens of years of historical data, to support timely decision making. Some emerging IoT big data applications, e.g., smart energy grids, syndromic bio-surveillance, environmental monitoring, emergency situation awareness, digital agriculture, and smart manufacturing, need to process and manage massive, streaming, and multi-dimensional (from multiple sources) data from geographically distributed data sources.

Despite recent technological advances in data-intensive computing paradigms (e.g., the MapReduce paradigm, workflow technologies, stream processing engines, and distributed machine learning frameworks) and datacentre clouds, large-scale, reliable system-level software for IoT big data applications has yet to become commonplace. As new and diverse IoT applications emerge, optimized techniques are needed to distribute the processing of the streaming data they produce across multiple datacentres that combine multiple, independent, and geographically distributed software and hardware resources. However, existing data-intensive computing paradigms are limited in several important ways: (i) they can only process data on compute and storage resources within a centralised local area network, e.g., a single cluster within a datacentre, which leads to unsatisfactory Quality of Service (QoS) in terms of timeliness of decision making, resource availability, data availability, etc., as application demands increase; (ii) they do not provide mechanisms to seamlessly integrate data spread across multiple distributed heterogeneous data sources (ICOs); (iii) they lack support for rapid formulation of intuitive queries over streaming data based on general-purpose concepts, vocabularies, and data discovery; and (iv) they do not provide decision-making support for selecting optimal data mining and machine learning algorithms, data application programming frameworks, and NoSQL database systems based on the nature of the big data (volume, variety, and velocity). Furthermore, the adoption of existing datacentre cloud platforms for hosting IoT applications has yet to be realised due to the lack of techniques and software frameworks that can guarantee QoS under uncertain big data application behaviours (data arrival rate, number of data sources, decision-making urgency, etc.), unpredictable datacentre resource conditions (failures, availability, malfunction, etc.), and capacity demands (bandwidth, memory, storage, and CPU cycles). It is clear that existing data-intensive computing paradigms and related datacentre cloud resource provisioning techniques either fall short of the IoT big data challenge or do not yet exist.

Special Issue on Approximate and Stochastic Computing Circuits, Systems and Algorithms

Submission deadline: September 1, 2015. View PDF.

The last decade has seen renewed interest in non-traditional computing paradigms. Several (re-)emerging paradigms aim to leverage the error resiliency of many systems by relaxing the strict requirement of exactness in computing. This special issue of TETC focuses on two specific lines of research, known as approximate and stochastic computing.

Approximate computing is driven by considerations of energy efficiency. Applications such as multimedia, recognition, and data mining are inherently error-tolerant and do not require perfect accuracy in computation. The results of signal processing algorithms used in image and video processing are ultimately judged by human perception, so strict exactness may not be required and an imprecise result may suffice. In these applications, approximate circuits exploit the tolerable loss of accuracy, trading it for energy and area savings.
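As a minimal illustration of this accuracy-for-area trade (a generic sketch, not a design from the issue), a lower-part-OR adder computes the upper bits exactly but replaces the carry chain of the k least-significant bits with a cheap bitwise OR, eliminating the low-order carry logic at the cost of a small error:

```python
def approx_add(a, b, k):
    """Lower-part-OR approximate adder: exact addition on the upper bits,
    a single OR gate per bit on the k least-significant bits (no carries)."""
    low_mask = (1 << k) - 1
    high = (a & ~low_mask) + (b & ~low_mask)  # exact add on upper bits
    low = (a | b) & low_mask                  # cheap OR on lower bits
    return high | low

print(approx_add(13, 11, 2))  # 23 (exact sum is 24)
print(approx_add(13, 11, 0))  # 24: k = 0 degenerates to an exact adder
```

The parameter k tunes the trade-off: larger k removes more carry hardware but admits a larger worst-case error.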

Stochastic computing is a paradigm that achieves fault tolerance and area savings through randomness. Information is represented by random binary bit streams, where the signal value is encoded as the probability of observing a one versus a zero. The approach is suited to data-intensive applications such as signal processing, where small fluctuations can be tolerated but large errors are catastrophic. In such contexts, it offers savings in computational resources and provides tolerance to errors, and this fault tolerance scales gracefully to high error rates. The focus of this special issue is the novel design and analysis of approximate and stochastic computing circuits, systems, algorithms, and applications.
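As a concrete illustration of the bit-stream encoding described above (a textbook sketch, not tied to any particular submission), the following encodes two values in [0, 1] as random Bernoulli bit streams and multiplies them with a single AND operation per bit pair, the classic stochastic-computing multiplier:

```python
import random

def to_stream(p, n, rng):
    # Encode a value p in [0, 1] as a bit stream of length n:
    # each bit is 1 independently with probability p.
    return [1 if rng.random() < p else 0 for _ in range(n)]

def stream_value(bits):
    # Decode: the represented value is the fraction of ones.
    return sum(bits) / len(bits)

rng = random.Random(42)
n = 100_000
a = to_stream(0.8, n, rng)
b = to_stream(0.5, n, rng)
# Multiplication reduces to one AND gate per bit pair.
prod = [x & y for x, y in zip(a, b)]
print(round(stream_value(prod), 2))  # close to 0.8 * 0.5 = 0.4
```

Flipping a few bits of `prod` perturbs the decoded value only slightly, which is the graceful error tolerance the paradigm is known for; the cost is long streams (precision improves only with stream length).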

Special Issue/Section on Low-Power Image Recognition

Submission deadline: September 1, 2015. View PDF.

Digital images have become an integral part of everyday life. It is estimated that 10 million images are uploaded to social networks each hour and 100 hours of video are uploaded for sharing each minute. Sophisticated image/video processing has fundamentally changed how people interact. For example, automatic classification or tagging can mediate how photographs are disseminated to friends. Many of today's images are captured with smartphones, whose cameras serve a wide range of imaging applications, from high-fidelity location estimation to posture analysis. Image processing is computationally intense and can consume significant amounts of energy on mobile systems. This special issue focuses on the intersection of image recognition and energy conservation. Papers should describe energy-efficient systems that perform object detection and recognition in images.

Special Issue/Section on Defect and Fault Tolerance in VLSI and Nanotechnology Systems

Submission deadline: December 1, 2015. View PDF.

The continuous scaling of CMOS devices, together with the increased interest in emerging technologies, makes topics related to defect and fault tolerance in VLSI and nanotechnology systems ever more important. All aspects of design, manufacturing, test, reliability, and availability that are affected by defects during manufacturing and by faults during system operation are of interest. IEEE Transactions on Emerging Topics in Computing (TETC) seeks original manuscripts for a Special Section on Defect and Fault Tolerance in VLSI Systems scheduled to appear in the December 2016 issue.

Special Issue/Section on Emerging Computational Paradigms and Architectures for Multicore Platforms

Submission deadline: December 1, 2015. View PDF.

Multicore and manycore embedded architectures are emerging as computational platforms in many application domains, ranging from high-performance computing to deeply embedded systems. The new generations of parallel systems, both homogeneous and heterogeneous, developed on top of these architectures represent what is called the emerging computing continuum paradigm. A successful evolution of this paradigm, however, poses various challenges from both architectural and programming points of view. The design of embedded multicores/manycores requires innovative hardware specification and modeling strategies, as well as low-power simulation, analysis, and testing. New synthesis approaches, possibly including reliability and variability compensation, are key issues at the coming technology nodes. Furthermore, thermal-aware design is mandatory to manage power density issues. The design of effective interconnection networks is a key enabling technology for the manycore paradigm, and new solutions such as photonic and RF NoC architectures are emerging in this regard. At the same time, these new interconnection systems have to be compatible with innovative 3D VLSI packaging technologies involving vertical interconnections in 3D and stacked ICs. These design solutions enable the integration of more and more IPs, resulting in heterogeneous platforms where reconfigurable components, multi-DSP engines, and GPUs collaborate to meet the target performance and energy requirements. Along with design and architectural innovations, many challenges must be addressed to provide an effective programming environment for manycore systems. These challenges call for innovative solutions at various levels of the programming toolchain, including compilers, programming models, runtime management, and operating systems. Holistic and cross-layer programming approaches have to be pursued, considering not only performance but also energy, dependability, and real-time requirements. Finally, on the application side, multicore/manycore embedded systems are pushing developments in domains such as biomedical, health care, the Internet of Things, smart mobility, and aviation.

This special issue/section solicits work on emerging computing technology aspects related, but not limited, to the topics mentioned above. Contributions must be original and highlight emerging computing technologies in the design, testing, and programming of multicore and manycore systems.

Special Issue/Section on New Paradigms in Ad Hoc, Sensor and Mesh Networks, From Theory to Practice

Submission deadline: December 1, 2015. View PDF.

Ad hoc, sensor, and mesh networks have attracted significant attention from academia and industry over the past decade. In recent years, however, new paradigms have emerged due to the large increase in the number and processing power of smartphones and other portable devices. Furthermore, new applications and emerging technologies have created new research challenges for ad hoc networks. The emergence of new operational paradigms such as the Smart Home and Smart City, Body Area Networks and E-Health, Device-to-Device Communications, Machine-to-Machine Communications, Software-Defined Networks, the Internet of Things, RFID, and Small Cells requires substantial changes to traditional ad hoc networking. The focus of this special issue is on novel applications, protocols, and architectures; non-traditional measurement, modeling, analysis, and evaluation; prototype systems; and experiments in ad hoc, sensor, and mesh networks.

Access recently published TETC Articles

Subscribe to the RSS feed of the latest TETC content added to the Digital Library.

Sign up for the Transactions Connection newsletter.