Issue No. 08, August 2011 (vol. 23)
ISSN: 1041-4347
pp: 1169-1181
Rene Mueller , ETH Zurich, Zurich
Jens Teubner , ETH Zurich, Zurich
Gustavo Alonso , ETH Zurich, Zurich
ABSTRACT
Computing frequent items is an important problem in its own right and as a subroutine in several data mining algorithms. In this paper, we explore how to accelerate the computation of frequent items using field-programmable gate arrays (FPGAs) with a threefold goal: increase performance over existing solutions, reduce energy consumption compared to CPU-based systems, and explore the design space in detail, since the constraints on FPGAs are very different from those of traditional software-based systems. We discuss three design alternatives, each exploiting different FPGA features and each providing different performance/scalability trade-offs. An important result of the paper is to demonstrate how the inherent massive parallelism of FPGAs can improve the performance of existing algorithms, but only after a fundamental redesign of the algorithms. Our experimental results show that, for example, the pipelined solution we introduce can sustain a throughput of more than 100 million tuples per second (four times the best available results to date) by making use of techniques that are not available to CPU-based solutions. Moreover, and unlike in software approaches, this high throughput is independent of the skew of the Zipf distribution of the input and comes at a far lower energy cost.
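
The abstract does not spell out which frequent-item algorithm serves as the software baseline, so the following is only a hedged illustration of the problem itself, not of the paper's FPGA designs: a minimal C++ sketch of a Misra-Gries-style counter summary, in which the function name frequent_items, the parameter k, and the demo stream are assumptions of this sketch. Counter-based summaries of this kind process one item at a time, which is the sequential structure the paper argues must be fundamentally redesigned before FPGA parallelism pays off.

#include <cstddef>   // std::size_t
#include <cstdint>   // std::uint32_t, std::uint64_t
#include <iostream>
#include <unordered_map>
#include <vector>

// Hedged sketch: a plain software Misra-Gries counter summary for the
// frequent-items problem. All names here (frequent_items, k, the demo
// stream) are illustrative assumptions; the paper's FPGA circuits are
// organized very differently and are not reproduced here.
std::unordered_map<std::uint32_t, std::uint64_t>
frequent_items(const std::vector<std::uint32_t>& stream, std::size_t k) {
    std::unordered_map<std::uint32_t, std::uint64_t> counters;  // at most k monitored items
    for (std::uint32_t item : stream) {
        auto it = counters.find(item);
        if (it != counters.end()) {
            ++it->second;                  // item already monitored: bump its counter
        } else if (counters.size() < k) {
            counters.emplace(item, 1);     // free slot: start monitoring the new item
        } else {
            // No free slot: decrement every counter and evict those that hit zero.
            for (auto c = counters.begin(); c != counters.end();) {
                if (--c->second == 0) c = counters.erase(c);
                else ++c;
            }
        }
    }
    // Any item occurring more than n/(k+1) times in a stream of length n is
    // guaranteed to remain in this summary (counts are lower-bound estimates).
    return counters;
}

int main() {
    const std::vector<std::uint32_t> stream = {1, 2, 1, 3, 1, 2, 1, 4, 1, 1};
    for (const auto& [item, count] : frequent_items(stream, 2))
        std::cout << "candidate item " << item << ", estimated count " << count << '\n';
    return 0;
}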
INDEX TERMS
Data mining, reconfigurable hardware, parallelism and concurrency.
CITATION
Rene Mueller, Jens Teubner, and Gustavo Alonso, "Frequent Item Computation on a Chip," IEEE Transactions on Knowledge and Data Engineering, vol. 23, no. 8, pp. 1169-1181, August 2011, doi:10.1109/TKDE.2010.216