2014 Sixth International Symposium on Parallel Architectures, Algorithms and Programming (PAAP) (2014)
Beijing, China
July 13, 2014 to July 15, 2014
ISSN: 2168-3034
ISBN: 978-1-4799-3844-5
pp: 110-113
ABSTRACT
The Hadoop Distributed File System (HDFS) is the core storage component of the Apache Hadoop project. In HDFS, computation is carried out on the nodes where the relevant data is stored. Hadoop also implements a parallel programming paradigm named Map-Reduce. In this paper, we measure the performance of read and write operations in HDFS for both small and large files. For the performance evaluation, we used a Hadoop cluster with five nodes. The results indicate that HDFS performs well for files larger than the default block size and poorly for files smaller than the default block size.
INDEX TERMS
File systems, Performance evaluation, Google, Writing, Educational institutions, Operating systems, Fault tolerance, Map-Reduce, Hadoop, Distributed File System, Hadoop Distributed File System
CITATION
Talluri Lakshmi Siva Rama Krishna, Thirumalaisamy Ragunathan, Sudheer Kumar Battula, "Performance Evaluation of Read and Write Operations in Hadoop Distributed File System", 2014 Sixth International Symposium on Parallel Architectures, Algorithms and Programming (PAAP), pp. 110-113, 2014, doi:10.1109/PAAP.2014.49