2010 14th Panhellenic Conference on Informatics (PCI 2010)
September 10-12, 2010
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/PCI.2010.47
Hadoop is a fault-tolerant Java framework that supports data distribution and process parallelization on commodity hardware. Leveraging its scalability and the independence of task execution, we combined Hadoop with crawling techniques to implement several applications that process large amounts of data. Our experiments show that Hadoop is a useful and trustworthy tool for building distributed programs with better computational efficiency.
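The approach the abstract describes — crawling pages in parallel and merging the per-page results — follows the map/reduce pattern. A minimal sketch of that pattern in plain Python (not the Hadoop API; the URLs, the `fetch` stand-in, and the word-count workload are illustrative assumptions, not the authors' actual jobs):

```python
from concurrent.futures import ThreadPoolExecutor
from collections import Counter

# Hypothetical stand-in for a crawler fetch; a real job would issue HTTP requests.
def fetch(url):
    pages = {
        "http://example.org/a": "hadoop scales hadoop",
        "http://example.org/b": "crawling scales",
    }
    return pages[url]

# Map phase: each worker fetches and tokenizes one page independently,
# which is what makes the tasks trivially parallelizable.
def map_page(url):
    return Counter(fetch(url).split())

# Reduce phase: merge the independent per-page counts into one aggregate.
def crawl(urls, workers=4):
    totals = Counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for counts in pool.map(map_page, urls):
            totals.update(counts)
    return totals

if __name__ == "__main__":
    print(crawl(["http://example.org/a", "http://example.org/b"]))
```

In Hadoop proper, `map_page` and the merging loop would become `Mapper` and `Reducer` classes, and the framework would supply the scheduling, data distribution, and fault tolerance the abstract credits it with.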
parallelization, distributed systems, Hadoop, commodity hardware, live data collection
Konstantinos Chalkias, Kyriacos Talattinis, George Stephanides, Aikaterini Sidiropoulou, "Parallel Collection of Live Data Using Hadoop," in Proc. 2010 14th Panhellenic Conference on Informatics (PCI), pp. 66-71, 2010, doi:10.1109/PCI.2010.47