Computer Science and Information Engineering, World Congress on (2009)
Los Angeles, California USA
Mar. 31, 2009 to Apr. 2, 2009
ISBN: 978-0-7695-3507-4
pp: 630-634
Combined syntactical categories and sequence alignment algorithms are implemented and used to weed out duplicate and near-duplicate web pages from search engine results. The syntactical structure of each page, manifested as POS tags, is produced by a POS tagger that converts part of a webpage's text into a string of tags. The resulting string is then subjected to Longest Common Sequence (LCS) techniques, as is commonly done in computational biology, to detect duplicate and near-duplicate webpages. Tagging and alignment are applied to a set of sentences extracted from each web page as its representative, with the query keywords serving as the basis for sentence extraction. Experimental results show that the combined approach provides a useful similarity and re-ranking measure, and that it can detect duplications in results returned by search engines such as Google with reasonable efficiency. The similarity measurements can further serve as a basis for text analysis of the search results, enabling detection of duplicates and near-duplicates and clustering of documents in general.
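The abstract's pipeline (tag the text, then score similarity by aligning the tag strings) can be sketched in a few lines of Python. The snippet below is a minimal illustration of the idea, not the authors' implementation: it assumes NLTK's off-the-shelf tagger, uses the standard longest-common-subsequence dynamic program for alignment, and normalises by the longer tag sequence, all of which are assumptions rather than details taken from the paper.

```python
# Illustrative sketch only: POS-tag two texts, then score their similarity
# by the LCS of the resulting tag sequences.
# Requires NLTK tokenizer/tagger data, e.g.:
#   nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')
import nltk


def pos_tag_sequence(text):
    """Convert a text into its sequence of POS tags, e.g. ['DT', 'NN', 'VBD', ...]."""
    tokens = nltk.word_tokenize(text)
    return [tag for _, tag in nltk.pos_tag(tokens)]


def lcs_length(a, b):
    """Classic dynamic program for the longest common subsequence length."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]


def tag_similarity(text_a, text_b):
    """LCS length normalised by the longer tag sequence (normalisation is an assumption)."""
    tags_a, tags_b = pos_tag_sequence(text_a), pos_tag_sequence(text_b)
    if not tags_a or not tags_b:
        return 0.0
    return lcs_length(tags_a, tags_b) / max(len(tags_a), len(tags_b))


# Two near-duplicate sentences yield a score close to 1.0.
print(tag_similarity("The cat sat on the mat.", "The cat sat on a mat."))
```

In the paper's setting, the two texts would be query-keyword-based sentence extracts from candidate result pages, and pairs scoring above a similarity threshold would be flagged as duplicates or near-duplicates during re-ranking.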
Part-of-speech, POS, Longest Common Sequence, LCS, Copy Detection, Duplicate, Search Engine
Mohamed Elhadi, Amjad Al-Tobi, "Webpage Duplicate Detection Using Combined POS and Sequence Alignment Algorithm", Computer Science and Information Engineering, World Congress on, vol. 1, pp. 630-634, 2009, doi:10.1109/CSIE.2009.771