2016 IEEE 29th International Symposium on Computer-Based Medical Systems (CBMS) (2016)
Belfast and Dublin, Ireland
June 20, 2016 to June 24, 2016
ISSN: 2372-9198
ISBN: 978-1-4673-9037-8
pp: 235-240
ABSTRACT
This paper proposes a novel model, called Similarity Based on Visual Attention Features (SimVisual), to enhance the similarity analysis between images by considering features extracted from salient regions mapped by visual attention models. Visual attention models have proven very useful for encoding perceptual semantic information of image content. Thus, aggregating saliency features into the final image representation is a powerful asset for enhancing the similarity analysis between images while increasing accuracy in retrieval tasks. The goal of SimVisual is to combine different saliency models with traditional image descriptors, aiming to increase the descriptive power of these descriptors without modifying the original algorithms. We performed experiments using a large dataset composed of 32 different biomedical image categories, and the results show that SimVisual boosts retrieval accuracy by up to 13% with simple image descriptors, such as Color Histograms. The experiments show that SimVisual is a valuable approach to increasing the efficacy of content-based image retrieval systems without user interaction.
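The abstract describes weighting conventional descriptors by saliency rather than replacing them. As a rough illustration of that idea (a sketch of saliency-weighted feature extraction, not the authors' exact formulation), the snippet below computes a color histogram in which each pixel's contribution is weighted by a saliency map produced by any visual attention model; the function name and the uniform quantization scheme are assumptions for illustration only.

```python
import numpy as np

def saliency_weighted_histogram(image, saliency, bins_per_channel=8):
    """Color histogram where each pixel's vote is weighted by its saliency.

    image    : H x W x 3 uint8 RGB array.
    saliency : H x W float array in [0, 1] from any visual attention model.
    Returns an L1-normalized feature vector of length bins_per_channel ** 3.
    """
    # Uniformly quantize each channel into bins_per_channel levels.
    q = (image.astype(np.int32) * bins_per_channel) // 256
    # Map each pixel's (r, g, b) bin triple to a single histogram bin index.
    idx = (q[..., 0] * bins_per_channel + q[..., 1]) * bins_per_channel + q[..., 2]
    # Accumulate saliency weights instead of raw pixel counts.
    hist = np.bincount(idx.ravel(),
                       weights=saliency.ravel(),
                       minlength=bins_per_channel ** 3)
    total = hist.sum()
    return hist / total if total > 0 else hist
```

Retrieval would then compare these vectors with a standard distance (e.g., L1 or Euclidean); the saliency map itself can come from any of the attention models the paper combines with traditional descriptors.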
INDEX TERMS
Visualization, Feature extraction, Computational modeling, Biomedical imaging, Image color analysis, Biological system modeling, Image retrieval
CITATION

G. V. Pedrosa and A. J. Traina, "Encoding Visual Attention Features for Effective Biomedical Images Retrieval," 2016 IEEE 29th International Symposium on Computer-Based Medical Systems (CBMS), Belfast and Dublin, Ireland, 2016, pp. 235-240.
doi:10.1109/CBMS.2016.22