Issue No. 11, November 2011 (vol. 33), pp. 2131-2146
Alexander Toet, TNO Human Factors, Soesterberg
ABSTRACT
The predictions of 13 computational bottom-up saliency models and a newly introduced Multiscale Contrast Conspicuity (MCC) metric are compared with human visual conspicuity measurements. The agreement between human visual conspicuity estimates and model saliency predictions is quantified through their rank order correlation. The maximum of the computational saliency value over the target support area correlates most strongly with visual conspicuity for 12 of the 13 models. A simple multiscale contrast model and the MCC metric both yield the largest correlation with human visual target conspicuity (>0.84). Local image saliency largely determines human visual inspection and interpretation of static and dynamic scenes. Computational saliency models therefore have a wide range of important applications, such as adaptive content delivery, region-of-interest-based image compression, video summarization, progressive image transmission, image segmentation, image quality assessment, object recognition, and content-aware image scaling. However, current bottom-up saliency models do not incorporate important visual effects such as crowding and lateral interaction. Additional knowledge about the exact nature of the interactions between the mechanisms mediating human visual saliency is required to develop these models further. The MCC metric and its associated psychophysical saliency measurement procedure are useful tools for systematically investigating the relative contribution of different feature dimensions to overall visual target saliency.
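
A minimal sketch, in Python, of the evaluation pipeline the abstract describes: compute a bottom-up saliency map (here a toy multiscale contrast model), summarize model saliency per target as the maximum over the target support area, and compute the Spearman rank order correlation between those summaries and human conspicuity estimates. The function names, window sizes, target masks, and conspicuity values below are illustrative assumptions, not the paper's actual models or data.

import numpy as np
from scipy.ndimage import uniform_filter
from scipy.stats import spearmanr

def multiscale_contrast(image, scales=(3, 7, 15, 31)):
    """Toy multiscale contrast saliency: absolute difference between each pixel
    and its local mean, averaged over several window sizes (assumed scales)."""
    img = image.astype(float)
    saliency = np.zeros_like(img)
    for size in scales:
        saliency += np.abs(img - uniform_filter(img, size=size))
    return saliency / len(scales)

def max_saliency_over_target(saliency_map, target_masks):
    """Summarize model saliency for each target as the maximum saliency value
    over that target's support area (a boolean mask)."""
    return np.array([saliency_map[mask].max() for mask in target_masks])

# Synthetic example: one grayscale scene, four rectangular target supports, and
# made-up human conspicuity estimates for the same four targets.
rng = np.random.default_rng(0)
scene = rng.random((128, 128))
target_masks = [np.zeros_like(scene, dtype=bool) for _ in range(4)]
target_masks[0][20:30, 20:30] = True
target_masks[1][60:75, 40:55] = True
target_masks[2][90:110, 90:100] = True
target_masks[3][40:50, 100:115] = True
human_conspicuity = np.array([2.1, 4.8, 3.3, 1.6])  # illustrative ratings only

model_scores = max_saliency_over_target(multiscale_contrast(scene), target_masks)
rho, p_value = spearmanr(model_scores, human_conspicuity)
print(f"Spearman rank order correlation: {rho:.2f} (p = {p_value:.3f})")
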
INDEX TERMS
Saliency, image analysis, visual search.
CITATION
Alexander Toet, "Computational versus Psychophysical Bottom-Up Image Saliency: A Comparative Evaluation Study", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol.33, no. 11, pp. 2131-2146, November 2011, doi:10.1109/TPAMI.2011.53