
Derek C. Stanford and Adrian E. Raftery, "Approximate Bayes Factors for Image Segmentation: The Pseudolikelihood Information Criterion (PLIC)," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 11, pp. 1517-1520, November 2002.
Abstract—We propose a method for choosing the number of colors or true gray levels in an image; this allows fully automatic segmentation of images. Our underlying probability model is a hidden Markov random field. Each number of colors considered is viewed as corresponding to a statistical model for the image, and the resulting models are compared via approximate Bayes factors. The Bayes factors are approximated using BIC (Bayesian Information Criterion), where the required maximized likelihood is approximated by the Qian-Titterington pseudolikelihood. We call the resulting criterion PLIC (Pseudolikelihood Information Criterion). We also discuss a simpler approximation, MMIC (Marginal Mixture Information Criterion), which is based only on the marginal distribution of pixel values. This turns out to be useful for initialization, and it also has moderately good performance by itself when the amount of spatial dependence in an image is low. We apply PLIC and MMIC to a medical image segmentation problem.
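The criterion described above follows the standard BIC template, with the intractable maximized likelihood of the hidden Markov random field replaced by a pseudolikelihood. The sketch below uses generic notation (a field of labels $x$ with parameters $\hat{\theta}_K$ over $n$ pixels), not the paper's exact symbols or constants; it is meant only to illustrate the form of such a criterion, where the pseudolikelihood is a product of full conditional distributions over pixel neighborhoods:

```latex
% BIC for a candidate model M_K with d_K free parameters and n pixels:
%   \mathrm{BIC}_K = 2 \log \hat{L}_K - d_K \log n .
% A pseudolikelihood criterion replaces \hat{L}_K with a product of
% full conditionals, giving (illustrative notation):
\[
  \mathrm{PLIC}_K
    = 2 \sum_{i=1}^{n}
        \log p\!\left( x_i \,\middle|\, x_{\partial i},\, \hat{\theta}_K \right)
      \;-\; d_K \log n ,
\]
% where \partial i indexes the neighbors of pixel i under the Markov
% random field's neighborhood structure. The number of colors K is then
% chosen to maximize \mathrm{PLIC}_K over the candidate models.
```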