
Maya R. Gupta, Robert M. Gray, Richard A. Olshen, "Nonparametric Supervised Learning by Linear Interpolation with Maximum Entropy," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 5, pp. 766-781, May 2006, doi: 10.1109/TPAMI.2006.101.
[1] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning. New York: Springer-Verlag, 2001.
[2] D. Loftsgaarden and C. Quesenberry, “A Nonparametric Estimate of a Multivariate Density Function,” Annals Math. Statistics, vol. 36, pp. 1049-1051, 1965.
[3] E. Fix and J.L. Hodges, “Discriminatory Analysis, Nonparametric Discrimination: Consistency Properties,” Technical Report 4, US Air Force School of Aviation Medicine, Tex., 1951.
[4] Y.P. Mack and M. Rosenblatt, “Multivariate k-Nearest Neighbor Density Estimates,” J. Multivariate Analysis, vol. 9, pp. 1-15, 1979.
[5] D.W. Scott, Multivariate Density Estimation: Theory, Practice, and Visualization. New York: Wiley, 1992.
[6] C.J. Stone, “Consistent Nonparametric Regression,” The Annals of Statistics, vol. 5, no. 4, pp. 595-645, 1977.
[7] M.P. Friedlander and M.R. Gupta, “On Minimizing Distortion and Relative Entropy,” IEEE Trans. Information Theory, vol. 52, no. 1, pp. 238-245, 2005.
[8] S. Kullback, Information Theory and Statistics. New York: Wiley, 1959.
[9] www.stanford.edu/dept/msande/facultysaunders , 2002.
[10] T.M. Cover and P.E. Hart, “Nearest Neighbor Pattern Classification,” IEEE Trans. Information Theory, vol. 13, pp. 21-27, 1967.
[11] T.M. Cover, “Estimation by the Nearest-Neighbor Rule,” IEEE Trans. Information Theory, vol. 14, no. 1, pp. 50-55, 1968.
[12] R.M. Gray, Entropy and Information Theory. New York: Springer-Verlag, 1990.
[13] E.T. Jaynes, “On the Rationale of Maximum Entropy Methods,” Proc. IEEE, vol. 70, no. 9, pp. 939-952, 1982.
[14] T. Cover and J. Thomas, Elements of Information Theory. John Wiley and Sons, 1991.
[15] N. Wu, The Maximum Entropy Method. Berlin: Springer-Verlag, 1997.
[16] T.C. Hesterberg, “The Bootstrap and Empirical Likelihood,” Proc. Section Statistical Computing, Am. Statistical Assoc., pp. 34-36, 1997.
[17] T. Kohonen, G. Barna, and R. Chrisley, “Statistical Pattern Recognition with Neural Networks: Benchmarking Studies,” IEEE Int'l Conf. Neural Networks, vol. 1, pp. 61-68, 1988.
[18] R.M. Gray and R.A. Olshen, “Vector Quantization and Density Estimation,” Proc. Compression and Complexity of Sequences Conf., pp. 172-193, 1997.
[19] L. Devroye, L. Gyorfi, and G. Lugosi, A Probabilistic Theory of Pattern Recognition. New York: Springer-Verlag, 1996.
[20] J. Rice, “Boundary Modification for Kernel Regression,” Comm. Statistics, Theory, and Methods, vol. 13, pp. 893-900, 1984.
[21] T. Hastie and C. Loader, “Local Regression: Automatic Kernel Carpentry,” Statistical Science, vol. 8, no. 2, pp. 120-143, 1993.
[22] B. Ripley, Pattern Recognition and Neural Networks. Cambridge: Cambridge Univ. Press, 2001.
[23] J.H. Friedman, “On Bias, Variance, 0/1 Loss, and the Curse-of-Dimensionality,” Data Mining and Knowledge Discovery, vol. 1, no. 1, pp. 55-77, 1997.
[24] D.B. O'Brien, M.R. Gupta, and R.M. Gray, “Analysis and Classification of Internal Pipeline Images,” Proc. IEEE Int'l Conf. Image Processing, 2003.
[25] C.S. Peirce, The Philosophy of Peirce: Selected Writings. Jarrold and Sons Limited, 1956.
[26] W. Kneale, Probability and Induction. Oxford: Clarendon Press, 1949.
[27] H. Kang, Color Technology for Electronic Imaging Devices. SPIE Press, 1997.
[28] Matlab version 6.1, The MathWorks, www.matlab.com, 2002.
[29] W.H. Press, W.T. Vetterling, S.A. Teukolsky, and B.P. Flannery, Numerical Recipes in C, second ed. Cambridge Univ. Press, 1999.
[30] G. Lugosi and K. Zeger, “Concept Learning Using Complexity Regularization,” IEEE Trans. Information Theory, vol. 42, pp. 48-54, 1996.
[31] A.R. Barron and T. Cover, “Minimum Complexity Density Estimation,” IEEE Trans. Information Theory, vol. 37, pp. 1034-1054, 1991.
[32] A. Najmi, “Data Compression, Model Selection and Statistical Inference,” PhD dissertation, Stanford Univ., Stanford, Calif., 1999.
[33] C.J. Stone, “Optimal Global Rates of Convergence for Nonparametric Regression,” Annals of Statistics, vol. 10, pp. 1040-1053, 1982.
[34] P.J. Bickel and L. Breiman, “Sums of Functions of Nearest Neighbor Distances, Moment Bounds, Limit Theorems and a Goodness of Fit Test,” Annals of Probability, vol. 11, no. 1, pp. 185-214, 1983.
[35] L. Breiman, J. Friedman, R.A. Olshen, and C.J. Stone, Classification and Regression Trees. Chapman and Hall, 1984.
[36] L. Gordon and R.A. Olshen, “Almost Surely Consistent Nonparametric Regression from Recursive Partitioning Schemes,” J. Multivariate Analysis, vol. 15, pp. 146-163, 1984.
[37] P. Bickel and R.R. Bahadur, “Substitution in Conditional Expectation,” Annals of Math. Statistics, vol. 39, pp. 442-456, 1968.
[38] M. de Guzmán, Differentiation of Integrals in $R^n$. Berlin: Springer-Verlag, 1975.
[39] A. Garsia, Topics in Almost Everywhere Convergence. Chicago: Markham, 1970.
[40] D. Pollard, Convergence of Stochastic Processes. New York: Springer-Verlag, 1984.