
N. S. V. Rao, E. M. Oblow, and C. W. Glover, "Learning Separations by Boolean Combinations of Half-Spaces," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 7, pp. 765-768, July 1994.
Given two subsets S_1 and S_2 (not necessarily finite) of R^d, separable by a Boolean combination of half-spaces, the authors consider the problem of learning (in the sense of Valiant) the separation function from a finite set of examples; that is, they produce with high probability a function close to the actual separating function. Their solution consists of a system of N perceptrons and a single consolidator that combines the outputs of the individual perceptrons. It is shown that an offline version of the problem, in which the examples are given in a batch, can be solved in time polynomial in the number of examples. The authors also provide an online learning algorithm that incrementally solves the problem by suitably training a system of N perceptrons, much in the spirit of the classical perceptron learning algorithm.
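The perceptrons-plus-consolidator architecture can be illustrated with a minimal sketch. This is not the authors' algorithm: here the consolidator is a fixed Boolean AND of two half-space outputs, and the label for each constituent half-space is assumed known for the illustration (the paper's contribution is precisely learning the combination without this assumption). Margin-thin points are filtered out so the classical perceptron updates converge quickly on this toy data.

```python
import numpy as np

def perceptron_train(X, y, epochs=200):
    """Classic perceptron: learn (w, b) for one half-space, labels y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        updated = False
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:  # misclassified: standard perceptron update
                w += yi * xi
                b += yi
                updated = True
        if not updated:  # clean pass over the data: converged
            break
    return w, b

# Toy target region: S1 = {x : x0 > 0 AND x1 > 0}, an intersection
# (Boolean AND) of two half-spaces.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 2))
X = X[(np.abs(X[:, 0]) > 0.1) & (np.abs(X[:, 1]) > 0.1)]  # keep a margin
target = (X[:, 0] > 0) & (X[:, 1] > 0)

# One perceptron per constituent half-space (per-half-space labels
# assumed known here, purely for illustration).
w0, b0 = perceptron_train(X, np.where(X[:, 0] > 0, 1, -1))
w1, b1 = perceptron_train(X, np.where(X[:, 1] > 0, 1, -1))

def classify(x):
    # Consolidator: Boolean AND of the two perceptron outputs.
    return (x @ w0 + b0 > 0) and (x @ w1 + b1 > 0)

pred = np.array([classify(x) for x in X])
accuracy = (pred == target).mean()
```

On this margin-separated toy data each perceptron converges, so the AND-consolidated classifier recovers the quadrant essentially exactly; the paper's offline and online algorithms handle the harder case where the assignment of examples to half-spaces is not given.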
[1] M. F. Barnsley, Fractals Everywhere. Boston: Academic Press, 1988.
[2] E. B. Baum, "On learning a union of half spaces," J. Complexity, vol. 6, pp. 67-101, 1990.
[3] J. L. Bentley and M. I. Shamos, "Divide-and-conquer in multidimensional space," in Proc. 8th ACM Annu. Symp. Theory of Computing, 1976, pp. 220-230.
[4] A. Blumer, A. Ehrenfeucht, D. Haussler, and M. K. Warmuth, "Learnability and the Vapnik-Chervonenkis dimension," J. Ass. Comput. Mach., vol. 36, no. 4, pp. 929-965, Oct. 1989.
[5] T. Cover, "Geometrical and statistical properties of systems of linear inequalities with applications to pattern recognition," IEEE Trans. Elect. Comput., vol. 14, pp. 326-334, 1965.
[6] B. V. Dasarathy and B. V. Sheela, "A composite classifier system design: Concepts and methodology," Proc. IEEE, vol. 67, no. 5, pp. 708-713, May 1979.
[7] H. Edelsbrunner, Algorithms in Combinatorial Geometry. New York: Springer-Verlag, 1987.
[8] K. Fukunaga, Introduction to Statistical Pattern Recognition. New York: Academic, 1972.
[9] D. Haussler, "Generalizing the PAC model: sample size bounds from metric dimension-based uniform convergence," in Proc. 30th Symp. Foundations of Comput. Sci., 1989, pp. 40-45.
[10] G. T. Herman and K. T. D. Yeung, "On piecewise-linear classification," IEEE Trans. Pattern Anal. Machine Intell., vol. 14, no. 7, pp. 782-786, 1992.
[11] N. Karmarkar, "A new polynomial-time algorithm for linear programming," Combinatorica, vol. 4, no. 4, pp. 373-395, 1984.
[12] L. G. Khachiyan, "Polynomial algorithm for linear programming," Dokl. Akad. Nauk SSSR, vol. 244, pp. 1093-1096, 1979 (in Russian).
[13] N. Littlestone, "Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm," Machine Learning, vol. 2, pp. 285-318, 1988.
[14] N. Megiddo, "On the complexity of polyhedral separability," Discrete and Computat. Geometry, vol. 3, pp. 325-337, 1987.
[15] M. L. Minsky and S. A. Papert, Perceptrons. Cambridge, MA: MIT Press, 1988.
[16] B. K. Natarajan, "On learning sets and functions," Machine Learning, vol. 4, pp. 67-97, 1989.
[17] N. J. Nilsson, Learning Machines. New York: McGraw-Hill, 1965.
[18] C. H. Papadimitriou and K. Steiglitz, Combinatorial Optimization: Algorithms and Complexity. Englewood Cliffs, NJ: Prentice-Hall, 1982.
[19] F. P. Preparata and M. I. Shamos, Computational Geometry: An Introduction. New York: Springer-Verlag, 1985.
[20] N. S. V. Rao, E. M. Oblow, C. Glover, and G. E. Liepins, "N-learners problem: Fusion of concepts," IEEE Trans. Syst., Man Cybern., vol. 24, no. 2, pp. 319-327, Feb. 1994.
[21] N. S. V. Rao and E. M. Oblow, "Majority and location-based fusers for systems of PAC learners," IEEE Trans. Syst., Man Cybern., vol. 24, no. 5, pp. 713-727, 1994.
[22] J. Sklansky and G. N. Wassel, Pattern Classifiers and Trainable Machines. New York: Springer-Verlag, 1981.
[23] L. G. Valiant, "A theory of the learnable," Comm. ACM, vol. 27, pp. 1134-1142, Nov. 1984.
[24] V. N. Vapnik, Estimation of Dependences Based on Empirical Data. New York: Springer-Verlag, 1982.