Issue No. 12 - December (2008 vol. 30)
Qingping Tao, GC Image, LLC, Lincoln
Stephen D. Scott, University of Nebraska, Lincoln
N. V. Vinodchandran, University of Nebraska, Lincoln
Thomas Takeo Osugi, Sphere Communications, Lincolnshire
Brandon Mueller, Gallup, Inc., Omaha
The multiple-instance learning (MIL) model has been successful in numerous application areas. Recently, a generalization of this model and an algorithm for it were introduced, showing significant advantages over the conventional MIL model in certain application areas. Unfortunately, that algorithm does not scale to high dimensions. We adapt it to one using a support vector machine with our new kernel k∧, reducing the time complexity from exponential in the dimension to polynomial. Computing our new kernel is equivalent to counting the number of boxes in a discrete, bounded space that contain at least one point from each of two multisets. We show that this problem is #P-complete, but then give a fully polynomial randomized approximation scheme (FPRAS) for it. We then extend k∧ by enriching its representation into a new kernel k_min, and also consider a normalized version of k∧ that we call k∧/∨ (which may or may not be a kernel, but whose approximation yielded positive semidefinite Gram matrices in practice). We empirically evaluate all three measures on data from content-based image retrieval, biological sequence analysis, and the musk data sets, and find that our kernels perform well on all data sets relative to algorithms in the conventional MIL model.
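The counting problem underlying the kernel can be illustrated with a brute-force sketch (not the paper's FPRAS): enumerate every axis-aligned box in the discrete space {0, …, n−1}^d and count those containing at least one point from each multiset. The function name and toy parameters below are illustrative, not from the paper; the (n(n+1)/2)^d box enumeration makes concrete why the exact computation is exponential in the dimension d.

```python
from itertools import product

def count_common_boxes(A, B, n, d):
    """Brute-force count of axis-aligned boxes in the discrete space
    {0, ..., n-1}^d that contain at least one point from multiset A
    and at least one point from multiset B (points are d-tuples).
    There are (n(n+1)/2)^d candidate boxes, so this is exponential
    in d -- the bottleneck that motivates an approximation scheme."""
    # all (lo, hi) interval choices along a single dimension
    intervals = [(lo, hi) for lo in range(n) for hi in range(lo, n)]
    count = 0
    for box in product(intervals, repeat=d):
        def inside(p):
            return all(lo <= p[i] <= hi for i, (lo, hi) in enumerate(box))
        if any(inside(a) for a in A) and any(inside(b) for b in B):
            count += 1
    return count

# Toy example in d = 1, n = 3: A = {0}, B = {2}.
# The 6 intervals [lo, hi] with 0 <= lo <= hi <= 2 include only
# one, [0, 2], that covers both points.
print(count_common_boxes([(0,)], [(2,)], n=3, d=1))  # → 1
```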
Machine learning, kernels, support vector machines, generalized multiple-instance learning
T. T. Osugi, Q. Tao, B. Mueller, S. D. Scott and N. V. Vinodchandran, "Kernels for Generalized Multiple-Instance Learning," in IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 30, no. 12, pp. 2084-2098, Dec. 2008.