
Jorge Silva and Rebecca Willett, "Hypergraph-Based Anomaly Detection of High-Dimensional Co-Occurrences," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 3, pp. 563-569, March 2009, doi:10.1109/TPAMI.2008.232.
[1] A. Ozgur, B. Cetin, and H. Bingol, "Co-Occurrence Network of Reuters News," http://arxiv.org/abs/0712.2491, Dec. 2007.
[2] A. Globerson, G. Chechik, F. Pereira, and N. Tishby, "Euclidean Embedding of Co-Occurrence Data," J. Machine Learning Research, vol. 8, pp. 2265-2295, 2007.
[3] N. Jhanwar, S. Chaudhuri, G. Seetharaman, and B. Zavidovique, "Content Based Image Retrieval Using Motif Co-Occurrence Matrix," Proc. Fourth Indian Conf. Computer Vision, Graphics and Image Processing, vol. 22, no. 14, pp. 1211-1220, 2004.
[4] E. Garcia, "Targeting Documents and Terms: Using Co-Occurrence Data, Answer Sets and Probability Theory," http://www.miislita.com/semanticscindex3.html, May 2008.
[5] M. Li, B. Dias, W. El-Deredy, and P.J.G. Lisboa, "A Probabilistic Model for Item-Based Recommender Systems," Proc. ACM Int'l Conf. Recommender Systems, 2007.
[6] H. Li and N. Abe, "Word Clustering and Disambiguation Based on Co-Occurrence Data," Proc. 19th Int'l Conf. Computational Linguistics, 2002.
[7] M. Rabbat, M. Figueiredo, and R. Nowak, "Network Inference from Co-Occurrences," IEEE Trans. Information Theory, vol. 54, no. 9, pp. 4053-4068, 2008.
[8] P.D. Hoff, A.E. Raftery, and M.S. Handcock, "Latent Space Approaches to Social Network Analysis," J. Am. Statistical Assoc., vol. 97, no. 460, pp. 1090-1099, 2002.
[9] M.E.J. Newman, "The Structure and Function of Complex Networks," SIAM Rev., vol. 45, pp. 167-256, 2003.
[10] T. Hofmann and J. Puzicha, "Statistical Models for Co-Occurrence Data," Technical Report AIM-1625, Massachusetts Inst. of Technology, citeseer.ist.psu.edu/articlehofmann98statistical.html, 1998.
[11] C. Berge, Hypergraphs: Combinatorics of Finite Sets. North Holland, 1989.
[12] W. Lee and S. Stolfo, “Data Mining Approaches for Intrusion Detection,” Proc. Seventh Usenix Security Symp., 1998.
[13] N. Ye and Q. Chen, "An Anomaly Detection Technique Based on a Chi-Square Statistic for Detecting Intrusions into Information Systems," Quality and Reliability Eng. Int'l, vol. 17, pp. 105-112, 2001.
[14] A. Lazarevic, L. Ertoz, V. Kumar, A. Ozgur, and J. Srivastava, “A Comparative Study of Anomaly Detection Schemes in Network Intrusion Detection,” Proc. Third SIAM Int'l Conf. Data Mining, May 2003.
[15] T. Ahmed, B. Oreshkin, and M. Coates, “Machine Learning Approaches to Network Anomaly Detection,” Proc. Second Workshop Tackling Computer Systems Problems with Machine Learning, Apr. 2007.
[16] B. Schölkopf, J.C. Platt, J. Shawe-Taylor, A.J. Smola, and R.C. Williamson, "Estimating the Support of a High-Dimensional Distribution," Neural Computation, vol. 13, pp. 1443-1471, 2001.
[17] E. Eskin, A. Arnold, M. Prerau, L. Portnoy, and S. Stolfo, "A Geometric Framework for Unsupervised Anomaly Detection: Detecting Intrusions in Unlabeled Data," Applications of Data Mining in Computer Security, D. Barbara and S. Jajodia, eds., chapter 4, Kluwer Academic, 2002.
[18] J. Aitchison and C.G.G. Aitken, "Multivariate Binary Discrimination by the Kernel Method," Biometrika, vol. 63, pp. 413-420, 1976.
[19] D.W. Scott, Multivariate Density Estimation: Theory, Practice, and Visualization. John Wiley & Sons, 1992.
[20] C. Scott and E. Kolaczyk, “Nonparametric Assessment of Contamination in Multivariate Data Using Minimum Volume Sets and FDR,” technical report, Univ. of Michigan, 2007.
[21] A.O. Hero, “Geometric Entropy Minimization (GEM) for Anomaly Detection and Localization,” Advances in Neural Information Processing Systems, 2007.
[22] J. Storey, "The Positive False Discovery Rate: A Bayesian Interpretation of the q-Value," Annals of Statistics, vol. 31, no. 6, pp. 2013-2035, 2003.
[23] R. El-Yaniv and M. Nisenson, "Optimal Single-Class Classification Strategies," Advances in Neural Information Processing Systems, 2007.
[24] A. McCallum and K. Nigam, "A Comparison of Event Models for Naïve Bayes Text Classification," Proc. AAAI Workshop Learning for Text Categorization, Technical Report WS-98-05, 1998.
[25] K. Humphreys and D.M. Titterington, "Improving the Mean-Field Approximation in Belief Networks Using Bahadur's Reparameterisation of the Multivariate Binary Representation," Neural Processing Letters, vol. 12, pp. 183-197, 2000.
[26] M.J. Wainwright and M.I. Jordan, “Graphical Models, Exponential Families, and Variational Inference,” technical report, Dept. of Statistics, Univ. of California, Berkeley, 2003.
[27] G. Beylkin, J. Garcke, and M.J. Mohlenkamp, “Multivariate Regression and Machine Learning with Sums of Separable Functions,” submitted, 2007.
[28] G. McLachlan and D. Peel, Finite Mixture Models. John Wiley & Sons, 2000.
[29] G. McLachlan and T. Krishnan, The EM Algorithm and Extensions. Wiley-Interscience, 1996.
[30] J.H. Wolfe, "Pattern Clustering by Multivariate Mixture Analysis," Multivariate Behavioral Research, vol. 5, pp. 329-350, 1970.
[31] W.K. Hastings, "Monte Carlo Sampling Methods Using Markov Chains and Their Applications," Biometrika, vol. 57, no. 1, pp. 97-109, 1970.
[32] N. Atienza, J. García-Heras, J.M. Muñoz-Pichardo, and R. Villa, "On the Consistency of MLE in Finite Mixture Models of Exponential Families," J. Statistical Planning and Inference, vol. 137, pp. 496-505, 2007.
[33] D.M. Titterington, A.F.M. Smith, and U.E. Makov, Statistical Analysis of Finite Mixture Distributions. John Wiley & Sons, 1985.
[34] R.A. Redner and H.F. Walker, "Mixture Densities, Maximum Likelihood and the EM Algorithm," SIAM Rev., vol. 26, pp. 195-239, 1984.
[35] J. Silva and R. Willett, "Hypergraph-Based Anomaly Detection in Very Large Networks," Technical Report ECE-2008-01, Duke Univ., 2008.
[36] B. Klimt and Y. Yang, "The Enron Corpus: A New Dataset for E-Mail Classification Research," Proc. 15th European Conf. Machine Learning, 2004.
[37] R. Abelson, "Enron's Many Strands: Ex-Chief's Holdings; Putting 'Lost Everything' in Perspective," New York Times, Jan. 2002.
[38] C.C. Chang and C.J. Lin, "LIBSVM: A Library for Support Vector Machines," http://www.csie.ntu.edu.tw/~cjlin/libsvm, 2001.
[39] A. Ng and M.I. Jordan, “On Discriminative versus Generative Classifiers: A Comparison of Logistic Regression and Naive Bayes,” Advances in Neural Information Processing Systems, 2002.