Issue No. 03 - May-June (2016 vol. 14)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MSP.2016.51
Patrick McDaniel, Pennsylvania State University
Nicolas Papernot, Pennsylvania State University
Z. Berkay Celik, Pennsylvania State University
Recent advances in machine learning have led to innovative applications and services that use computational structures to reason about complex phenomena. Over the past several years, the security and machine-learning communities have developed novel techniques for constructing adversarial samples: malicious inputs crafted to mislead (and therefore corrupt the integrity of) systems built on computationally learned models. The authors consider the underlying causes of adversarial samples and the future countermeasures that might mitigate them.
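As an illustration of the kind of attack the abstract describes (not a technique from the article itself), the sketch below crafts an adversarial sample against a toy logistic-regression classifier by perturbing the input in the direction of the loss gradient's sign, a widely known approach often called the fast gradient sign method. All weights and inputs here are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical "trained" model: p(y=1|x) = sigmoid(w.x + b)
w = np.array([2.0, -3.0, 1.5])
b = 0.5

def predict(x):
    return sigmoid(w @ x + b)

def adversarial(x, y_true, eps=0.25):
    """Perturb x by eps in the direction that increases the loss.

    For logistic regression with cross-entropy loss, the gradient of
    the loss with respect to the input is (p - y_true) * w, so only
    its sign is needed here.
    """
    p = predict(x)
    grad = (p - y_true) * w
    return x + eps * np.sign(grad)

x = np.array([1.0, 1.0, 1.0])      # benign input, true label 1
x_adv = adversarial(x, y_true=1.0)
print(predict(x))                  # confident, correct prediction
print(predict(x_adv))              # confidence drops after a small perturbation
```

With these particular weights, the small per-feature perturbation (eps = 0.25) is enough to push the model's confidence in the true class from about 0.73 to below 0.5, misleading the classifier without dramatically changing the input.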
Index Terms: Training, Electronic mail, Data models, Training data, Computer security, Autonomous automobiles, Classification algorithms
P. McDaniel, N. Papernot and Z. B. Celik, "Machine Learning in Adversarial Settings," in IEEE Security & Privacy, vol. 14, no. 3, pp. 68-72, 2016.