Preserving Privacy by De-Identifying Face Images
February 2005 (vol. 17 no. 2)
pp. 232-243
Elaine M. Newton, Latanya Sweeney, and Bradley Malin, IEEE Computer Society
In the context of sharing video surveillance data, a significant threat to privacy is face recognition software, which can automatically identify known people, such as from a database of drivers' license photos, and thereby track people regardless of suspicion. This paper introduces an algorithm to protect the privacy of individuals in video surveillance data by de-identifying faces such that many facial characteristics remain but the face cannot be reliably recognized. A trivial solution to de-identifying faces involves blacking out each face. This thwarts any possible face recognition, but because all facial details are obscured, the result is of limited use. Many ad hoc attempts, such as covering eyes, fail to thwart face recognition because of the robustness of face recognition methods. This paper presents a new privacy-enabling algorithm, named k-Same, that guarantees face recognition software cannot reliably recognize de-identified faces, even though many facial details are preserved. The algorithm determines similarity between faces based on a distance metric and creates new faces by averaging image components, which may be the original image pixels (k-Same-Pixel) or eigenvectors (k-Same-Eigen). Results are presented on a standard collection of real face images for varying values of k.
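The averaging idea behind k-Same-Pixel can be sketched as follows. This is an illustrative reconstruction from the abstract, not the authors' implementation: it assumes faces are flattened grayscale pixel vectors, uses Euclidean distance as the similarity metric (the paper treats the metric as a design parameter), and the function name and clustering details are the sketch's own.

```python
import numpy as np

def k_same_pixel(faces, k):
    """Hedged sketch of k-Same-Pixel-style de-identification.

    `faces` is an (n, d) array: n face images, each flattened to d pixels.
    Each output face is the pixel-wise average of at least k similar input
    faces, so a de-identified face maps back to no fewer than k originals.
    """
    faces = np.asarray(faces, dtype=float)
    n = faces.shape[0]
    assert n >= k, "need at least k faces to de-identify"
    remaining = list(range(n))
    out = np.empty_like(faces)
    while remaining:
        if len(remaining) < 2 * k:
            # Fewer than 2k faces left: average them all so no
            # cluster ever falls below size k.
            cluster = list(remaining)
        else:
            # Take a seed face and its k-1 nearest neighbours under
            # Euclidean distance (one natural choice of metric).
            seed = remaining[0]
            dists = [(np.linalg.norm(faces[i] - faces[seed]), i)
                     for i in remaining]
            dists.sort()
            cluster = [i for _, i in dists[:k]]
        avg = faces[cluster].mean(axis=0)  # pixel-wise average
        for i in cluster:
            out[i] = avg
            remaining.remove(i)
    return out
```

Because every de-identified face is shared by at least k originals, a face recognizer can do no better than a 1-in-k guess; k-Same-Eigen would apply the same averaging in eigenface coefficient space instead of pixel space.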

References:
[1] D. Agrawal and C. Aggarwal, “On the Design and Quantification of Privacy Preserving Data Mining Algorithms,” Proc. ACM Symp. Principles of Database Systems, pp. 247-255, 2001.
[2] R. Agrawal and R. Srikant, “Privacy-Preserving Data Mining,” Proc. ACM SIGMOD, pp. 439-450, 2000.
[3] A.J. Allin, C.G. Atkeson, H. Wactlar, S. Stevens, M.J. Robertson, D. Wilson, J. Zimmerman, and A. Bharucha, “Toward the Automatic Assessment of Behavioral Disturbances of Dementia,” Proc. Fifth Int'l Conf. Ubiquitous Computing (UbiComp '03), 2003.
[4] M. Atallah, E. Bertino, A. Elmagarmid, M. Ibrahim, and V. Verykios, “Disclosure Limitation of Sensitive Rules,” Proc. IEEE Knowledge and Data Eng. Workshop (KDEX), pp. 45-72, 1999.
[5] D. Blackburn, J. Bone, and P. Phillips, “Face Recognition Vendor Test (FRVT) 2000 Evaluation Report,” NIST, Bethesda, Md., technical report, 2001.
[6] Y. Cai, “Visual Privacy,” presented at Topics in Privacy, Carnegie Mellon Univ., Pittsburgh, Pa, 2003.
[7] R. Gross, J. Cohn, and J. Shi, “Quo Vadis Face Recognition,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2001.
[8] Institute for Applied Autonomy, iSee Project: Survey and Web-Based Application Charting the Locations of Closed-Circuit Television (CCTV) Surveillance Cameras in Urban Environments, 2003.
[9] M. Jones, “All Eyes Are on Oceanfront's New Surveillance System,” The Virginian-Pilot, 10 Sept. 2002.
[10] J. Lampinen and E. Oja, “Distortion Tolerant Pattern Recognition Based on Self-Organizing Feature Extraction,” IEEE Trans. Neural Networks, vol. 6, no. 3, pp. 539-547, May 1995.
[11] H. Moon and P. Phillips, “Computational and Performance Aspects of PCA-Based Face Recognition Algorithms,” Perception, vol. 30, no. 3, pp. 303-321, Mar. 2001.
[12] E. Newton, L. Sweeney, and B. Malin, “Preserving Privacy by De-Identifying Facial Images,” Carnegie Mellon Univ., School of Computer Science, Pittsburgh, Pa, Technical Report CMU-CS-03-119, Mar. 2003.
[13] A. Oganian and J. Domingo-Ferrer, “On the Complexity of Microaggregation,” Proc. Joint Economic Commission for Europe, Work Session on Statistics Data Confidentiality, 2001.
[14] P. Phillips, H. Moon, S. Rizvi, and P. Rauss, “The FERET Evaluation Methodology for Face-Recognition Algorithms,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 10, pp. 1090-1104, Oct. 2000.
[15] P. Phillips, H. Wechsler, J. Huang, and P. Rauss, “The FERET Database and Evaluation Procedure for Face Recognition Algorithms,” Image and Vision Computing J., vol. 16, no. 5, pp. 295-306, Mar. 1998.
[16] J. Reeves, “Tampa Gets Ready for Its Closeup,” Time, 16 July 2001.
[17] S. Rizvi and J. Haritsa, “Maintaining Data Privacy in Association Rule Mining,” Proc. 28th Conf. Very Large Data Base (VLDB '02), 2002.
[18] Y. Saygin, V. Verykios, and C. Clifton, “Using Unknowns to Prevent Discovery of Association Rules,” SIGMOD Record, vol. 30, no. 4, pp. 45-54, Dec. 2001.
[19] Y. Saygin, V. Verykios, and A. Elmagarmid, “Privacy Preserving Association Rule Mining,” Proc. 12th Int'l Workshop Research Issues in Data Eng. (RIDE), 2002.
[20] L. Sirovich and M. Kirby, “Low-Dimensional Procedure for the Characterization of Human Faces,” J. Optical Soc. of Am., vol. 4, pp. 519-524, 1987.
[21] L. Sweeney, “k-Anonymity: A Model for Protecting Privacy,” Int'l J. Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 10, no. 5, pp. 557-570, 2002.
[22] L. Sweeney, “Report on the Potential of Early Detection Bio-terrorism Surveillance Using Non-Traditional Sources,” presented at the Bio-ALIRT Program, Defense Advanced Research Projects Agency, Washington, DC, 2003.
[23] M. Turk and A. Pentland, “Eigenfaces for Recognition,” J. Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.
[24] B. Wall, “Digitizing Facial Features Fails to Prevent Identification of Plaintiff,” USA Today, 10 Mar. 1996.
[25] E.W. Weisstein, CRC Concise Encyclopedia of Mathematics, second ed. Boca Raton, Fla.: CRC Press, 2002.

Index Terms:
Video surveillance, privacy, privacy-preserving data mining, k-anonymity.
Elaine M. Newton, Latanya Sweeney, Bradley Malin, "Preserving Privacy by De-Identifying Face Images," IEEE Transactions on Knowledge and Data Engineering, vol. 17, no. 2, pp. 232-243, Feb. 2005, doi:10.1109/TKDE.2005.32