Issue No. 05 - May (2007, vol. 19)
pp. 711-725
ABSTRACT
Classification is a fundamental problem in data analysis. Training a classifier requires access to a large collection of data. Releasing person-specific data, such as customer data or patient records, may pose a threat to an individual's privacy. Even after removing explicit identifying information such as Name and SSN, it is still possible to link released records back to their identities by matching some combination of nonidentifying attributes such as {Sex, Zip, Birthdate}. A useful approach to combat such linking attacks, called k-anonymization [1], is anonymizing the linking attributes so that at least k released records match each value combination of the linking attributes. Previous work attempted to find an optimal k-anonymization that minimizes some data distortion metric. We argue that minimizing the distortion to the training data is not relevant to the classification goal, which requires extracting the structure of prediction on "future" data. In this paper, we propose a k-anonymization solution for classification. Our goal is to find a k-anonymization, not necessarily optimal in the sense of minimizing data distortion, that preserves the classification structure. We conducted extensive experiments to evaluate the impact of anonymization on classification over future data. Experiments on real-life data show that the quality of classification can be preserved even for highly restrictive anonymity requirements.
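The k-anonymity property described above can be stated operationally: group the released records by their combination of linking (quasi-identifier) attribute values, and require every group to contain at least k records. A minimal sketch of such a check, assuming records are dictionaries and using the attribute names from the abstract's example (the generalized values like "946**" are illustrative, not from the paper):

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Return True if every combination of quasi-identifier values
    appears in at least k of the released records."""
    counts = Counter(
        tuple(rec[attr] for attr in quasi_identifiers) for rec in records
    )
    return all(count >= k for count in counts.values())

# Hypothetical released table with generalized linking attributes.
released = [
    {"Sex": "F", "Zip": "946**", "Birthdate": "197*", "Diagnosis": "flu"},
    {"Sex": "F", "Zip": "946**", "Birthdate": "197*", "Diagnosis": "cold"},
    {"Sex": "M", "Zip": "940**", "Birthdate": "198*", "Diagnosis": "flu"},
    {"Sex": "M", "Zip": "940**", "Birthdate": "198*", "Diagnosis": "asthma"},
]
print(is_k_anonymous(released, ["Sex", "Zip", "Birthdate"], 2))  # True
print(is_k_anonymous(released, ["Sex", "Zip", "Birthdate"], 3))  # False
```

This verifies the anonymity requirement only; the paper's contribution concerns *which* generalization to choose so that the classification structure, rather than a raw distortion metric, is preserved.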
INDEX TERMS
Privacy protection, anonymity, security, integrity, data mining, classification, data sharing.
CITATION
Benjamin C. M. Fung, Ke Wang, and Philip S. Yu, "Anonymizing Classification Data for Privacy Preservation," IEEE Transactions on Knowledge and Data Engineering, vol. 19, no. 5, pp. 711-725, May 2007, doi:10.1109/TKDE.2007.1015
