Issue No. 08 - August (2004 vol. 16)
ISSN: 1041-4347
pp: 939-948
<p><b>Abstract</b>—The problem of disseminating a data set for machine learning while controlling the disclosure of data source identity is described using a commuting diagram of functions. This formalization is used to present and analyze an optimization problem balancing privacy and data utility requirements. The analysis points to the application of a generalization mechanism for maintaining privacy in view of machine learning needs. We present new proofs of NP-hardness of the problem of minimizing information loss while satisfying a set of privacy requirements, both with and without the addition of a particular uniform coding requirement. As an initial analysis of the approximation properties of the problem, we show that the cell suppression problem with a constant number of attributes can be approximated within a constant. As a side effect, proofs of NP-hardness of the minimum <i>k</i>-union, maximum <i>k</i>-intersection, and parallel versions of these are presented. Bounded versions of these problems are also shown to be approximable within a constant.</p>
Privacy, disclosure control, combinatorial optimization, complexity, approximation properties, machine learning.
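The abstract proves NP-hardness of the minimum k-union problem: given a collection of sets, choose k of them so that their union is as small as possible. A minimal brute-force sketch (the function name and example data are illustrative, not from the paper; exhaustive search is used precisely because the problem is NP-hard, so no efficient exact algorithm is expected in general):

```python
from itertools import combinations

def min_k_union(sets, k):
    """Return (chosen_sets, union) minimizing the union size over all
    k-element subcollections of `sets`, by exhaustive search."""
    best = None
    for combo in combinations(sets, k):
        union = set().union(*combo)
        if best is None or len(union) < len(best[1]):
            best = (combo, union)
    return best

# Picking 2 of these 3 sets: {1, 2} and {2, 3} overlap, giving the
# smallest possible union {1, 2, 3}.
sets = [{1, 2}, {2, 3}, {4, 5, 6}]
combo, union = min_k_union(sets, 2)
```

Maximum k-intersection is the mirror image: choose k sets maximizing the size of their common intersection; the same exhaustive loop applies with the objective reversed.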

S. A. Vinterbo, "Privacy: A Machine Learning View," in IEEE Transactions on Knowledge and Data Engineering, vol. 16, no. 8, pp. 939-948, 2004.