Smooth Sensitivity for Learning Differentially-Private yet Accurate Rule Lists

Preprint, Working Paper, 2024

Abstract

Differentially-private (DP) mechanisms can be embedded into the design of a machine learning algorithm to protect the resulting model against privacy leakage. However, this often comes with a significant loss of accuracy due to the noise added to enforce DP. In this paper, we aim to improve this trade-off for a popular class of machine learning algorithms that leverage the Gini impurity as an information gain criterion to greedily build interpretable models such as decision trees or rule lists. To this end, we establish the smooth sensitivity of the Gini impurity, which can be used to obtain rigorous DP guarantees while adding noise of tighter magnitude. We illustrate the applicability of this mechanism by integrating it within a greedy algorithm producing rule list models, motivated by the fact that such models remain understudied in the DP literature. Our theoretical analysis and experimental results confirm that, for identical privacy budgets, DP rule list models integrating smooth sensitivity achieve higher accuracy than those relying on other DP frameworks based on global sensitivity.
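As background, the two standard quantities the abstract refers to can be stated as follows. These are the usual textbook definitions (Gini impurity, and the smooth sensitivity framework of Nissim et al.), written here with illustrative notation rather than the paper's own:

\[
G(D) = 1 - \sum_{k=1}^{K} p_k^2,
\qquad
S^{*}_{f,\beta}(D) = \max_{D'} \; LS_f(D') \, e^{-\beta\, d(D, D')},
\]

where $p_k$ is the fraction of records in $D$ with class label $k$, $LS_f(D')$ is the local sensitivity of the function $f$ at dataset $D'$, $d(D, D')$ is the Hamming distance between datasets, and $\beta > 0$ is a smoothing parameter. Noise calibrated to the smooth sensitivity $S^{*}_{f,\beta}(D)$ can be much smaller than noise calibrated to the global sensitivity $GS_f = \max_{D''} LS_f(D'')$, which is the source of the accuracy gain described above.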
Main file: 0_article.pdf (1.17 MB)
1_reviews_from_prior_submission.pdf (208.84 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04505410, version 1 (14-03-2024)
hal-04505410, version 2 (04-11-2024)

Identifiers

  • HAL Id: hal-04505410, version 2

Cite

Timothée Ly, Julien Ferry, Marie-José Huguet, Sébastien Gambs, Ulrich Aivodji. Smooth Sensitivity for Learning Differentially-Private yet Accurate Rule Lists. 2024. ⟨hal-04505410v2⟩
