Preprint / Working paper. Year: 2024

Universal generalization guarantees for Wasserstein distributionally robust models

Tam Le
Jérôme Malick

Abstract

Distributionally robust optimization has emerged as an attractive way to train robust machine learning models, capturing data uncertainty and distribution shifts. Recent statistical analyses have proved that robust models based on the Wasserstein distance enjoy generalization guarantees that do not suffer from the curse of dimensionality. However, these results are either approximate, obtained in specific cases, or based on assumptions difficult to verify in practice. In contrast, we establish exact generalization guarantees that cover a wide range of cases, with arbitrary transport costs and parametric loss functions, including deep learning objectives with nonsmooth activations. We complete our analysis with an excess bound on the robust objective and an extension to Wasserstein robust models with entropic regularizations.
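
For readers skimming the abstract, here is a minimal sketch of the model class in question, written in the standard notation of the Wasserstein DRO literature (the symbols $\Theta$, $\ell$, $c$, $\rho$ below are generic placeholders, not necessarily the paper's own notation). The robust training objective replaces the empirical risk with a worst case over distributions close to the empirical distribution $\widehat{P}_n$ in Wasserstein distance:

\min_{\theta \in \Theta} \; \sup_{Q \,:\, W_c(Q,\, \widehat{P}_n) \le \rho} \; \mathbb{E}_{\xi \sim Q}\big[\ell(\theta, \xi)\big],

where $W_c$ is the optimal-transport distance induced by a transport cost $c$ and $\rho > 0$ is the radius of the ambiguity ball. The entropic extension mentioned at the end of the abstract smooths the inner supremum by penalizing the adversarial distribution $Q$ with a relative-entropy term.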
Main file: iclr2025preprintHAL.pdf (501.79 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04460543 , version 1 (15-02-2024)
hal-04460543 , version 2 (28-05-2024)
hal-04460543 , version 3 (11-10-2024)

Identifiers

HAL Id: hal-04460543

Cite

Tam Le, Jérôme Malick. Universal generalization guarantees for Wasserstein distributionally robust models. 2024. ⟨hal-04460543v3⟩
433 views
92 downloads
