Paper ID | D1-S5-T4.3
Paper Title | Differentially Private Federated Learning: An Information-Theoretic Perspective
Authors | Shahab Asoodeh, Harvard University, United States; Wei-Ning Chen, Stanford University, United States; Flavio P. Calmon, Harvard University, United States; Ayfer Ozgur, Stanford University, United States
Session | D1-S5-T4: Differential Privacy I
Chaired Session | Monday, 12 July, 23:20 - 23:40
Engagement Session | Monday, 12 July, 23:40 - 00:00
Abstract |
In this work, we propose a new technique for deriving the differential privacy parameters of federated learning (FL) when only the last model update is publicly released. In this approach, we interpret each iteration as a Markov kernel and quantify its impact on the privacy parameters via the contraction coefficient of a certain f-divergence that underlies differential privacy. To do so, we generalize Dobrushin's well-known ergodicity coefficient, originally defined in terms of total variation distance, to a family of f-divergences. We then analyze the convergence rate of stochastic gradient descent under the proposed private FL framework.
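For reference, the standard definitions behind this approach are sketched below (the paper's exact formulation may differ). Dobrushin's ergodicity coefficient of a Markov kernel $K$ is defined in terms of total variation, its f-divergence generalization is the contraction coefficient $\eta_f$, and the f-divergence that characterizes $(\varepsilon,\delta)$-differential privacy is the hockey-stick divergence $E_\gamma$ with $\gamma = e^{\varepsilon}$:

\[
  \delta(K) = \sup_{x,x'} \mathrm{TV}\big(K(\cdot\mid x),\, K(\cdot\mid x')\big),
  \qquad
  \eta_f(K) = \sup_{P \neq Q} \frac{D_f(PK \,\|\, QK)}{D_f(P \,\|\, Q)} \;\le\; \delta(K),
\]
\[
  E_\gamma(P \,\|\, Q) = \sup_{A} \big(P(A) - \gamma\, Q(A)\big),
  \qquad
  (\varepsilon,\delta)\text{-DP} \;\Longleftrightarrow\; \max\big\{E_{e^{\varepsilon}}(P \,\|\, Q),\, E_{e^{\varepsilon}}(Q \,\|\, P)\big\} \le \delta .
\]

The strong data-processing inequality $D_f(PK \,\|\, QK) \le \eta_f(K)\, D_f(P \,\|\, Q)$ is what allows each iteration to shrink the divergence, and hence the effective privacy parameters, of the released last iterate.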
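For concreteness, the following is a minimal, hypothetical sketch (not the paper's algorithm) of last-iterate noisy SGD: each round clips the gradient and adds Gaussian noise, so each update acts as a Markov kernel on the model state, and only the final iterate is released. Function and parameter names are illustrative.

import numpy as np

def private_sgd_last_iterate(grad_fn, theta0, num_rounds, lr=0.1,
                             clip_norm=1.0, noise_std=1.0, seed=0):
    """Run clipped, noisy SGD and release only the last iterate."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    for t in range(num_rounds):
        g = grad_fn(theta, t)
        # Clip the gradient to bound the sensitivity of each round.
        norm = np.linalg.norm(g)
        if norm > clip_norm:
            g = g * (clip_norm / norm)
        # Gaussian mechanism applied to the update (one Markov kernel per round).
        noise = rng.normal(0.0, noise_std, size=theta.shape)
        theta = theta - lr * (g + noise)
    return theta  # intermediate iterates are never published

# Hypothetical usage on a toy quadratic objective 0.5 * ||theta - target||^2
if __name__ == "__main__":
    target = np.array([1.0, -2.0])
    grad = lambda theta, t: theta - target
    print(private_sgd_last_iterate(grad, theta0=[0.0, 0.0], num_rounds=200,
                                   lr=0.05, clip_norm=1.0, noise_std=0.1))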