Differentially Private Federated Learning: An Information-Theoretic Perspective
Shahab Asoodeh, Harvard University, United States; Wei-Ning Chen, Stanford University, United States; Flavio P. Calmon, Harvard University, United States; Ayfer Ozgur, Stanford University, United States
D1-S5-T4: Differential Privacy I
Monday, 12 July, 23:20 - 23:40
In this work, we propose a new technique for deriving the differential privacy parameters in federated learning (FL) when only the final model update is publicly released. In this approach, we interpret each iteration as a Markov kernel and quantify its impact on the privacy parameters via the contraction coefficient of a certain f-divergence that underlies differential privacy. To do so, we generalize the well-known Dobrushin's ergodicity coefficient, originally defined in terms of total variation distance, to a family of f-divergences. We then analyze the convergence rate of stochastic gradient descent under the proposed private FL framework.
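To make the central quantity concrete, the sketch below (illustrative only, not the paper's generalized f-divergence coefficient) computes the classical Dobrushin ergodicity coefficient of a finite Markov kernel K: the maximum total-variation distance between any two rows of K. It then checks the contraction property that motivates the paper's approach, namely that one application of K shrinks the TV distance between two input distributions by at least this factor. The kernel and distributions are arbitrary example values.

```python
def tv(p, q):
    """Total variation distance between two finite distributions."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def dobrushin(K):
    """Dobrushin's ergodicity coefficient: max TV distance between rows of K."""
    n = len(K)
    return max(tv(K[i], K[j]) for i in range(n) for j in range(n))

def push_forward(mu, K):
    """Output distribution of the kernel: (mu K)_j = sum_i mu_i K[i][j]."""
    m = len(K[0])
    return [sum(mu[i] * K[i][j] for i in range(len(mu))) for j in range(m)]

# Illustrative row-stochastic kernel and two input distributions.
K = [[0.7, 0.2, 0.1],
     [0.3, 0.4, 0.3],
     [0.2, 0.3, 0.5]]
mu = [1.0, 0.0, 0.0]
nu = [0.0, 0.0, 1.0]

delta = dobrushin(K)
contracted = tv(push_forward(mu, K), push_forward(nu, K))
# Contraction of TV under the kernel: TV(mu K, nu K) <= delta * TV(mu, nu).
assert contracted <= delta * tv(mu, nu) + 1e-12
```

The paper's contribution can be read as replacing the TV distance in this coefficient with other f-divergences relevant to differential privacy, so that composing iterations (kernels) multiplies their contraction factors and yields tighter privacy parameters for the final released model.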