All Dates/Times are Australian Eastern Standard Time (AEST)

Paper Detail

Paper ID: D1-S5-T4.2
Paper Title: Differentially Private Federated Learning with Shuffling and Client Self-Sampling
Authors: Antonious Girgis, Deepesh Data, Suhas Diggavi, UCLA, United States
Session: D1-S5-T4: Differential Privacy I
Chaired Session: Monday, 12 July, 23:20 - 23:40
Engagement Session: Monday, 12 July, 23:40 - 00:00
Abstract: This paper studies a distributed optimization problem in the federated learning (FL) framework under differential privacy constraints, in which a set of clients with local samples are connected to an untrusted server that wants to learn a global model while preserving the privacy of the clients' local datasets. We propose a new client-sampling scheme, called self-sampling, that reflects the random availability of clients in the FL learning process. We analyze the differential privacy of SGD with client self-sampling by composing amplification by sampling with amplification by shuffling. Furthermore, we analyze the convergence of the proposed SGD algorithm, showing that reasonable learning performance can be achieved while preserving the privacy of clients' data even with client self-sampling.
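The mechanism described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the quadratic local loss, the Gaussian noise level, the clipping norm, and the participation probability `p` are all illustrative assumptions. The key ideas shown are (i) self-sampling, where each client independently decides whether to participate in a round, and (ii) shuffling, where the server only sees a randomly permuted batch of noisy updates.

```python
import numpy as np

rng = np.random.default_rng(0)

def client_update(model, data, lr=0.1, clip=1.0, sigma=1.0):
    # One local SGD step on a least-squares loss (illustrative choice,
    # not the loss used in the paper).
    X, y = data
    grad = X.T @ (X @ model - y) / len(y)
    # Clip the gradient norm, then add Gaussian noise for local privacy
    # (noise scale sigma is a hypothetical parameter).
    grad = grad / max(1.0, np.linalg.norm(grad) / clip)
    noisy_grad = grad + rng.normal(0.0, sigma * clip, size=grad.shape)
    return model - lr * noisy_grad

def federated_round(model, clients, p=0.5):
    # Client self-sampling: each client independently flips a coin with
    # probability p; the server does not control who participates.
    updates = [client_update(model, d) for d in clients if rng.random() < p]
    if not updates:
        return model  # no client showed up this round
    # Amplification by shuffling: the server receives the updates in a
    # uniformly random order, with no client identities attached.
    rng.shuffle(updates)
    return np.mean(updates, axis=0)
```

A usage note: repeating `federated_round` over many rounds and composing the per-round privacy guarantees (sampling amplification plus shuffling amplification) is what the paper analyzes; the sketch above only shows a single round.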