Paper ID: D3-S5-T3.1
Paper Title: Differentially-Private Federated Learning with Long-Term Constraints Using Online Mirror Descent
Authors: Olusola Odeyomi, Gergely Zaruba (Wichita State University, United States)
Session: D3-S5-T3: Privacy & Learning
Chaired Session: Wednesday, 14 July, 23:20 - 23:40
Engagement Session: Wednesday, 14 July, 23:40 - 00:00
Abstract:
This paper studies a fully decentralized online federated learning setting with long-term constraints. The fully decentralized setting removes the communication and computational bottlenecks of a central server that must communicate with a large number of clients. Online learning is introduced into the federated learning setting to capture time-varying data distributions. Practical federated learning deployments are subject to long-term constraints, such as energy budgets, monetary costs, and time limits. The clients are not obligated to satisfy any per-round constraint, but they must satisfy these long-term constraints. To provide an additional layer of privacy, local differential privacy is introduced. An online mirror descent-based algorithm is proposed and its regret bound is derived. This regret bound is compared with that of a differentially-private online gradient descent algorithm previously proposed for federated learning.
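The core update the abstract refers to can be illustrated with a minimal sketch. The snippet below shows one online mirror descent step over the probability simplex (using the standard negative-entropy mirror map, i.e. an exponentiated-gradient update), with the gradient perturbed by Laplace noise to provide epsilon-local differential privacy. All names, the choice of mirror map, the Laplace mechanism, and the parameter values are illustrative assumptions, not the paper's exact construction, and the sketch omits the decentralized gossip step and the long-term constraint handling.

```python
import numpy as np

def ldp_omd_step(x, grad, eta, eps, sensitivity, rng):
    """One online mirror descent update with local differential privacy.

    Illustrative sketch: negative-entropy mirror map (exponentiated
    gradient) over the probability simplex, with Laplace noise calibrated
    to the given L1 sensitivity for epsilon-LDP. Not the paper's exact
    algorithm.
    """
    # Perturb the local gradient before it leaves the client (local DP).
    noisy_grad = grad + rng.laplace(scale=sensitivity / eps, size=grad.shape)
    # Multiplicative-weights update induced by the negative-entropy map.
    x_new = x * np.exp(-eta * noisy_grad)
    # Normalizing projects the iterate back onto the simplex.
    return x_new / x_new.sum()

# Toy run: placeholder random loss gradients over 100 rounds.
rng = np.random.default_rng(0)
x = np.ones(5) / 5  # uniform starting point on the simplex
for t in range(100):
    grad = rng.normal(size=5)  # stand-in for a time-varying loss gradient
    x = ldp_omd_step(x, grad, eta=0.1, eps=1.0, sensitivity=1.0, rng=rng)
```

The mirror map is what distinguishes this from the differentially-private online gradient descent baseline mentioned in the abstract: gradient descent uses a Euclidean projection, whereas the entropy map yields the multiplicative update above, which can give tighter regret bounds in simplex-constrained settings.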