All Dates/Times are Australian Eastern Standard Time (AEST)

Technical Program

Paper Detail

Paper ID D1-S7-T3.2
Paper Title Time-Correlated Sparsification for Communication-Efficient Federated Learning
Authors Emre Ozfatura, Imperial College London, United Kingdom; Kerem Ozfatura, Ozyegin University, Turkey; Deniz Gunduz, Imperial College London, United Kingdom
Session D1-S7-T3: Federated Learning
Chaired Session: Tuesday, 13 July, 00:00 - 00:20
Engagement Session: Tuesday, 13 July, 00:20 - 00:40
Abstract Federated learning (FL) enables multiple clients to collaboratively train a shared model without disclosing their local datasets. This is achieved by exchanging local model updates with the help of a parameter server (PS). However, due to the increasing size of the trained models, the communication load of the iterative exchanges between the clients and the PS often becomes a performance bottleneck. Sparse communication is often employed to reduce this load, where only a small subset of the model updates is communicated from the clients to the PS. In this paper, we introduce a novel time-correlated sparsification (TCS) scheme, which builds upon the notion that a sparse communication framework can be viewed as identifying the most significant elements of the underlying model. Hence, TCS seeks a certain correlation between the sparse representations used at consecutive iterations in FL, so that the overhead due to encoding and transmission of the sparse representation can be significantly reduced without compromising the test accuracy. Through extensive simulations on the CIFAR-10 dataset, we show that TCS can achieve centralized training accuracy with 100 times sparsification, and up to 2000 times reduction in the communication load when employed together with quantization.
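The core idea in the abstract — reusing most of the previous iteration's sparsity support so that only a few new indices must be encoded — can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the split between "kept" and "explored" indices, and the parameters `k` and `k_explore` are illustrative assumptions.

```python
import numpy as np

def topk_mask(x, k):
    """Boolean mask keeping the k largest-magnitude entries of x."""
    idx = np.argpartition(np.abs(x), -k)[-k:]
    mask = np.zeros_like(x, dtype=bool)
    mask[idx] = True
    return mask

def tcs_sparsify(update, prev_mask, k, k_explore):
    """Illustrative time-correlated sparsification of a model update.

    Most of the previous iteration's support is reused, so the client
    only needs to encode the few indices that changed (hypothetical
    split: k - k_explore reused entries plus k_explore new ones).
    """
    if prev_mask is None:
        # First iteration: plain top-k sparsification.
        mask = topk_mask(update, k)
    else:
        # Keep the strongest (k - k_explore) entries inside the old support.
        inside = np.where(prev_mask, np.abs(update), -np.inf)
        keep_idx = np.argpartition(inside, -(k - k_explore))[-(k - k_explore):]
        # Explore the k_explore strongest entries outside the old support.
        outside = np.where(prev_mask, -np.inf, np.abs(update))
        new_idx = np.argpartition(outside, -k_explore)[-k_explore:]
        mask = np.zeros_like(update, dtype=bool)
        mask[keep_idx] = True
        mask[new_idx] = True
    return update * mask, mask

# Example: two consecutive iterations share most of their support.
rng = np.random.default_rng(0)
sparse1, mask1 = tcs_sparsify(rng.standard_normal(100), None, k=10, k_explore=2)
sparse2, mask2 = tcs_sparsify(rng.standard_normal(100), mask1, k=10, k_explore=2)
```

Because the two consecutive masks overlap in at least `k - k_explore` positions, the second mask can be communicated by encoding only the small set of changed indices rather than the full support, which is the source of the encoding-overhead reduction described above.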