An Information Theoretic Framework for Distributed Learning Algorithms
Xiangxiang Xu, Shao-Lun Huang, Tsinghua-Berkeley Shenzhen Institute, China
D1-S5-T3: Distributed Learning
Monday, 12 July, 23:20 - 23:40
Distributed learning has recently become an important research topic, yet the information-theoretic optimality of distributed learning algorithms is often not sufficiently addressed. This paper studies distributed learning problems in which each node observes i.i.d. samples and sends a feature function of the observed samples to a central machine for decision making. Both the binary hypothesis testing problem from information theory and the classification problem from machine learning are considered, and the optimal error exponent and the set of optimal features are characterized. By exploiting an information-theoretic framework, we show that these two problems share the same set of optimal features, from which the information-theoretic optimality of some machine learning algorithms can be established. Finally, we generalize our analyses to $M$-ary distributed hypothesis testing and classification problems.
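As background for the optimal error exponent discussed in the abstract, the sketch below computes the classical Chernoff information, which gives the best error exponent for centralized binary hypothesis testing with i.i.d. samples. This is an illustrative assumption-laden example, not the paper's distributed characterization: the function name and the grid-search implementation are our own, and the paper's result concerns the exponent achievable under the additional constraint that nodes transmit feature functions.

```python
import math

def chernoff_information(p, q, grid=1000):
    """Chernoff information between two discrete distributions p and q
    (probability lists over the same finite alphabet), computed by a
    grid search over the tilting parameter lambda in (0, 1):
        C(p, q) = -min_{0<lam<1} log sum_x p(x)^lam * q(x)^(1-lam)."""
    best = 0.0
    for i in range(1, grid):
        lam = i / grid
        s = sum((pi ** lam) * (qi ** (1 - lam)) for pi, qi in zip(p, q))
        best = max(best, -math.log(s))
    return best

# Identical distributions cannot be distinguished: exponent is 0.
print(chernoff_information([0.5, 0.5], [0.5, 0.5]))  # -> 0.0
# Well-separated Bernoulli distributions yield a strictly positive exponent.
print(chernoff_information([0.9, 0.1], [0.1, 0.9]))
```

The grid search is a simple stand-in for exact minimization; the inner objective is convex in the tilting parameter, so a finer grid or a one-dimensional convex solver tightens the result.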