Paper ID
D5-S7-T3.2
Paper Title |
Boosting for Straggling and Flipping Classifiers |
Authors |
Yuval Cassuto, Technion, Israel; Yongjune Kim, DGIST, Korea (South) |
Session |
D5-S7-T3: Classification I |
Chaired Session: |
Saturday, 17 July, 00:00 - 00:20 |
Engagement Session: |
Saturday, 17 July, 00:20 - 00:40 |
Abstract |
Boosting is a well-known machine-learning method for combining multiple weak classifiers into one strong classifier. When used in a distributed setting, its accuracy is hurt by classifiers whose outputs flip or straggle due to communication and/or computation unreliability. While unreliability in the form of noisy data is well treated by the boosting literature, unreliability of the classifier outputs has not been explicitly addressed. Protecting the classifier outputs with an error/erasure-correcting code requires reliable encoding of multiple classifier outputs, which is not feasible in common distributed settings. In this paper we address the problem of training boosted classifiers subject to straggling or flips at classification time. We propose two approaches: one minimizes the usual exponential loss, but in expectation over the classifier errors; the other defines and minimizes a new worst-case loss for a specified bound on the number of unreliable classifiers.
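
The abstract only names the two losses; as a rough illustration, the sketch below shows one way an expected and a worst-case exponential loss could be evaluated, assuming an i.i.d. flip probability p_flip for the first and a per-sample adversary that flips at most b outputs for the second. The function names, the flip/adversary models, and the toy data are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def expected_exp_loss(alphas, preds, y, p_flip):
    """Expected exponential loss when each weak classifier's output is
    flipped independently with probability p_flip (an assumed i.i.d.
    flip model; the paper's channel model may differ).

    alphas : (T,) real weights of the T weak classifiers
    preds  : (n, T) weak-classifier outputs in {-1, +1} on n samples
    y      : (n,) true labels in {-1, +1}
    """
    # Margin contribution of classifier t on sample i: y_i * alpha_t * h_t(x_i).
    margins = y[:, None] * preds * alphas[None, :]
    # Independence across classifiers lets the expectation factorize over t:
    # E[exp(-y_i * sum_t a_t * h~_t)] = prod_t [(1-p) e^{-m_it} + p e^{+m_it}].
    per_term = (1.0 - p_flip) * np.exp(-margins) + p_flip * np.exp(margins)
    return float(np.mean(np.prod(per_term, axis=1)))

def worst_case_exp_loss(alphas, preds, y, b):
    """Worst-case exponential loss when an adversary may flip at most b
    of the T outputs for each sample (again an assumed formulation)."""
    margins = y[:, None] * preds * alphas[None, :]
    # Flipping classifier t changes the margin by -2*margins[i, t], so the
    # adversary flips the classifiers with the largest positive margins.
    top_b = np.sort(np.maximum(margins, 0.0), axis=1)[:, ::-1][:, :b]
    worst_margin = margins.sum(axis=1) - 2.0 * top_b.sum(axis=1)
    return float(np.mean(np.exp(-worst_margin)))

# Toy usage on random {-1, +1} outputs.
rng = np.random.default_rng(0)
n, T = 100, 8
y = rng.choice([-1, 1], size=n)
preds = rng.choice([-1, 1], size=(n, T))
alphas = rng.uniform(0.1, 1.0, size=T)
print(expected_exp_loss(alphas, preds, y, p_flip=0.1))
print(worst_case_exp_loss(alphas, preds, y, b=2))
```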
|