All Dates/Times are Australian Eastern Standard Time (AEST)

Technical Program

Paper Detail

Paper ID D7-S4-T3.1
Paper Title The Impact of Split Classifiers on Group Fairness
Authors Hao Wang, Hsiang Hsu, Harvard University, United States; Mario Diaz, Universidad Nacional Autónoma de México, Mexico; Flavio P. Calmon, Harvard University, United States
Session D7-S4-T3: Learning & Side Information
Chaired Session: Tuesday, 20 July, 23:00 - 23:20
Engagement Session: Tuesday, 20 July, 23:20 - 23:40
Abstract Disparate treatment occurs when a machine learning model produces different decisions for groups of individuals based on a sensitive attribute (e.g., age, sex). In domains where prediction accuracy is paramount, it could be acceptable to fit a model which exhibits disparate treatment. To evaluate the effect of disparate treatment, we compare the performance of split classifiers (i.e., classifiers trained and deployed separately on each group) with group-blind classifiers (i.e., classifiers which do not use a sensitive attribute). We introduce the benefit-of-splitting to quantify the performance improvement from splitting classifiers when the underlying data distribution is known. Computing the benefit-of-splitting directly from its definition involves solving optimization problems over an infinite-dimensional functional space. Under different performance measures, we (i) prove an equivalent expression for the benefit-of-splitting which can be efficiently computed by solving small-scale convex programs; (ii) provide sharp upper and lower bounds for the benefit-of-splitting which reveal precise conditions under which a group-blind classifier will always suffer a non-trivial performance gap relative to the split classifiers.
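The contrast between split and group-blind classifiers can be illustrated with a toy experiment. The sketch below is not the paper's convex-program formulation; it is an assumed setup with synthetic data and simple threshold classifiers, where two groups have different optimal decision thresholds, so a single group-blind threshold necessarily loses accuracy on at least one group. The empirical accuracy gap plays the role of the benefit-of-splitting.

```python
# Illustrative sketch (not the paper's algorithm): estimate the
# benefit-of-splitting empirically for 1-D threshold classifiers.
# The two groups use different labeling rules, so no single
# group-blind threshold can fit both groups perfectly.

def accuracy(data, threshold):
    """Fraction of (x, y) pairs correctly classified by 1{x > threshold}."""
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

def best_accuracy(data):
    """Brute-force the best threshold over the observed feature values."""
    candidates = [x for x, _ in data] + [min(x for x, _ in data) - 1.0]
    return max(accuracy(data, t) for t in candidates)

# Synthetic groups of equal size:
# group A's true rule is 1{x > 0.3}, group B's is 1{x > 0.7}.
xs = [i / 20 for i in range(21)]
group_a = [(x, int(x > 0.3)) for x in xs]
group_b = [(x, int(x > 0.7)) for x in xs]

# Split classifiers: each group gets its own optimal threshold.
split_acc = (best_accuracy(group_a) + best_accuracy(group_b)) / 2

# Group-blind classifier: one threshold shared by both (equal-size) groups,
# so pooled accuracy equals the average of the per-group accuracies.
blind_acc = best_accuracy(group_a + group_b)

benefit_of_splitting = split_acc - blind_acc
print(f"split: {split_acc:.3f}  blind: {blind_acc:.3f}  "
      f"benefit: {benefit_of_splitting:.3f}")
```

Here each split classifier recovers its group's rule exactly, while the best group-blind threshold must trade accuracy on one group against the other, leaving a strictly positive gap; this mirrors the abstract's claim that under certain conditions a group-blind classifier always suffers a non-trivial performance gap.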