All Dates/Times are Australian Eastern Standard Time (AEST)

Technical Program

Paper Detail

Paper ID D2-S3-T3.2
Paper Title Information Complexity and Generalization Bounds
Authors Pradeep Kr. Banerjee, MPI MiS, Germany; Guido Montufar, UCLA and MPI MiS, United States
Session D2-S3-T3: IT Bounds on Generalization Error
Chaired Session: Tuesday, 13 July, 22:40 - 23:00
Engagement Session: Tuesday, 13 July, 23:00 - 23:20
Abstract We present a unifying picture of PAC-Bayesian and mutual information-based upper bounds on the generalization error of randomized learning algorithms. As we show, Tong Zhang's information exponential inequality (IEI) gives a general recipe for constructing bounds of both flavors. We show that several important results in the literature can be obtained as simple corollaries of the IEI under different assumptions on the loss function. Moreover, we obtain new bounds for data-dependent priors and unbounded loss functions. Optimizing the bounds gives rise to variants of the Gibbs algorithm, for which we discuss two examples for learning with neural networks, namely, Entropy-SGD and PAC-Bayes-SGD. Further, we use an Occam's factor argument to show a PAC-Bayesian bound that incorporates second-order curvature information of the training loss.
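To make the flavor of such bounds concrete, here is a minimal, hedged sketch (not code from the paper): for a loss bounded in [0, 1], a finite hypothesis class, and a known prior, a Catoni/IEI-style linear PAC-Bayes bound and the Gibbs posterior that optimizes it can be computed directly. The function names, the toy losses, and the choice of the temperature parameter `lam` are illustrative assumptions, not the authors' implementation.

```python
import math

def gibbs_posterior(emp_losses, prior, lam):
    # Gibbs posterior rho(h) proportional to prior(h) * exp(-lam * L_hat(h));
    # this minimizes  E_rho[L_hat] + KL(rho || prior) / lam  over rho.
    weights = [p * math.exp(-lam * l) for p, l in zip(prior, emp_losses)]
    z = sum(weights)
    return [w / z for w in weights]

def kl(rho, prior):
    # KL divergence KL(rho || prior) in nats (finite support).
    return sum(r * math.log(r / p) for r, p in zip(rho, prior) if r > 0)

def pac_bayes_bound(emp_losses, rho, prior, n, lam, delta):
    # Linear PAC-Bayes bound for losses in [0, 1] (Hoeffding-type moment term):
    #   E_rho[L] <= E_rho[L_hat] + (KL(rho || prior) + log(1/delta)) / lam
    #               + lam / (8 * n)
    emp = sum(r * l for r, l in zip(rho, emp_losses))
    return emp + (kl(rho, prior) + math.log(1.0 / delta)) / lam + lam / (8.0 * n)

# Toy example: 4 hypotheses, uniform prior, n = 100 samples (all made up).
emp_losses = [0.10, 0.25, 0.40, 0.55]
prior = [0.25] * 4
n, lam, delta = 100, 20.0, 0.05
rho = gibbs_posterior(emp_losses, prior, lam)
bound = pac_bayes_bound(emp_losses, rho, prior, n, lam, delta)
print(rho, bound)
```

Note how the posterior concentrates on low-empirical-loss hypotheses as `lam` grows, trading off against the KL term in the bound, which is the tension the abstract's Gibbs-algorithm variants optimize.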