Information Complexity and Generalization Bounds
Pradeep Kr. Banerjee, MPI MiS, Germany; Guido Montufar, UCLA and MPI MiS, United States
D2-S3-T3: IT Bounds on Generalization Error
Tuesday, 13 July, 22:40 - 23:00
We present a unifying picture of PAC-Bayesian and mutual information-based upper bounds on the generalization error of randomized learning algorithms. As we show, Tong Zhang's information exponential inequality (IEI) gives a general recipe for constructing bounds of both flavors. We show that several important results in the literature can be obtained as simple corollaries of the IEI under different assumptions on the loss function. Moreover, we obtain new bounds for data-dependent priors and unbounded loss functions. Optimizing the bounds gives rise to variants of the Gibbs algorithm, for which we discuss two examples for learning with neural networks, namely, Entropy-SGD and PAC-Bayes-SGD. Further, we use an Occam's factor argument to derive a PAC-Bayesian bound that incorporates second-order curvature information of the training loss.
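To fix notation, one common form of the bounds the abstract refers to can be sketched as follows; the symbols here (gap $\Delta$, prior $\pi$, posterior $\rho$, inverse temperature $\lambda$) are illustrative and not taken from the talk itself. Writing $\Delta(h,S) = L(h) - \hat{L}(h,S)$ for the generalization gap of hypothesis $h$ on sample $S$, the IEI states that for any prior $\pi$ and any (possibly data-dependent) posterior $\rho$,

\[
\mathbb{E}_{S}\,\exp\!\Big(\lambda\,\mathbb{E}_{\rho}[\Delta(h,S)] \;-\; \mathrm{KL}(\rho\,\|\,\pi) \;-\; \log \mathbb{E}_{\pi}\,\mathbb{E}_{S}\,e^{\lambda \Delta(h,S)}\Big) \;\le\; 1,
\]

from which Markov's inequality yields, with probability at least $1-\delta$ over $S$,

\[
\mathbb{E}_{\rho}[\Delta(h,S)] \;\le\; \frac{1}{\lambda}\Big(\mathrm{KL}(\rho\,\|\,\pi) + \log \mathbb{E}_{\pi}\,\mathbb{E}_{S}\,e^{\lambda \Delta(h,S)} + \log \tfrac{1}{\delta}\Big).
\]

Different assumptions on the loss (bounded, sub-Gaussian, etc.) then control the log-moment term and recover the PAC-Bayesian and mutual information-based corollaries mentioned above.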
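As a concrete illustration of the Gibbs-algorithm variants mentioned above, the following is a minimal toy sketch of an Entropy-SGD-style update on a synthetic quadratic loss. All names and hyperparameters (`w_star`, `eta`, `gamma`, `eta_in`, `eps`) are hypothetical choices for this sketch, not values from the talk: an inner Langevin (SGLD) loop estimates the mean `mu` of the local Gibbs measure around the current weights, and the outer step moves the weights along the resulting local-entropy gradient `gamma * (w - mu)`.

```python
import numpy as np

rng = np.random.default_rng(0)

w_star = np.array([1.0, -2.0])   # hypothetical optimum of the toy loss

def grad_loss(w):
    # Gradient of a toy quadratic training loss L(w) = 0.5 * ||w - w_star||^2.
    return w - w_star

w = np.array([5.0, 5.0])         # initial weights
eta, gamma = 0.1, 1.0            # outer step size and scoping parameter
eta_in, eps = 0.05, 1e-3         # inner SGLD step size and thermal noise scale

for _ in range(200):             # outer Entropy-SGD-style iterations
    x, mu = w.copy(), w.copy()
    for k in range(1, 11):       # inner SGLD loop around the current weights w
        noise = np.sqrt(eta_in) * eps * rng.standard_normal(w.shape)
        x = x - eta_in * (grad_loss(x) + gamma * (x - w)) + noise
        mu = (1 - 1 / k) * mu + (1 / k) * x   # running mean of inner iterates
    w = w - eta * gamma * (w - mu)            # local-entropy gradient step
```

On this toy objective the iterates contract toward `w_star`; the point of the sketch is only the two-loop structure (inner Gibbs sampling, outer smoothed-gradient step), not a faithful reimplementation of the algorithms discussed in the talk.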