Paper ID | D3-S3-T3.3
Paper Title | Conditional Mutual Information-Based Generalization Bound for Meta Learning
Authors | Arezou Rezazadeh, Chalmers University of Technology, Sweden; Sharu Theresa Jose, King's College London, United Kingdom; Giuseppe Durisi, Chalmers University of Technology, Sweden; Osvaldo Simeone, King's College London, United Kingdom
Session | D3-S3-T3: IT Bounds on Learning
Chaired Session | Wednesday, 14 July, 22:40 - 23:00
Engagement Session | Wednesday, 14 July, 23:00 - 23:20
Abstract | Meta-learning optimizes an inductive bias—typically in the form of the hyperparameters of a base-learning algorithm—by observing data from a finite number of related tasks. This paper presents an information-theoretic bound on the generalization performance of any given meta-learner that builds on the conditional mutual information (CMI) framework of Steinke and Zakynthinou (2020). In the proposed extension to meta-learning, the CMI bound involves a training meta-supersample obtained by first sampling 2N independent tasks from the task environment, and then drawing 2M independent training samples for each sampled task. The meta-training data fed to the meta-learner is modelled as being obtained by randomly selecting N tasks from the available 2N tasks and M training samples per task from the available 2M training samples per task. The resulting bound is explicit in two CMI terms, which measure the information that the meta-learner output and the base-learner output provide about which training data are selected, given the entire meta-supersample. Finally, we present a numerical example that illustrates the merits of the proposed bound in comparison to prior information-theoretic bounds for meta-learning.
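To make the structure of the bound concrete, the two CMI terms described in the abstract can plausibly be written as below. The notation is illustrative and not taken from the listing: $\tilde{Z}$ denotes the meta-supersample, $R_N$ and $R_M$ the task- and per-task sample-selection variables, and $U$ and $W$ the meta-learner and base-learner outputs; the paper's exact conditioning may differ.

```latex
% Illustrative notation (not from the listing):
%   \tilde{Z} : meta-supersample of 2N tasks with 2M samples each
%   R_N       : which N of the 2N tasks are used for meta-training
%   R_M       : which M of the 2M samples are used within each task
%   U         : meta-learner output (hyperparameters)
%   W         : base-learner output (task-specific model)
I\bigl(U; R_N \mid \tilde{Z}\bigr)
\qquad \text{and} \qquad
I\bigl(W; R_M \mid \tilde{Z}\bigr)
```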
|
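The meta-supersample construction itself is easy to simulate. Below is a minimal sketch in Python, assuming a toy Gaussian task environment and the pairwise Bernoulli selection used in Steinke and Zakynthinou's original CMI setup; all names are hypothetical, and the paper's exact selection scheme may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 4, 8          # meta-training uses N tasks, M samples per task
d = 2                # toy data dimension

# Step 1: draw a meta-supersample of 2N tasks, 2M samples each.
# Each "task" is a hypothetical Gaussian mean; a real task
# environment would replace this toy sampler.
task_means = rng.normal(size=(2 * N, d))                  # 2N i.i.d. tasks
supersample = task_means[:, None, :] + rng.normal(size=(2 * N, 2 * M, d))

# Step 2: select which data the meta-learner actually sees.
# Pair tasks as (2i, 2i+1) and samples as (2j, 2j+1); a Bernoulli
# bit picks one element of each pair, mirroring the CMI construction.
task_select = rng.integers(0, 2, size=N)          # task-selection bits R_N
sample_select = rng.integers(0, 2, size=(N, M))   # sample-selection bits R_M

task_idx = 2 * np.arange(N) + task_select
meta_training_data = np.stack([
    supersample[t, 2 * np.arange(M) + sample_select[i]]
    for i, t in enumerate(task_idx)
])
print(meta_training_data.shape)  # (N, M, d)
```

A useful property of this construction is that the selection variables carry at most N + NM bits in total, so CMI terms of the kind above are always finite, a known advantage of the CMI framework over unconditional mutual-information bounds.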