Paper ID | D1-S1-T3.1
Paper Title | Scalable Vector Gaussian Information Bottleneck
Authors | Mohammad Mahdi Mahvari, Mari Kobayashi, Technical University of Munich, Germany; Abdellatif Zaidi, Universite Paris-Est, France
Session | D1-S1-T3: Information Bottleneck I
Chaired Session | Monday, 12 July, 22:00 - 22:20
Engagement Session | Monday, 12 July, 22:20 - 22:40
Abstract |
In the context of statistical learning, the Information Bottleneck (IB) method seeks a suitable balance between accuracy and generalization capability through a tradeoff between compression complexity, measured by minimum description length, and distortion, evaluated under the logarithmic loss measure. In this paper, we study a variation of the problem, called the scalable information bottleneck, in which the encoder outputs multiple descriptions of the observation with increasingly richer features. The model, which is of successive-refinement type with degraded side-information streams at the decoders, is motivated by application scenarios that require varying levels of accuracy depending on the allowed level of complexity. We establish an analytic characterization of the optimal relevance-complexity region for vector Gaussian sources. We then derive a variational-inference-type algorithm for general sources with unknown distribution, and show how to parametrize it using neural networks. Finally, we provide experimental results on the MNIST dataset which illustrate that the proposed method generalizes better to unseen data than the standard IB with a single description.
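For context, the relevance-complexity tradeoff the abstract refers to is, in the standard single-description IB formulation, the Lagrangian below; the representation variable U and the multiplier \beta follow common IB notation and are not necessarily the paper's symbols:

\[
\min_{p(u \mid x)} \; I(X;U) \;-\; \beta \, I(U;Y), \qquad \text{with } Y - X - U \text{ a Markov chain},
\]

where I(X;U) is the complexity (description-length) term and I(U;Y) is the relevance term (accuracy under logarithmic loss); the scalable variant studied in the paper involves one such tradeoff per description.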
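As a concrete illustration of the neural parametrization mentioned in the abstract, the following is a minimal sketch of a two-level scalable variational IB in PyTorch, assuming a successive-refinement structure in which the first decoder sees only the coarse description U1 and the second sees the pair (U1, U2). All architectural choices (layer widths, latent dimensions d1 and d2, and the weights beta1 and beta2) are illustrative assumptions, not the authors' exact parameterization.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ScalableVIB(nn.Module):
    def __init__(self, in_dim=784, d1=16, d2=16, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU())
        # Each head outputs the mean and log-variance of a Gaussian q(u|x).
        self.head1 = nn.Linear(512, 2 * d1)          # coarse description U1
        self.head2 = nn.Linear(512, 2 * d2)          # refinement U2
        self.dec1 = nn.Linear(d1, num_classes)       # decodes from U1 only
        self.dec2 = nn.Linear(d1 + d2, num_classes)  # decodes from (U1, U2)

    @staticmethod
    def _sample(stats):
        mu, logvar = stats.chunk(2, dim=-1)
        # Reparameterization trick: u ~ N(mu, sigma^2).
        u = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # KL(q(u|x) || N(0, I)) upper-bounds the complexity term I(U; X).
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(-1).mean()
        return u, kl

    def forward(self, x):
        h = self.backbone(x)
        u1, kl1 = self._sample(self.head1(h))
        u2, kl2 = self._sample(self.head2(h))
        return self.dec1(u1), self.dec2(torch.cat([u1, u2], dim=-1)), kl1, kl2

def scalable_vib_loss(logits1, logits2, labels, kl1, kl2,
                      beta1=1e-3, beta2=1e-3):
    # Each description trades its own relevance (cross-entropy, a bound on
    # the log-loss distortion) against its complexity (KL) term.
    return (F.cross_entropy(logits1, labels) + beta1 * kl1
            + F.cross_entropy(logits2, labels) + beta2 * kl2)

# Usage with a dummy batch of flattened MNIST-sized inputs:
model = ScalableVIB()
x = torch.randn(8, 784)
y = torch.randint(0, 10, (8,))
l1, l2, k1, k2 = model(x)
scalable_vib_loss(l1, l2, y, k1, k2).backward()

One forward pass yields both a coarse and a refined prediction, so a complexity-constrained decoder can stop after dec1 while a more capable one also exploits the refinement, mirroring the successive-refinement motivation in the abstract.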