Stop Thinking, Just Do!

Sungsoo Kim's Blog

The Information Bottleneck Approaches in Deep Neural Networks


1 March 2024


Article Source: The Information Bottleneck Approaches in Deep Neural Networks

Abstract

The goal of machine learning is to use data to obtain simple algorithms for predicting a random variable Y from a correlated observation X. Since the dimension of X is typically huge, computationally feasible solutions should summarize it into a lower-dimensional feature vector T, from which Y is predicted. A notable learning principle for achieving this goal is the Information Bottleneck (IB). This tutorial introduces the general idea behind the IB principle and discusses several of its variants. It then introduces the neural network parameterization of the IB principle and discusses its applications to problems involving neural network interpretability, domain generalization and adaptation, adversarial robustness, and graph neural networks. Shujian Yu (UiT) leads this tutorial (10 January 2022).
