
Sungsoo Kim's Blog

Contrastive Learning: A General Self-supervised Learning Approach


16 September 2021


Article Source


Contrastive Learning: A General Self-supervised Learning Approach

  • June 30th - MIT CSAIL

Abstract

Self-supervised learning, a long-standing problem, aims to learn effective visual representations without human annotations. Recently, contrastive learning between multiple views of the data has significantly improved the state of the art in self-supervised learning. Despite this success, the influence of different view choices has been less studied. In this talk, I will first summarise recent progress on contrastive representation learning from a unified multi-view perspective. I will then propose an InfoMin principle: we should reduce the mutual information (MI) between views while keeping task-relevant information intact. To verify this hypothesis, we also devise unsupervised and semi-supervised frameworks to learn effective views. Lastly, I will extend the application of contrastive learning beyond self-supervised learning.
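To make the multi-view contrastive objective mentioned above concrete, below is a minimal sketch of an InfoNCE-style loss in PyTorch. It is illustrative only: the function name `info_nce_loss`, the temperature value, and the toy tensors are assumptions for this sketch, not the speaker's implementation.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor,
                  temperature: float = 0.1) -> torch.Tensor:
    """Contrastive loss between two views of the same batch.

    z1, z2: (batch, dim) embeddings of two augmented views; row i of z1
    and row i of z2 form the positive pair, all other rows serve as
    negatives. (Hypothetical helper for illustration.)
    """
    z1 = F.normalize(z1, dim=1)  # unit-length embeddings
    z2 = F.normalize(z2, dim=1)
    # (batch, batch) cosine-similarity matrix, scaled by temperature
    logits = z1 @ z2.t() / temperature
    # positives lie on the diagonal
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

# Example: embeddings of two augmented views of an 8-image batch.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = info_nce_loss(z1, z2)
```

Under the InfoMin principle described in the abstract, the augmentations that generate the two views control how much mutual information the views share; the talk's hypothesis is that the best views share as little information as possible beyond what the downstream task needs.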

Bio

Yonglong Tian is currently a PhD student at MIT, working with Prof. Phillip Isola and Prof. Joshua Tenenbaum. His general research interests lie at the intersection of machine perception, learning, and reasoning, mainly from the perspective of vision. These days he focuses on unsupervised representation learning and visual program induction.

