Sungsoo Kim's Blog

A Tutorial on Causal Representation Learning

19 October 2023


Article Source: A Tutorial on Causal Representation Learning

Abstract

Causal Representation Learning (CRL) is an emerging area of research that seeks to address an important gap in the field of causality: how can we learn causal models and mechanisms without direct measurements of all the variables? To this end, CRL combines recent advances in machine learning with new assumptions that guarantee that causal variables can be identified up to some indeterminacies from low-level observations such as text, images or biological measurements. In this tutorial, we will review the broad classes of assumptions driving CRL. We strive to build strong intuitions about the core technical problems underpinning CRL and draw connections across different results. We will conclude the tutorial by discussing open questions for CRL, motivated by the kind of methods we would need if we wanted to extend causal models to scientific discovery.
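
Concretely, the setting studied in CRL can be pictured as a latent structural causal model whose variables are never observed directly; the learner only sees a nonlinear mixture of them. The short sketch below (an illustrative assumption, not code from the tutorial) simulates such a data-generating process with a hypothetical two-variable SCM and a random mixing function, to make the identification problem concrete.

```python
# A minimal sketch (not from the tutorial) of the data-generating process CRL
# assumes: latent causal variables follow a structural causal model (SCM), and
# we only observe a nonlinear mixture of them. Names, dimensions, and the
# two-variable graph here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 1000

# --- Latent SCM over causal variables z1 -> z2 (hypothetical structure) ---
z1 = rng.normal(size=n_samples)                   # exogenous cause
z2 = 0.8 * z1 + 0.5 * rng.normal(size=n_samples)  # effect of z1 plus noise
z = np.stack([z1, z2], axis=1)                    # true causal variables (unobserved)

# --- Unknown nonlinear mixing into low-level observations x = g(z) ---
# Stand-in for pixels, text embeddings, or biological measurements.
W = rng.normal(size=(2, 10))
x = np.tanh(z @ W)                                # what the learner actually sees

# CRL asks: from x alone (plus assumptions such as interventions, sparsity, or
# multiple environments), can we recover z up to indeterminacies like a
# permutation and elementwise rescaling, together with the graph z1 -> z2?
print(x.shape)  # (1000, 10)
```

The point of the sketch is that the mixing function and the latent variables are both unknown; the assumptions reviewed in the tutorial are what rule out all but a small class of equivalent solutions.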
