Causal Representation Learning
Abstract
Prof. Kun Zhang, currently on leave from Carnegie Mellon University (CMU), is a professor and the acting chair of the Machine Learning Department and the director of the Center for Integrative AI at Mohamed bin Zayed University of Artificial Intelligence (MBZUAI). In this talk, he gives an overview of causal representation learning and how it has evolved.
Causality is a fundamental notion in science, engineering, and even in machine learning. Causal representation learning aims to reveal the underlying high-level hidden causal variables and their relations. It can be seen as a special case of causal discovery, whose goal is to recover the underlying causal structure or causal model from observational data. The modularity property of a causal system implies the properties of minimal change and independent change of causal representations, and in this talk, we show how such properties make it possible to recover the underlying causal representations from observational data with identifiability guarantees: under appropriate assumptions, the learned representations are consistent with the underlying causal process. Various problem settings are considered, with independent and identically distributed (i.i.d.) data, temporal data, or data with distribution shift as input. With various examples and applications, we demonstrate when identifiable causal representation learning can benefit from flexible deep learning and when suitable parametric assumptions have to be imposed on the causal process.
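To make the idea of identifiability under parametric assumptions concrete, here is a minimal sketch (not from the talk itself) of one classic result the abstract alludes to: in a linear model with non-Gaussian noise, the causal direction between two observed variables is identifiable from observational data, because only in the true direction is the regression residual independent of the regressor (the LiNGAM-style argument). The dependence score below, a correlation with a cubed regressor, is a deliberately crude stand-in for a proper independence test.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy linear causal model with non-Gaussian (uniform) noise: x -> y.
x = rng.uniform(-1, 1, n)
y = 2.0 * x + rng.uniform(-1, 1, n)

def residual_dependence(cause, effect):
    """Regress effect on cause by OLS, then score how dependent the
    residual remains on the candidate cause. Correlation with the cube
    of the regressor is used as a crude higher-order dependence score
    (plain correlation with the regressor is zero by construction)."""
    slope = np.cov(cause, effect)[0, 1] / np.var(cause)
    resid = effect - slope * cause
    return abs(np.corrcoef(resid, cause**3)[0, 1])

dep_forward = residual_dependence(x, y)   # fit y = a*x + e (true direction)
dep_backward = residual_dependence(y, x)  # fit x = b*y + e (reverse direction)

# In the true direction the residual is (nearly) independent of the
# regressor; in the reverse direction it is not, so the asymmetry
# reveals the causal direction.
print(dep_forward < dep_backward)  # prints True
```

With Gaussian noise this asymmetry vanishes, which is exactly why the parametric assumption (non-Gaussianity here) is what buys identifiability.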
PyWhy Causality in Practice: a talk series focusing on causality and machine learning, especially from a practical perspective. It features tutorials and presentations about PyWhy libraries, as well as talks by external speakers working on causal inference. https://www.pywhy.org/community/videos