
Bayesian Principles for Machine Learning


19 August 2023


Article Source: Bayesian Principles for Machine Learning

Abstract

Humans and animals have a natural ability to autonomously learn and quickly adapt to their surroundings. How can we design machines that do the same? In this talk, I will present Bayesian principles to bridge this gap between humans and machines. I will show that a wide variety of machine-learning algorithms are instances of a single learning rule derived from Bayesian principles. The rule unravels a dual perspective, yielding new mechanisms for knowledge transfer in learning machines. My hope is to convince the audience that Bayesian principles are indispensable for an AI that learns as efficiently as we do.
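To make the "single learning rule" concrete, below is a minimal NumPy sketch of one of its instances: a variational online-Newton-style update for logistic regression with a diagonal Gaussian posterior, in the spirit of the Bayesian learning rule paper listed in the references. The toy data, hyperparameters, and the diagonal Gauss-Newton Hessian approximation are illustrative assumptions of this sketch, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy logistic-regression data (hypothetical, for illustration only).
n, d = 200, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.5, -2.0, 0.5])
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)

def grad_and_hess_diag(w):
    """Gradient of the negative log-likelihood and a diagonal
    Gauss-Newton approximation of its Hessian."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    g = X.T @ (p - y)              # exact gradient
    h = (X**2).T @ (p * (1 - p))   # diagonal Gauss-Newton / Fisher approx
    return g, h

# Diagonal Gaussian posterior q(w) = N(m, diag(1/s)); Gaussian prior
# with precision delta. rho is the step size on the natural parameters.
m, s, delta = np.zeros(d), np.ones(d), 1.0
rho = 0.1

for t in range(500):
    # Monte Carlo estimate: sample w ~ q and evaluate gradient/Hessian there.
    w = m + rng.normal(size=d) / np.sqrt(s)
    g, h = grad_and_hess_diag(w)
    # Natural-gradient updates on the Gaussian's natural parameters:
    s = (1 - rho) * s + rho * (h + delta)   # precision update
    m = m - rho * (g + delta * m) / s       # mean update, preconditioned by s

print("posterior mean:", m)                 # approaches the MAP estimate
print("posterior std :", 1 / np.sqrt(s))
```

Setting the precision update aside (i.e., freezing s) reduces the mean update to a preconditioned SGD step, which is one way the rule recovers familiar deep-learning optimizers as special cases.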

Bio

Emtiyaz Khan (also known as Emti) is a team leader at the RIKEN Center for Advanced Intelligence Project (AIP) in Tokyo, where he leads the Approximate Bayesian Inference Team. He is also an external professor at the Okinawa Institute of Science and Technology (OIST). Previously, he was a postdoc and then a scientist at École Polytechnique Fédérale de Lausanne (EPFL), where he also taught two large machine learning courses and received a teaching award. He completed his PhD in machine learning at the University of British Columbia in 2012. The main goal of Emti's research is to understand the principles of learning from data and use them to develop algorithms that can learn like living beings. For the past 10 years, his work has focused on developing Bayesian methods that could lead to such fundamental principles. The Approximate Bayesian Inference Team now continues to use these principles, as well as derive new ones, to solve real-world problems.

References for the first part

  • The Bayesian Learning Rule, (Preprint) M.E. Khan, H. Rue.
  • Practical Deep Learning with Bayesian Principles, (NeurIPS 2019) K. Osawa, S. Swaroop, A. Jain, R. Eschenhagen, R.E. Turner, R. Yokota, M.E. Khan.
  • Conjugate-Computation Variational Inference: Converting Variational Inference in Non-Conjugate Models to Inferences in Conjugate Models, (AISTATS 2017) M.E. Khan, W. Lin.

References for the second part

  • Knowledge-Adaptation Priors, (Preprint) M.E. Khan, S. Swaroop.
  • Continual Deep Learning by Functional Regularisation of Memorable Past, (NeurIPS 2020) P. Pan, S. Swaroop, A. Immer, R. Eschenhagen, R.E. Turner, M.E. Khan.
  • Approximate Inference Turns Deep Networks into Gaussian Processes, (NeurIPS 2019) M.E. Khan, A. Immer, E. Abedi, M. Korzepa.
