Principles and Applications of Relational Inductive Biases


11 November 2020


Article Source


Principles and Applications of Relational Inductive Biases in Deep Learning

  • Speaker: Kelsey Allen, MIT

Abstract

Common intuition posits that deep learning has succeeded because of its ability to assume very little structure in the data it receives, instead learning that structure from large numbers of training examples. However, recent work has attempted to bring structure back into deep learning, via a new set of models known as “graph networks”. Graph networks allow “relational inductive biases” to be introduced into learning, i.e., explicit reasoning about relationships between entities. In this talk, I will introduce graph networks and one application of them to a physical reasoning task in which an agent and human participants were asked to glue together pairs of blocks to stabilize a tower. We will go through DeepMind’s recently released graph networks library (implemented in TensorFlow) to see how to set up different graph models, and train some simple models on simple tasks.
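As a taste of what the tutorial covers, here is a minimal sketch of the graph_nets API, following the library's published README example. The toy graph below (node, edge, and global features, plus sender/receiver indices) is invented for illustration; the `GraphNetwork` module applies its edge, node, and global MLPs in the standard message-passing order.

```python
import graph_nets as gn
import numpy as np
import sonnet as snt

# A toy graph with 3 nodes and 2 directed edges, made up for illustration.
# Node/edge/global features are small float vectors; "senders" and
# "receivers" give the endpoints of each edge as node indices.
graph_dict = {
    "nodes": np.random.randn(3, 4).astype(np.float32),
    "edges": np.random.randn(2, 5).astype(np.float32),
    "senders": np.array([0, 1]),
    "receivers": np.array([1, 2]),
    "globals": np.random.randn(6).astype(np.float32),
}
input_graphs = gn.utils_tf.data_dicts_to_graphs_tuple([graph_dict])

# A full graph network: update edges, then nodes, then globals,
# each with its own small MLP.
graph_net = gn.modules.GraphNetwork(
    edge_model_fn=lambda: snt.nets.MLP([32, 32]),
    node_model_fn=lambda: snt.nets.MLP([32, 32]),
    global_model_fn=lambda: snt.nets.MLP([32, 32]))

output_graphs = graph_net(input_graphs)  # another GraphsTuple
```

The library also ships more specialized modules (e.g., `InteractionNetwork`, `RelationNetwork`) that use only a subset of these updates, which is one way of dialing the relational inductive bias up or down.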

Speaker Bio

Kelsey Allen is a PhD student working with Josh Tenenbaum on the problems of structured physical reasoning, planning, and learning from limited data.

Computational tutorial references and videos can be found on our Stellar site or on the CBMM Learning Hub.

Discovering Symbolic Models from Deep Learning with Inductive Biases

Neural networks are very good at predicting the numerical outputs of a system, but not very good at deriving the discrete symbolic equations that govern many physical systems. This paper combines graph networks with symbolic regression and shows that the strong inductive biases of these models can be used to derive accurate symbolic equations from observational data.
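The approach, in outline: train a graph network whose edge model produces low-dimensional messages, then fit compact algebraic expressions to those messages as a function of the pairwise inputs. The sketch below illustrates only the second, symbolic-regression step, and substitutes a simple least-squares fit over a hand-picked library of candidate terms for the dedicated symbolic-regression engine used in the paper; the force law and the term library are invented for the example.

```python
import numpy as np

# Stand-in for a learned edge message: a scalar that (unknown to the
# fitting step) follows an inverse-square law in the pairwise distance r.
rng = np.random.default_rng(0)
r = rng.uniform(0.5, 3.0, size=500)
message = -1.5 / r**2 + 0.01 * rng.standard_normal(500)

# Hand-picked library of candidate terms for the fit.
library = {
    "1": np.ones_like(r),
    "r": r,
    "1/r": 1.0 / r,
    "1/r^2": 1.0 / r**2,
    "r^2": r**2,
}
names = list(library)
X = np.stack([library[n] for n in names], axis=1)

# Least squares, then hard-threshold small coefficients and refit,
# so the recovered expression stays sparse and readable.
coef, *_ = np.linalg.lstsq(X, message, rcond=None)
keep = np.abs(coef) > 0.05
coef_sparse, *_ = np.linalg.lstsq(X[:, keep], message, rcond=None)

terms = [f"{c:+.3f}*{n}" for c, n in zip(coef_sparse, np.array(names)[keep])]
print("recovered message ≈", " ".join(terms))  # expect roughly -1.500*1/r^2
```

The same recipe applied to the messages of a trained graph network is what lets the paper read off closed-form interaction laws instead of an opaque learned function.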

