
Task structure and generalization in graph neural networks


16 June 2022


Article Source

  • Deep Learning and Combinatorial Optimization 2021
  • “Task structure and generalization in graph neural networks”
  • Stefanie Jegelka - Massachusetts Institute of Technology
  • Institute for Pure and Applied Mathematics, UCLA
  • February 25, 2021
  • For more information: https://www.ipam.ucla.edu/dlc2021

Abstract

Graph Neural Networks (GNNs) have become a popular tool for learning certain algorithmic tasks, but their generalization properties are less well understood. Empirically, we observe an interplay between the structure of the task — or target algorithm — and the inductive biases of the architecture: although many networks may be able to represent a task, some architectures learn it better than others. In this talk, I will present an approach to formalizing this relationship, along with its empirical and theoretical implications for generalization both within and outside the training distribution. This talk is based on joint work with Keyulu Xu, Jingling Li, Mozhi Zhang, Simon S. Du and Ken-ichi Kawarabayashi.
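To make the idea of task/architecture interplay concrete, here is a minimal sketch (not from the talk, all names such as `gnn_step` and `agg` are illustrative): a single message-passing step whose aggregator can be swapped. With min-aggregation the update mirrors one relaxation step of Bellman-Ford for shortest paths, which is the kind of structural match between architecture and target algorithm that the talk discusses.

```python
# Minimal sketch (illustrative, not the talk's code): one message-passing
# round with a swappable aggregator.  A min-aggregation update coincides with
# a Bellman-Ford relaxation step, so it "aligns" with shortest-path tasks,
# whereas a sum aggregator does not.
import numpy as np

def gnn_step(h, edges, weights, agg):
    """One round of message passing.

    h       : (n,) current node states (here: distance estimates)
    edges   : list of directed edges (u, v)
    weights : dict mapping (u, v) -> edge weight
    agg     : 'min' or 'sum' aggregation over incoming messages
    """
    n = len(h)
    messages = [[] for _ in range(n)]
    for (u, v) in edges:
        # Message along edge (u, v): neighbour state plus edge weight --
        # exactly the term relaxed by Bellman-Ford.
        messages[v].append(h[u] + weights[(u, v)])
    new_h = h.copy()
    for v in range(n):
        if messages[v]:
            if agg == 'min':
                new_h[v] = min(h[v], min(messages[v]))
            else:
                new_h[v] = sum(messages[v])
    return new_h

# Toy graph: shortest distances from node 0.
edges = [(0, 1), (1, 2), (0, 2)]
weights = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 5.0}
h = np.array([0.0, np.inf, np.inf])
for _ in range(3):
    h = gnn_step(h, edges, weights, agg='min')
print(h)  # [0. 1. 2.] -- min-aggregation reproduces Bellman-Ford exactly
```

A learned GNN layer replaces the fixed message and aggregation with trainable functions, but the point of the sketch stands: when the aggregator already matches the target algorithm's structure, each learned component only needs to fit a simple function, which is one intuition behind why some architectures learn (and generalize on) such tasks better than others.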
