GNNs with Learnable Structural and Positional Representation


13 September 2021


Article Source: Graph Neural Networks with Learnable Structural and Positional Representation

Abstract

Graph neural networks (GNNs) have become the standard toolkit for analyzing and learning from data on graphs. GNNs have been applied to domains ranging from quantum chemistry and recommender systems to knowledge graphs and natural language processing. A major issue with arbitrary graphs is the absence of canonical positional information for nodes, which limits the representation power of GNNs to distinguish, e.g., isomorphic nodes and other graph symmetries. One approach to this issue is to introduce a positional encoding (PE) of the nodes and inject it into the input layer, as in Transformers. A possible graph PE is the set of graph Laplacian eigenvectors, but their sign is not uniquely defined. In this work, we propose to decouple structural and positional representations to make it easy for the network to learn these two properties. We show that any GNN can be augmented with learnable PE, improving performance. We investigate both sparse and fully-connected (Transformer-like) GNNs, and observe that learning PE is useful for both classes.
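To make the Laplacian-eigenvector PE mentioned above concrete, here is a minimal sketch (not the authors' code) of how such an encoding can be computed with NumPy: the k lowest non-trivial eigenvectors of the symmetric normalized Laplacian serve as node coordinates, and a random sign flip reflects the sign ambiguity noted in the abstract. The function name `laplacian_pe` and the choice of k are illustrative assumptions.

```python
import numpy as np

def laplacian_pe(adj: np.ndarray, k: int) -> np.ndarray:
    """Return k non-trivial eigenvectors of the symmetric normalized
    Laplacian L = I - D^{-1/2} A D^{-1/2} as node positional encodings."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    mask = deg > 0
    d_inv_sqrt[mask] = deg[mask] ** -0.5
    lap = np.eye(n) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]

    # Eigenvectors sorted by ascending eigenvalue; drop the trivial first one.
    eigvals, eigvecs = np.linalg.eigh(lap)
    pe = eigvecs[:, 1:k + 1]

    # Each eigenvector is defined only up to sign; randomly flipping signs
    # (e.g. at each training epoch) is one common way to expose the model
    # to this ambiguity.
    pe = pe * np.random.choice([-1.0, 1.0], size=(1, pe.shape[1]))
    return pe

# Usage on a tiny 4-cycle graph: one 2-dimensional PE per node.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(laplacian_pe(A, k=2))  # shape (4, 2)
```

In the learnable-PE setting described in the paper, such eigenvector features would only serve as the initial positional input, which the network then updates alongside the structural node features.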

