Transferability of Graph Neural Networks using Graphon and Sampling Theories

Martina Neuman
University of Vienna

Graph neural networks (GNNs) have become powerful tools for processing graph-based information in various domains. A desirable property of GNNs is transferability, where a trained network can take in data from a different graph without retraining and still retain its accuracy. A recent approach to capturing the transferability of GNNs is through the use of graphons, which are symmetric, measurable functions representing the limits of sequences of large, dense graphs. In this talk, I will present an explicit two-layer graphon neural network (WNN) architecture for approximating bandlimited signals, and explain how a related GNN guarantees transferability across both deterministic weighted graphs and simple random graphs. The proposed WNN and GNN architectures overcome issues related to the curse of dimensionality and offer practical solutions for handling graph data of varying sizes.
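
The following is a minimal, self-contained sketch of the transferability idea, not the talk's actual architecture: the particular graphon W(u, v) = 1 - max(u, v), the polynomial graph filters, and the "trained" coefficients h1, h2 are all illustrative assumptions. It shows the one mechanism the abstract relies on: the same size-independent network weights are reused on simple random graphs of different sizes sampled from one graphon, and the output at a fixed latent location stabilizes as the graphs grow.

    import numpy as np

    rng = np.random.default_rng(0)

    def graphon(x):
        # Example graphon W(u, v) = 1 - max(u, v): symmetric, measurable on [0,1]^2.
        return 1.0 - np.maximum.outer(x, x)

    def sample_simple_graph(n):
        # Simple random graph: latent points x_i ~ U[0,1], edge ij w.p. W(x_i, x_j).
        x = np.sort(rng.uniform(size=n))
        upper = np.triu(rng.uniform(size=(n, n)) < graphon(x), k=1)
        A = (upper + upper.T).astype(float)  # symmetric adjacency, no self-loops
        return x, A

    def graph_filter(S, z, h):
        # Polynomial graph filter: sum_k h[k] * S^k z.
        out = np.zeros_like(z)
        power = z.copy()
        for c in h:
            out += c * power
            power = S @ power
        return out

    def two_layer_gnn(A, f, h1, h2):
        # Two-layer GNN with size-independent coefficients h1, h2. The shift
        # S = A / n discretizes the graphon's integral operator, so the same
        # coefficients remain meaningful as n changes.
        S = A / A.shape[0]
        hidden = np.maximum(graph_filter(S, f, h1), 0.0)  # filter + ReLU
        return graph_filter(S, hidden, h2)

    # Hypothetical "trained" coefficients, reused verbatim at every graph size.
    h1 = np.array([0.2, 1.0, -0.5])
    h2 = np.array([0.0, 1.5, 0.3])

    for n in (200, 800, 3200):
        x, A = sample_simple_graph(n)
        f = np.cos(2 * np.pi * x)  # a smooth, low-frequency input signal
        y = two_layer_gnn(A, f, h1, h2)
        print(n, y[n // 2])  # output near x ~ 0.5 stabilizes as n grows

The normalization S = A / n is what makes this work: since edges appear with probability W(x_i, x_j), the matrix A / n concentrates around the graphon's integral operator, so a filter with fixed coefficients computes approximately the same map on every sufficiently large graph drawn from the same graphon.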