All Dates/Times are Australian Eastern Standard Time (AEST)

Technical Program

Paper Detail

Paper ID D4-S5-T3.1
Paper Title On Dimension in Graph Convolutional Networks for Distinguishing Random Graph Models
Authors Abram Magner, University at Albany, State University of New York, United States
Session D4-S5-T3: Network Inference
Chaired Session: Thursday, 15 July, 23:20 - 23:40
Engagement Session: Thursday, 15 July, 23:40 - 00:00
Abstract Graph convolutional networks are a popular representation learning method for graphs, wherein an input graph is mapped to a $d$-dimensional \emph{embedding vector}, yielding a latent representation. We continue the project of theoretically elucidating the roles of various aspects of GCN architectures by studying the power and limitations of GCNs in distinguishing random graph models based on embedding vectors of sample graphs. In the present work, we show how the embedding dimension affects the set of pairs of models that can be distinguished from one another. We also consider the application of GCNs to multi-hypothesis testing and use channel capacity results to show a lower bound on how the embedding dimension must scale with respect to the number of hypotheses and the signal-to-noise ratio in order to guarantee a probability of error tending to $0$.
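To make the abstract's setup concrete, here is a minimal illustrative sketch (not the paper's construction) of a graph convolutional network mapping an input graph to a $d$-dimensional embedding vector: each layer applies normalized message passing followed by a nonlinearity, and the node states are pooled into a single vector whose width $d$ is the embedding dimension the paper studies. The normalization scheme, pooling choice, and weight shapes below are assumptions made for illustration.

```python
import numpy as np

def gcn_embedding(A, X, weights):
    """Map a graph to an embedding vector with simple GCN layers.

    A: (n, n) adjacency matrix of the input graph.
    X: (n, f0) initial node feature matrix.
    weights: list of weight matrices; the last one's column count
    is the embedding dimension d.
    (Hypothetical helper for illustration only.)
    """
    n = A.shape[0]
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(n)
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    H = X
    for W in weights:
        # One message-passing layer with a ReLU nonlinearity
        H = np.maximum(A_norm @ H @ W, 0.0)
    # Mean-pool node states into a single d-dimensional embedding
    return H.mean(axis=0)

# Usage: embed a 3-node path graph with d = 4
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.ones((3, 2))
rng = np.random.default_rng(0)
weights = [rng.standard_normal((2, 8)), rng.standard_normal((8, 4))]
z = gcn_embedding(A, X, weights)
print(z.shape)  # the embedding lives in R^4
```

Distinguishing two random graph models then amounts to comparing the distributions of such embedding vectors under each model, which is where the embedding dimension $d$ enters the paper's analysis.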