Charting the Design Space of Neural Graph Representations for Subgraph Matching
By: Vaibhav Raj, Indradyumna Roy, Ashwin Ramachandran, and more
Potential Business Impact:
Helps computers find query patterns inside large graph data (knowledge graphs, molecules, circuits) faster and more accurately.
Subgraph matching is vital in knowledge graph (KG) question answering, molecule design, scene graph search, and code and circuit search, among other applications. Neural methods have shown promising results for subgraph matching. Our study of recent systems suggests refactoring them into a unified design space for graph matching networks. Existing methods occupy only a few isolated patches in this space, which remains largely uncharted. We undertake the first comprehensive exploration of this space, spanning axes such as attention-based vs. soft permutation-based interaction between query and corpus graphs, aligning nodes vs. edges, and the form of the final scoring network that integrates neural representations of the graphs. Our extensive experiments reveal that judicious and hitherto-unexplored combinations of choices in this space lead to large performance benefits. Beyond better performance, our study uncovers valuable insights and establishes general design principles for neural graph representation and interaction, which may be of wider interest.
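To make two of the abstract's design axes concrete, here is a minimal, hedged sketch (not the authors' code): it contrasts cross-graph attention with a Sinkhorn-style soft permutation for query-corpus interaction, and combines either with a simple hinge-style scoring of aligned node embeddings. All function names, shapes, and hyperparameters are illustrative assumptions, not the paper's actual architecture.

```python
# Illustrative sketch of two interaction choices from the design space:
# (a) cross-graph attention vs. (b) a Sinkhorn-based soft permutation,
# each followed by a simple hinge-style (order-embedding flavored) score.
# Everything here is an assumption for exposition, not the authors' method.
import torch
import torch.nn.functional as F

def attention_interaction(Hq, Hc):
    """Cross-attend query node embeddings Hq (nq x d) to corpus node
    embeddings Hc (nc x d); returns one attended corpus vector per query node."""
    logits = Hq @ Hc.T / Hq.shape[-1] ** 0.5       # (nq, nc) scaled similarities
    attn = F.softmax(logits, dim=-1)               # row-stochastic attention weights
    return attn @ Hc                               # (nq, d) attended corpus embeddings

def sinkhorn_interaction(Hq, Hc, n_iters=20, tau=0.1):
    """Approximate a soft permutation between query and corpus nodes via
    Sinkhorn normalization of a similarity matrix (assumes nq == nc here)."""
    log_alpha = (Hq @ Hc.T) / tau                  # (n, n) temperature-scaled similarities
    for _ in range(n_iters):                       # alternate row/column normalization in log space
        log_alpha = log_alpha - torch.logsumexp(log_alpha, dim=1, keepdim=True)
        log_alpha = log_alpha - torch.logsumexp(log_alpha, dim=0, keepdim=True)
    P = log_alpha.exp()                            # approximately doubly-stochastic "soft permutation"
    return P @ Hc                                  # corpus nodes re-ordered toward the query

def hinge_score(Hq, Hc_aligned):
    """Asymmetric subgraph-style score: penalize query-node features that are
    not covered by the aligned corpus embeddings."""
    return -torch.relu(Hq - Hc_aligned).sum()

# Toy usage with random node embeddings (5 nodes, 16 dimensions each).
Hq, Hc = torch.randn(5, 16), torch.randn(5, 16)
print(hinge_score(Hq, attention_interaction(Hq, Hc)))
print(hinge_score(Hq, sinkhorn_interaction(Hq, Hc)))
```

The paper's design space also covers edge-level (rather than node-level) alignment and learned scoring networks; the snippet above is only meant to make the node-level interaction and scoring axes tangible.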
Similar Papers
Native Logical and Hierarchical Representations with Subspace Embeddings
Machine Learning (CS)
Computers understand words and their meanings better.
GraphMatch: Fusing Language and Graph Representations in a Dynamic Two-Sided Work Marketplace
Machine Learning (CS)
Finds best job matches faster by understanding words and connections.
A customizable inexact subgraph matching algorithm for attributed graphs
Data Structures and Algorithms
Finds hidden patterns in messy data relationships.