Unsupervised Prompting for Graph Neural Networks

Published: May 22, 2025 | arXiv ID: 2505.16903v1

By: Peyman Baghershahi, Sourav Medya

Potential Business Impact:

Teaches computers to learn from graphs without labeled examples.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Prompt tuning methods for Graph Neural Networks (GNNs) have become popular to address the semantic gap between pre-training and fine-tuning steps. However, existing GNN prompting methods rely on labeled data and involve lightweight fine-tuning for downstream tasks. Meanwhile, in-context learning methods for Large Language Models (LLMs) have shown promising performance with no parameter updating and no or minimal labeled data. Inspired by these approaches, in this work, we first introduce a challenging problem setup to evaluate GNN prompting methods. This setup encourages a prompting function to enhance a pre-trained GNN's generalization to a target dataset under covariate shift without updating the GNN's parameters and with no labeled data. Next, we propose a fully unsupervised prompting method based on consistency regularization through pseudo-labeling. We use two regularization techniques to align the prompted graphs' distribution with the original data and reduce biased predictions. Through extensive experiments under our problem setting, we demonstrate that our unsupervised approach outperforms the state-of-the-art prompting methods that have access to labels.
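The core idea in the abstract — keep the pre-trained GNN frozen, and optimize only a prompt on the input graph using the model's own pseudo-labels as a consistency signal — can be illustrated with a toy sketch. This is not the paper's actual method or regularizers; everything below (the one-layer GNN, the additive feature prompt `p`, the finite-difference updates) is an illustrative assumption, kept dependency-light with NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen "pre-trained GNN": one normalized aggregation step
# followed by a linear classifier. Weights are random stand-ins, not trained.
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)  # adjacency
A_hat = A + np.eye(3)                                         # self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))                      # degree norm
W = rng.normal(size=(4, 2))                                   # frozen weights
X = rng.normal(size=(3, 4))                                   # node features

def frozen_gnn(feats):
    """Aggregate neighbors, apply linear map, softmax. Parameters stay fixed."""
    logits = D_inv @ A_hat @ feats @ W
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

base_conf = frozen_gnn(X).max(axis=1).mean()  # prompt-free confidence

# Unsupervised prompting: learn only an additive feature prompt `p`, fitting
# it to the frozen model's own predictions (pseudo-labels) — no true labels,
# no GNN parameter updates.
p = np.zeros(4)
lr = 0.5
for _ in range(100):
    pseudo = frozen_gnn(X + p).argmax(axis=1)  # current pseudo-labels

    def loss(prompt):
        # Cross-entropy of prompted predictions against the pseudo-labels.
        q = frozen_gnn(X + prompt)
        return -np.log(q[np.arange(3), pseudo] + 1e-9).mean()

    # Finite-difference gradient on p keeps the sketch autograd-free.
    grad = np.array([
        (loss(p + 1e-4 * np.eye(4)[i]) - loss(p - 1e-4 * np.eye(4)[i])) / 2e-4
        for i in range(4)
    ])
    p -= lr * grad

final = frozen_gnn(X + p)  # predictions typically sharpen toward pseudo-labels
```

The paper's full method additionally regularizes the prompted graphs' distribution toward the original data and counteracts biased predictions; this sketch shows only the frozen-model, pseudo-label-driven prompt optimization that those regularizers build on.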

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Repos / Data Links

Page Count
25 pages

Category
Computer Science:
Machine Learning (CS)