Training-Free Query Optimization via LLM-Based Plan Similarity
By: Nikita Vasilenko, Alexander Demin, Vladimir Boorlakov
Potential Business Impact:
Speeds up computer searches by learning from past ones.
Large language model (LLM) embeddings offer a promising new avenue for database query optimization. In this paper, we explore how pre-trained execution plan embeddings can guide SQL query execution without the need for additional model training. We introduce LLM-PM (LLM-based Plan Mapping), a framework that embeds the default execution plan of a query, finds its k nearest neighbors among previously executed plans, and recommends database hintsets based on neighborhood voting. A lightweight consistency check validates the selected hint, while a fallback mechanism searches the full hint space when needed. Evaluated on the JOB-CEB benchmark using OpenGauss, LLM-PM achieves an average query latency reduction of 21%. This work highlights the potential of LLM-powered embeddings to deliver practical improvements in query performance and opens new directions for training-free, embedding-based optimizer guidance systems.
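The core pipeline the abstract describes (embed the default plan, retrieve the k nearest previously executed plans, then vote on a hintset) can be sketched roughly as follows. This is an illustrative approximation only: the function names, the cosine-similarity metric, the toy embeddings, and the example hintset strings are all assumptions, not the authors' implementation.

```python
# Hypothetical sketch of LLM-PM's neighborhood-voting step.
# Assumes plan embeddings are plain float vectors; the real system
# would obtain them from a pre-trained LLM encoder of execution plans.
import math
from collections import Counter

def cosine_similarity(a, b):
    # Standard cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend_hintset(plan_embedding, history, k=5):
    """history: list of (embedding, hintset) pairs from past executions.

    Returns the hintset most common among the k most similar past plans.
    """
    neighbors = sorted(
        history,
        key=lambda item: cosine_similarity(plan_embedding, item[0]),
        reverse=True,
    )[:k]
    votes = Counter(hintset for _, hintset in neighbors)
    return votes.most_common(1)[0][0]

# Toy usage: two clusters of past plan embeddings with known-good hintsets.
history = [
    ([1.0, 0.0], "enable_nestloop=off"),
    ([0.9, 0.1], "enable_nestloop=off"),
    ([0.0, 1.0], "enable_hashjoin=off"),
]
print(recommend_hintset([0.95, 0.05], history, k=3))
```

In the full framework, the returned hintset would then pass the consistency check mentioned in the abstract, with a fallback search over the full hint space if the check fails.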
Similar Papers
A Query Optimization Method Utilizing Large Language Models
Databases
Makes computer searches find answers much faster.
Idea2Plan: Exploring AI-Powered Research Planning
Computation and Language
Helps computers plan science experiments from ideas.
No Free Lunch in Active Learning: LLM Embedding Quality Dictates Query Strategy Success
Computation and Language
Teaches computers to learn faster with smart word guesses.