DistRAG: Towards Distance-Based Spatial Reasoning in LLMs
By: Nicole R Schneider, Nandini Ramachandran, Kent O'Sullivan, and more
Potential Business Impact:
Helps computers know how far apart places are.
Many real-world tasks where Large Language Models (LLMs) can be used require spatial reasoning, like Point of Interest (POI) recommendation and itinerary planning. However, on their own, LLMs lack reliable spatial reasoning capabilities, especially about distances. To address this problem, we develop a novel approach, DistRAG, that enables an LLM to retrieve relevant spatial information not explicitly learned during training. Our method encodes the geodesic distances between cities and towns in a graph and retrieves a context subgraph relevant to the question. This enables an LLM to answer distance-based reasoning questions that it otherwise cannot answer. Given the vast array of possible places an LLM could be asked about, DistRAG offers a flexible first step towards providing a rudimentary 'world model' to complement the linguistic knowledge held in LLMs.
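To make the retrieval idea concrete, here is a minimal sketch in Python: a graph whose nodes are cities and whose edges carry geodesic distances, from which a question-relevant subgraph is serialized into the LLM prompt. The toy gazetteer, the haversine helper, the name-matching retriever, and the prompt format are illustrative assumptions for this sketch, not the authors' implementation.

# Minimal sketch of distance-graph retrieval (assumed details, not the paper's code).
import math
import networkx as nx

def haversine_km(lat1, lon1, lat2, lon2):
    """Approximate geodesic (great-circle) distance in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Toy gazetteer of (lat, lon) coordinates; example data only.
CITIES = {
    "Paris": (48.8566, 2.3522),
    "Berlin": (52.5200, 13.4050),
    "Madrid": (40.4168, -3.7038),
    "Rome": (41.9028, 12.4964),
}

# Build a graph whose edge weights are pairwise geodesic distances.
G = nx.Graph()
for a in CITIES:
    for b in CITIES:
        if a < b:
            d = haversine_km(*CITIES[a], *CITIES[b])
            G.add_edge(a, b, distance_km=round(d, 1))

def retrieve_context(question: str) -> str:
    """Return a textual subgraph of distances for cities named in the question."""
    mentioned = [c for c in CITIES if c.lower() in question.lower()]
    sub = G.subgraph(mentioned)
    return "\n".join(
        f"{u} -- {v}: {data['distance_km']} km"
        for u, v, data in sub.edges(data=True)
    )

question = "Which city is closer to Paris: Berlin or Madrid?"
prompt = f"Use these distances:\n{retrieve_context(question)}\n\nQuestion: {question}"
print(prompt)
# The assembled prompt (context subgraph + question) would then be sent to the LLM.

In this sketch, retrieval is a simple string match against city names; any entity-linking or subgraph-selection strategy could be substituted while keeping the same graph-as-context pattern.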
Similar Papers
Spatially-Enhanced Retrieval-Augmented Generation for Walkability and Urban Discovery
Information Retrieval
Plans walking tours with city details.
DRAG: Distilling RAG for SLMs from LLMs to Transfer Knowledge and Mitigate Hallucination via Evidence and Graph-based Distillation
Computation and Language
Makes small AI smarter and more truthful.
Bilateral Spatial Reasoning about Street Networks: Graph-based RAG with Qualitative Spatial Representations
Artificial Intelligence
Helps walking directions understand turns.