Structured Multi-Step Reasoning for Entity Matching Using Large Language Model

Published: November 28, 2025 | arXiv ID: 2511.22832v1

By: Rohan Bopardikar, Jin Wang, Jia Zou

Potential Business Impact:

Helps systems decide more reliably when two data records describe the same real-world entity, improving data cleaning and integration.

Business Areas:
Semantic Search, Internet Services

Entity matching is a fundamental task in data cleaning and data integration. With the rapid adoption of large language models (LLMs), recent studies have explored zero-shot and few-shot prompting to improve entity matching accuracy. However, most existing approaches rely on single-step prompting and offer limited investigation into structured reasoning strategies. In this work, we investigate how to enhance LLM-based entity matching by decomposing the matching process into multiple explicit reasoning stages. We propose a three-step framework that first identifies matched and unmatched tokens between two records, then determines the attributes most influential to the matching decision, and finally predicts whether the records refer to the same real-world entity. In addition, we explore a debate-based strategy that contrasts supporting and opposing arguments to improve decision robustness. We evaluate our approaches against multiple existing baselines on several real-world entity matching benchmark datasets. Experimental results demonstrate that structured multi-step reasoning can improve matching performance in several cases, while also highlighting remaining challenges and opportunities for further refinement of reasoning-guided LLM approaches.
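
The three-step framework and the debate-based variant described above can be pictured as a small prompting pipeline. The sketch below is a hypothetical reading of the abstract, not the authors' code: the call_llm helper, the prompt wording, and the record serialization are all illustrative assumptions that a reader would wire to their own LLM backend.

```python
# Hypothetical sketch of the multi-step reasoning pipeline described in the abstract.
# Prompt wording, helper names, and call_llm() are assumptions, not the paper's exact prompts.

def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to an LLM of your choice and return its text reply."""
    raise NotImplementedError("Wire this to your LLM client (e.g., an OpenAI-compatible API).")

def serialize(record: dict) -> str:
    """Flatten a record's attribute/value pairs into a single prompt-friendly string."""
    return "; ".join(f"{attr}: {val}" for attr, val in record.items())

def match_entities(record_a: dict, record_b: dict) -> bool:
    """Three explicit reasoning stages: token alignment, attribute importance, final decision."""
    a, b = serialize(record_a), serialize(record_b)

    # Step 1: identify matched and unmatched tokens between the two records.
    token_analysis = call_llm(
        f"Record A: {a}\nRecord B: {b}\n"
        "List the tokens that appear in both records and the tokens unique to each."
    )

    # Step 2: determine which attributes are most influential to the matching decision.
    attribute_analysis = call_llm(
        f"Record A: {a}\nRecord B: {b}\nToken analysis: {token_analysis}\n"
        "Which attributes matter most for deciding whether these records match, and why?"
    )

    # Step 3: final prediction, conditioned on the intermediate reasoning.
    verdict = call_llm(
        f"Record A: {a}\nRecord B: {b}\nToken analysis: {token_analysis}\n"
        f"Key attributes: {attribute_analysis}\n"
        "Do these records refer to the same real-world entity? Answer 'yes' or 'no'."
    )
    return verdict.strip().lower().startswith("yes")

def debate_match(record_a: dict, record_b: dict) -> bool:
    """Debate-style variant: contrast supporting and opposing arguments before a final verdict."""
    a, b = serialize(record_a), serialize(record_b)
    pro = call_llm(f"Record A: {a}\nRecord B: {b}\nArgue that these records describe the same entity.")
    con = call_llm(f"Record A: {a}\nRecord B: {b}\nArgue that these records describe different entities.")
    verdict = call_llm(
        f"Supporting argument: {pro}\nOpposing argument: {con}\n"
        "Weighing both arguments, do the records refer to the same real-world entity? Answer 'yes' or 'no'."
    )
    return verdict.strip().lower().startswith("yes")
```

The key design point, as framed in the abstract, is that each stage's output is fed into the next prompt, so the final yes/no decision is conditioned on explicit intermediate reasoning rather than produced in a single step.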

Country of Origin
🇺🇸 United States

Page Count
6 pages

Category
Computer Science:
Databases