SCIR: A Self-Correcting Iterative Refinement Framework for Enhanced Information Extraction Based on Schema
By: Yushen Fang, Jianjun Li, Mingqian Ding, and more
Potential Business Impact:
Makes computers extract information more accurately and at lower cost.
Although large language model (LLM)-powered information extraction (IE) systems have shown impressive capabilities, current fine-tuning paradigms face two major limitations: high training costs and difficulty aligning with LLM preferences. To address these issues, we propose a novel universal IE paradigm, the Self-Correcting Iterative Refinement (SCIR) framework, along with a Multi-task Bilingual (Chinese-English) Self-Correcting (MBSC) dataset containing over 100,000 entries. The SCIR framework achieves plug-and-play compatibility with existing LLMs and IE systems through its Dual-Path Self-Correcting module and feedback-driven optimization, thereby significantly reducing training costs. Concurrently, the MBSC dataset tackles the challenge of preference alignment by indirectly distilling GPT-4's capabilities into IE result detection models. Experimental results demonstrate that SCIR outperforms state-of-the-art IE methods across three key tasks: named entity recognition, relation extraction, and event extraction, achieving a 5.27 percent average improvement in span-based Micro-F1 while reducing training costs by 87 percent compared to baseline approaches. These advancements not only enhance the flexibility and accuracy of IE systems but also pave the way for lightweight and efficient IE paradigms.
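The abstract outlines SCIR's core mechanism: an extractor produces schema-constrained results, a detection model (trained on MBSC-style data) critiques them, and the critique is fed back to drive another extraction pass. Below is a minimal Python sketch of such a loop. The function names, the toy rule-based extractor and detector, and the stopping criterion are all illustrative assumptions for exposition; the paper's actual modules are LLM-based and are not specified in this abstract.

```python
"""Minimal sketch of a self-correcting iterative refinement loop for
schema-based IE, in the spirit of SCIR. All names and the toy rules
here are assumptions, not the paper's implementation."""

from dataclasses import dataclass


@dataclass
class Extraction:
    label: str   # schema type, e.g. "PERSON"
    span: str    # surface text of the extracted span


def extract(text: str, schema: list[str], feedback: list[str]) -> list[Extraction]:
    # Stand-in for the backbone LLM: proposes candidate spans per schema
    # type, dropping any span that earlier feedback has rejected.
    candidates = [Extraction("PERSON", w) for w in text.split() if w.istitle()]
    return [c for c in candidates if c.label in schema and c.span not in feedback]


def detect(text: str, results: list[Extraction]) -> list[str]:
    # Stand-in for the MBSC-trained detection model: flags spans it
    # judges incorrect (here, a hard-coded toy rule).
    return [r.span for r in results if r.span in {"The", "Monday"}]


def scir_loop(text: str, schema: list[str], max_iters: int = 3) -> list[Extraction]:
    feedback: list[str] = []
    results = extract(text, schema, feedback)
    for _ in range(max_iters):
        errors = detect(text, results)
        if not errors:           # detector accepts the output: stop early
            return results
        feedback.extend(errors)  # critiques guide the next extraction pass
        results = extract(text, schema, feedback)
    return results


if __name__ == "__main__":
    text = "The report says Alice met Bob on Monday"
    print(scir_loop(text, schema=["PERSON"]))
    # first pass over-extracts "The" and "Monday"; the second pass,
    # conditioned on the detector's feedback, keeps only Alice and Bob
```

Because the detector, rather than the extractor, is the trained component, any off-the-shelf LLM can be slotted in as the extractor, which is consistent with the plug-and-play and training-cost claims in the abstract.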
Similar Papers
RAIR: Retrieval-Augmented Iterative Refinement for Chinese Spelling Correction
Computation and Language
Fixes spelling errors in special texts.
Learning from Self Critique and Refinement for Faithful LLM Summarization
Computation and Language
Teaches AI to write summaries without making things up.
SGIC: A Self-Guided Iterative Calibration Framework for RAG
Computation and Language
Makes AI smarter by checking its own answers.