Joint Enhancement of Relational Reasoning for Long-Context LLMs
By: Zhirui Chen, Wei Shen, Jiashui Huang, and more
Potential Business Impact:
Helps computers understand long stories better.
Despite significant progress, large language models (LLMs) still struggle with long contexts due to memory limitations and difficulty with complex, long-context tasks. Additionally, LLMs often suffer from a lack of transparency and are prone to producing hallucinations. To address these challenges, we propose JERR, a novel framework designed to enhance long-context comprehension via graph-based reasoning in LLMs. JERR integrates three key components: synopsis extraction, graph construction, and relational reasoning. First, synopses are extracted by chunking the text strategically, allowing the model to summarize and understand information more efficiently. Second, we build a directed acyclic graph (DAG) to resolve redundancy, ensuring logical consistency and clarity. Finally, we incorporate Monte Carlo Tree Search (MCTS) to help the model navigate complex reasoning paths, ensuring more accurate and interpretable outputs. This framework provides a novel solution that enables LLMs to handle extended contexts and complex reasoning tasks with improved reliability and transparency. Experimental results show that JERR consistently outperforms all baselines on the ROUGE and F1 metrics, achieving the highest scores on the LLM-Rater evaluation.
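To make the three-stage pipeline concrete, the sketch below walks through chunking, synopsis extraction, DAG construction, and an MCTS search over the graph. It is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the fixed-size chunking, the forward-only edge rule, the truncation stand-in for LLM summarization, and the toy depth-based reward are all placeholders invented here.

```python
"""Minimal sketch of a JERR-style pipeline (chunk -> synopsis -> DAG -> MCTS).

All names, the chunking strategy, the edge rule, and the reward heuristic are
illustrative assumptions; the paper does not specify these details.
"""
import math
import random
from dataclasses import dataclass, field


def chunk_text(text: str, size: int = 400) -> list[str]:
    """Split a long document into fixed-size chunks (stand-in for strategic chunking)."""
    return [text[i:i + size] for i in range(0, len(text), size)]


def extract_synopsis(chunk: str) -> str:
    """Placeholder for an LLM summarization call; here we simply truncate."""
    return chunk[:80]


def build_dag(synopses: list[str]) -> dict[int, list[int]]:
    """Link each synopsis to a few later ones, giving a forward-only (acyclic) graph."""
    return {i: [j for j in range(i + 1, min(i + 3, len(synopses)))]
            for i in range(len(synopses))}


@dataclass
class Node:
    state: int                        # index of the current synopsis in the DAG
    parent: "Node | None" = None
    children: list["Node"] = field(default_factory=list)
    visits: int = 0
    value: float = 0.0


def uct(node: Node, c: float = 1.4) -> float:
    """Upper Confidence bound for Trees: balance exploitation and exploration."""
    if node.visits == 0:
        return float("inf")
    return node.value / node.visits + c * math.sqrt(
        math.log(node.parent.visits) / node.visits)


def mcts(dag: dict[int, list[int]], root_state: int = 0, iters: int = 100) -> list[int]:
    """Search the DAG for a high-reward reasoning path (reward is a toy heuristic)."""
    root = Node(root_state)
    for _ in range(iters):
        node = root
        # Selection: descend via UCT while the node is fully expanded.
        while node.children and len(node.children) == len(dag[node.state]):
            node = max(node.children, key=uct)
        # Expansion: add one unexplored successor, if any remain.
        expanded = [c.state for c in node.children]
        frontier = [s for s in dag[node.state] if s not in expanded]
        if frontier:
            child = Node(random.choice(frontier), parent=node)
            node.children.append(child)
            node = child
        # Simulation: random rollout; toy reward favors reaching deeper synopses.
        state = node.state
        while dag[state]:
            state = random.choice(dag[state])
        reward = state / max(len(dag) - 1, 1)
        # Backpropagation: update statistics along the selected path.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Read out the most-visited path from the root.
    path, node = [root.state], root
    while node.children:
        node = max(node.children, key=lambda n: n.visits)
        path.append(node.state)
    return path


if __name__ == "__main__":
    doc = "long context " * 200
    synopses = [extract_synopsis(c) for c in chunk_text(doc)]
    dag = build_dag(synopses)
    print("Reasoning path over synopsis indices:", mcts(dag))
```

In practice the rollout reward would come from an LLM judging whether a candidate reasoning path supports the answer, rather than from the depth heuristic used here for brevity.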
Similar Papers
Evaluating Long-Context Reasoning in LLM-Based WebAgents
Machine Learning (CS)
Helps AI remember long conversations to do tasks.
Large Language Models with Temporal Reasoning for Longitudinal Clinical Summarization and Prediction
Computation and Language
Helps doctors quickly understand patient history.
Can LLMs reason over extended multilingual contexts? Towards long-context evaluation beyond retrieval and haystacks
Computation and Language
Tests if computers can understand long stories.