CogDoc: Towards Unified Thinking in Documents
By: Qixin Xu, Haozhe Wang, Che Liu, and more
Potential Business Impact:
Helps computers understand long, detailed papers better.
Current document reasoning paradigms are constrained by a fundamental trade-off between scalability (processing long-context documents) and fidelity (capturing fine-grained, multimodal details). To bridge this gap, we propose CogDoc, a unified coarse-to-fine thinking framework that mimics human cognitive processes: a low-resolution "Fast Reading" phase for scalable information localization, followed by a high-resolution "Focused Thinking" phase for deep reasoning. We conduct a rigorous investigation into post-training strategies for this unified thinking framework, showing that direct Reinforcement Learning (RL) outperforms RL initialized from Supervised Fine-Tuning (SFT). Specifically, we find that direct RL avoids the "policy conflict" observed with SFT initialization. Empirically, our 7B model achieves state-of-the-art performance within its parameter class, notably surpassing significantly larger proprietary models (e.g., GPT-4o) on challenging, visually rich document benchmarks.
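The coarse-to-fine idea in the abstract can be illustrated with a minimal sketch: a cheap "Fast Reading" pass localizes relevant pages from low-resolution summaries, and a "Focused Thinking" pass inspects only those pages at full fidelity. The function names, the keyword-overlap scoring, and the toy data below are all illustrative assumptions, not the paper's actual method.

```python
# Minimal sketch of a two-phase, coarse-to-fine document-reasoning loop.
# All names and the scoring heuristic are hypothetical stand-ins.

def fast_read(page_summaries, query, top_k=2):
    """'Fast Reading': cheaply score low-resolution page summaries to
    localize pages relevant to the query (keyword overlap as a stand-in
    for a low-resolution model pass)."""
    q_terms = set(query.lower().split())
    scored = [
        (sum(t in summary.lower() for t in q_terms), idx)
        for idx, summary in enumerate(page_summaries)
    ]
    scored.sort(reverse=True)
    return [idx for score, idx in scored[:top_k] if score > 0]

def focused_thinking(full_pages, page_ids):
    """'Focused Thinking': examine only the selected pages at full
    fidelity (here, simply gather their full text as evidence)."""
    return {i: full_pages[i] for i in page_ids}

# Toy document: summaries stand in for low-resolution renderings,
# full_pages for high-resolution page content.
summaries = ["intro and motivation", "table of revenue figures", "related work"]
full_pages = ["Intro...", "Revenue 2023: $1.2M; 2024: $1.5M", "Prior systems..."]

hits = fast_read(summaries, "What were the revenue figures?")
evidence = focused_thinking(full_pages, hits)
```

The design point is that the expensive high-fidelity pass only touches the pages the cheap pass selected, which is how the framework trades scalability against fidelity.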
Similar Papers
Focused Chain-of-Thought: Efficient LLM Reasoning via Structured Input Information
Computation and Language
Makes AI think faster with less information.
Guiding the Inner Eye: A Framework for Hierarchical and Flexible Visual Grounded Reasoning
CV and Pattern Recognition
Helps AI "see" and "think" about pictures better.
DocThinker: Explainable Multimodal Large Language Models with Rule-based Reinforcement Learning for Document Understanding
CV and Pattern Recognition
Makes AI explain how it understands documents.