Atom-Searcher: Enhancing Agentic Deep Research via Fine-Grained Atomic Thought Reward
By: Yong Deng, Guoqing Wang, Zhenzhe Ying, and more
Potential Business Impact:
Helps AI research agents answer complex, multi-step questions more reliably by rewarding fine-grained, step-by-step reasoning.
Large language models (LLMs) exhibit remarkable problem-solving abilities, but struggle with complex tasks due to static internal knowledge. Retrieval-Augmented Generation (RAG) enhances access to external information, yet remains limited in multi-hop reasoning and strategic search due to rigid workflows. Recent advancements in agentic deep research empower LLMs to autonomously reason, search, and synthesize information. However, current approaches relying on outcome-based reinforcement learning (RL) face critical issues such as conflicting gradients and reward sparsity, limiting performance gains and training efficiency. To address these issues, we first propose Atomic Thought, a novel LLM thinking paradigm that decomposes reasoning into fine-grained functional units. These units are supervised by Reasoning Reward Models (RRMs), which provide Atomic Thought Rewards (ATR) for fine-grained guidance. Building on this, we propose Atom-Searcher, a novel RL framework for agentic deep research that integrates Atomic Thought and ATR. Atom-Searcher uses a curriculum-inspired reward schedule, prioritizing process-level ATR early and transitioning to outcome rewards, accelerating convergence on effective reasoning paths. Experiments on seven benchmarks show consistent improvements over the state-of-the-art. Key advantages include: (1) Atom-Searcher scales computation at test time. (2) Atomic Thought provides supervision anchors for RRMs, bridging deep research tasks and RRMs. (3) Atom-Searcher exhibits more interpretable, human-like reasoning patterns.
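To make the curriculum-inspired reward schedule concrete, below is a minimal Python sketch of one way process-level Atomic Thought Rewards (ATR) could be blended with an outcome reward, with ATR dominating early and the outcome reward taking over later. The linear decay, function names, and mixing form are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the authors' implementation) of a curriculum-style
# reward schedule: process-level Atomic Thought Rewards (ATR) dominate early
# in training, and the outcome reward dominates later. The linear schedule
# and the additive mixing form are assumptions for illustration.

def combined_reward(atr: float, outcome_reward: float,
                    step: int, total_steps: int) -> float:
    """Blend a process-level ATR score with an outcome reward.

    Early steps weight the dense, fine-grained ATR signal; later steps
    weight task-level correctness (the outcome reward).
    """
    progress = min(step / max(total_steps, 1), 1.0)
    atr_weight = 1.0 - progress   # decays from 1 to 0 (assumed linear)
    outcome_weight = progress     # grows from 0 to 1
    return atr_weight * atr + outcome_weight * outcome_reward


if __name__ == "__main__":
    # Same rewards, different training stages.
    print(combined_reward(atr=0.8, outcome_reward=0.0, step=100, total_steps=1000))  # early: 0.72
    print(combined_reward(atr=0.8, outcome_reward=1.0, step=900, total_steps=1000))  # late:  0.98
```

Under this kind of schedule, the dense ATR signal mitigates reward sparsity at the start of training, while the later shift to outcome rewards keeps the policy anchored to final-answer correctness.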
Similar Papers
Atom-Searcher: Enhancing Agentic Deep Research via Fine-Grained Atomic Thought Reward
Computation and Language
Helps AI solve complex problems by thinking step-by-step.
From Chaos to Order: The Atomic Reasoner Framework for Fine-grained Reasoning in Large Language Models
Computation and Language
Helps computers think through problems step-by-step.