Attentive Reasoning Queries: A Systematic Method for Optimizing Instruction-Following in Large Language Models

Published: March 5, 2025 | arXiv ID: 2503.03669v1

By: Bar Karov, Dor Zohar, Yam Marcovitz

Potential Business Impact:

Helps AI chat agents reliably follow complex, business-specific instructions across multi-turn conversations, reducing errors in customer-facing applications.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We present Attentive Reasoning Queries (ARQs), a novel structured reasoning approach that significantly improves instruction-following in Large Language Models through domain-specialized reasoning blueprints. While LLMs demonstrate remarkable capabilities across diverse tasks, they often fail to maintain adherence to complex, use-case-specific instructions during multi-turn conversations, presenting challenges for business-critical applications. ARQs address this limitation by guiding LLMs through systematic reasoning steps with targeted queries that reinstate critical instructions and facilitate intermediate reasoning throughout the completion process. In extensive testing within Parlant, our framework for reliable customer-facing agents (where ARQs were born out of necessity), they achieved a 90.2% success rate across 87 test scenarios, outperforming both Chain-of-Thought reasoning (86.1%) and direct response generation (81.5%). ARQs showed particular strength in addressing persistent failure modes like guideline re-application and hallucination prevention. Our analysis also revealed that ARQs can potentially be more computationally efficient than free-form reasoning when carefully designed. These findings demonstrate that structured reasoning approaches provide effective mechanisms for controlling how LLMs process information and make decisions in complex scenarios.
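The abstract describes ARQs as targeted queries that reinstate critical instructions and elicit intermediate reasoning before the model produces its final answer. The sketch below shows one plausible realization under the assumption that ARQs are posed as a fixed set of query keys the model answers in a JSON object before drafting its reply; the specific query keys and the `llm_complete` callable are illustrative placeholders, not the paper's actual schema or API.

```python
import json

# Illustrative ARQ blueprint: each key is a targeted query the model must
# answer before replying. These keys are assumptions for this sketch, not
# the schema used in the paper.
ARQ_QUERIES = {
    "active_guidelines": "Which of the agent's guidelines apply to the current turn?",
    "reapplied_guidelines": "Do any previously applied guidelines need to be re-applied now?",
    "known_facts": "Which facts from the conversation can the response safely rely on (to avoid hallucination)?",
    "response_plan": "Given the answers above, what should the response do?",
}

def build_arq_prompt(instructions: str, conversation: str) -> str:
    """Assemble a completion prompt that forces the model to answer each
    targeted query as a JSON field before producing the final response."""
    queries = "\n".join(f'- "{key}": {question}' for key, question in ARQ_QUERIES.items())
    return (
        f"{instructions}\n\n"
        f"Conversation so far:\n{conversation}\n\n"
        "Before responding, fill in a JSON object with the following keys, "
        "answering each query in order, then add a final \"response\" key "
        "containing your reply to the user:\n"
        f"{queries}\n"
        "Return only the JSON object."
    )

def run_arq_turn(llm_complete, instructions: str, conversation: str):
    """Run one conversational turn. `llm_complete` is any text-completion
    callable (a hypothetical stand-in for whichever LLM client is in use)."""
    raw = llm_complete(build_arq_prompt(instructions, conversation))
    result = json.loads(raw)  # structured intermediate reasoning + final answer
    return result["response"], result  # the reply, plus its reasoning trace
```

Compared with free-form Chain-of-Thought, this structure makes the intermediate reasoning machine-parseable, so failure modes such as a skipped guideline re-application can be detected directly from the returned trace.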

Page Count
27 pages

Category
Computer Science:
Computation and Language