When Do Tools and Planning Help LLMs Think? A Cost- and Latency-Aware Benchmark
By: Subha Ghoshal, Ali Al-Bustami
Potential Business Impact:
Shows when tools and planning make AI answers more accurate, and how much slower and costlier they become.
Modern large language models (LLMs) increasingly rely on inference-time planning and external tools to improve reasoning. We benchmark this behavior in two real-world settings: event-centric question answering over graph-structured knowledge (Event-QA) and persuasive response generation on Reddit ChangeMyView (CMV). Using LangChain and LangGraph, we compare a one-shot baseline against a plan-execute-replan agent equipped with task-specific tools (DBpedia SPARQL/lookup/schema exploration, Wikipedia-focused retrieval, and topical web search). We evaluate GPT-4o and GPT-4o-mini under identical workflows on 60 examples from each task (3 splits of 20), reporting accuracy, mean end-to-end latency, and per-example token cost estimates. On Event-QA, the best tool-augmented configuration improves accuracy (e.g., 47.5\% $\rightarrow$ 67.5\% for GPT-4o) while increasing latency by more than an order of magnitude ($\sim$8s $\rightarrow$ $\sim$317s per example). On CMV, one-shot prompting is strongest (e.g., GPT-4o-mini achieves 75\% at $\sim$6s), and planning+search substantially increases latency without consistent gains. Moreover, complex multi-tool orchestration exposes failure modes in which the smaller model degrades. Overall, the findings highlight the need for task-specific, cost-aware choices of both model size and agent/tooling complexity.
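To make the abstract's setup concrete, below is a minimal LangGraph sketch of a plan-execute-replan loop with end-to-end latency timing. It is an illustration under stated assumptions, not the paper's released code: the `dbpedia_lookup` tool is a hypothetical stub standing in for the task-specific tools (DBpedia SPARQL/lookup/schema exploration, Wikipedia-focused retrieval, topical web search), and the prompts, state fields, and example question are invented for illustration. The one-shot baseline would be a single `llm.invoke(question)` call, which is why its latency is so much lower.

```python
# Minimal plan-execute-replan sketch with LangGraph (assumptions noted above).
import time
from typing import List, TypedDict

from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, StateGraph

llm = ChatOpenAI(model="gpt-4o-mini")  # swap in "gpt-4o" for the larger model


@tool
def dbpedia_lookup(query: str) -> str:
    """Hypothetical stub for a DBpedia lookup/SPARQL/retrieval tool."""
    return f"(stubbed DBpedia results for: {query})"


class AgentState(TypedDict):
    question: str
    plan: List[str]
    evidence: List[str]
    answer: str


def plan_node(state: AgentState) -> dict:
    # Ask the model for a short step-by-step plan before touching any tools.
    msg = llm.invoke(f"Write a numbered plan (max 3 steps) to answer: {state['question']}")
    return {"plan": [line for line in msg.content.splitlines() if line.strip()]}


def execute_node(state: AgentState) -> dict:
    # Execute the next plan step by calling the stub tool and storing its output as evidence.
    if not state["plan"]:
        return {}
    step = state["plan"][0]
    result = dbpedia_lookup.invoke(step)
    return {"plan": state["plan"][1:], "evidence": state["evidence"] + [result]}


def replan_node(state: AgentState) -> dict:
    # Once no steps remain, synthesize a final answer from the gathered evidence.
    if state["plan"]:
        return {}
    msg = llm.invoke(
        f"Question: {state['question']}\nEvidence: {state['evidence']}\nAnswer concisely."
    )
    return {"answer": msg.content}


def should_continue(state: AgentState) -> str:
    # Loop back to execution while plan steps remain, otherwise stop.
    return "execute" if state["plan"] else END


builder = StateGraph(AgentState)
builder.add_node("plan", plan_node)
builder.add_node("execute", execute_node)
builder.add_node("replan", replan_node)
builder.add_edge(START, "plan")
builder.add_edge("plan", "execute")
builder.add_edge("execute", "replan")
builder.add_conditional_edges("replan", should_continue, {"execute": "execute", END: END})
agent = builder.compile()

# Per-example end-to-end latency measurement, in the spirit of the benchmark's reporting.
start = time.time()
out = agent.invoke({"question": "Which events involved the Treaty of Versailles?",
                    "plan": [], "evidence": [], "answer": ""})
print(out["answer"], f"(latency: {time.time() - start:.1f}s)")
```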
Similar Papers
Idea2Plan: Exploring AI-Powered Research Planning
Computation and Language
Helps computers plan science experiments from ideas.
ML-Tool-Bench: Tool-Augmented Planning for ML Tasks
Machine Learning (CS)
Helps AI agents plan complex data tasks better.
LLMs Can Plan Only If We Tell Them
Computation and Language
Lets computers plan tasks better than people.