Score: 1

Debiasing Large Language Models via Adaptive Causal Prompting with Sketch-of-Thought

Published: January 13, 2026 | arXiv ID: 2601.08108v1

By: Bowen Li, Ziqi Xu, Jing Ren, and more

Potential Business Impact:

Makes AI answer more accurately while using fewer words and less compute.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Despite notable advancements in prompting methods for Large Language Models (LLMs), such as Chain-of-Thought (CoT), existing strategies still suffer from excessive token usage and limited generalisability across diverse reasoning tasks. To address these limitations, we propose an Adaptive Causal Prompting with Sketch-of-Thought (ACPS) framework, which leverages structural causal models to infer the causal effect of a query on its answer and adaptively select an appropriate intervention (i.e., standard front-door and conditional front-door adjustments). This design enables generalisable causal reasoning across heterogeneous tasks without task-specific retraining. By replacing verbose CoT with concise Sketch-of-Thought, ACPS enables efficient reasoning that significantly reduces token usage and inference cost. Extensive experiments on multiple reasoning benchmarks and LLMs demonstrate that ACPS consistently outperforms existing prompting baselines in terms of accuracy, robustness, and computational efficiency.
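For context on the "standard front-door adjustment" named in the abstract, the sketch below gives the textbook Pearl formula. Mapping the query Q to the treatment, the reasoning sketch M to the mediator, and the answer A to the outcome is an assumption drawn from the abstract's description, not a statement of the paper's exact formulation.

```latex
% Standard front-door adjustment (Pearl).
% Assumed mapping (not confirmed by the abstract's wording alone):
%   Q = query (treatment), M = reasoning sketch (mediator), A = answer (outcome).
P(A \mid \mathrm{do}(Q=q))
  = \sum_{m} P(M=m \mid Q=q)
    \sum_{q'} P(A \mid M=m,\, Q=q')\, P(Q=q')
```

The conditional variant the abstract mentions would additionally condition on task-specific covariates; the abstract does not spell out that form, so it is omitted here.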

Country of Origin
🇦🇺 Australia

Page Count
19 pages

Category
Computer Science: Computation and Language