Debiasing Large Language Models via Adaptive Causal Prompting with Sketch-of-Thought
By: Bowen Li, Ziqi Xu, Jing Ren, and more
Potential Business Impact:
Makes AI think smarter and faster with fewer words.
Despite notable advancements in prompting methods for Large Language Models (LLMs), such as Chain-of-Thought (CoT), existing strategies still suffer from excessive token usage and limited generalisability across diverse reasoning tasks. To address these limitations, we propose an Adaptive Causal Prompting with Sketch-of-Thought (ACPS) framework, which leverages structural causal models to infer the causal effect of a query on its answer and adaptively select an appropriate intervention (i.e., a standard or conditional front-door adjustment). This design enables generalisable causal reasoning across heterogeneous tasks without task-specific retraining. By replacing verbose CoT with concise Sketch-of-Thought, ACPS enables efficient reasoning that significantly reduces token usage and inference cost. Extensive experiments on multiple reasoning benchmarks and LLMs demonstrate that ACPS consistently outperforms existing prompting baselines in terms of accuracy, robustness, and computational efficiency.
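For context, the front-door adjustments named in the abstract are Pearl's classical identification formulas. A minimal sketch of how they might apply here, assuming (this mapping is not spelled out in the abstract) that the query Q affects the answer A only through an intermediate reasoning sketch S:

    P(a \mid \mathrm{do}(q)) \;=\; \sum_{s} P(s \mid q) \sum_{q'} P(a \mid s, q')\, P(q')

The conditional variant additionally adjusts for an observed covariate W (e.g., task type, again an assumption for illustration) when the plain front-door conditions hold only within strata of W:

    P(a \mid \mathrm{do}(q)) \;=\; \sum_{w} P(w) \sum_{s} P(s \mid q, w) \sum_{q'} P(a \mid s, q', w)\, P(q' \mid w)

On this reading, ACPS's adaptive selection amounts to choosing between these two estimands depending on whether such confounding covariates are present for a given task.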
Similar Papers
Reasoning Beyond Chain-of-Thought: A Latent Computational Mode in Large Language Models
Computation and Language
Makes computers think better without extra instructions.
Sketch-of-Thought: Efficient LLM Reasoning with Adaptive Cognitive-Inspired Sketching
Computation and Language
Makes smart computers think faster, using fewer words.
Layered Chain-of-Thought Prompting for Multi-Agent LLM Systems: A Comprehensive Approach to Explainable Large Language Models
Computation and Language
Makes AI explain its thinking more clearly and correctly.