A note on the impossibility of conditional PAC-efficient reasoning in large language models
By: Hao Zeng
Potential Business Impact:
Shows that AI systems cannot guarantee cheap, fast answers for every single input; certifying every input forces near-constant fallback to the expensive model.
We prove an impossibility result for conditional Probably Approximately Correct (PAC)-efficient reasoning in large language models. While recent work has established marginal PAC efficiency guarantees for composite models that switch between expensive expert models and cheaper fast models, we show that conditional (pointwise) guarantees are impossible in the distribution-free setting. Specifically, for non-atomic input spaces, any algorithm achieving conditional PAC efficiency must be trivial in the sense that it defers to the expert model with probability at least $1-\alpha$ for almost every input.
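To make the contrast concrete, here is one plausible formalization (a sketch in our own notation, not necessarily the paper's exact definitions), where $f_{\mathrm{comp}}$ denotes the composite model, $f_{\mathrm{exp}}$ the expert model, $P$ the input distribution, and $\alpha \in (0,1)$ the tolerated error rate:

$$\text{(marginal)} \qquad \Pr_{X \sim P}\!\big[f_{\mathrm{comp}}(X) \ne f_{\mathrm{exp}}(X)\big] \le \alpha,$$

$$\text{(conditional)} \qquad \Pr\!\big[f_{\mathrm{comp}}(X) \ne f_{\mathrm{exp}}(X) \,\big|\, X = x\big] \le \alpha \quad \text{for } P\text{-almost every } x.$$

The marginal guarantee averages the error over $P$, so a deferral threshold calibrated on held-out data can meet it while still answering many inputs with the fast model; the conditional guarantee must hold at essentially every individual input, and in the distribution-free, non-atomic setting this is what forces the deferral probability to be at least $1-\alpha$ almost everywhere.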