A note on the impossibility of conditional PAC-efficient reasoning in large language models

Published: November 25, 2025 | arXiv ID: 2512.03057v1

By: Hao Zeng

Potential Business Impact:

Shows that AI systems which mix cheap and expensive models to cut costs cannot guarantee accuracy on every individual input; such guarantees are only achievable on average.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We prove an impossibility result for conditional Probably Approximately Correct (PAC)-efficient reasoning in large language models. While recent work has established marginal PAC efficiency guarantees for composite models that switch between expensive expert models and cheaper fast models, we show that conditional (pointwise) guarantees are impossible in the distribution-free setting. Specifically, for non-atomic input spaces, any algorithm achieving conditional PAC efficiency must be trivial in the sense that it defers to the expert model with probability at least $1-\alpha$ for almost every input.
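To make the contrast in the abstract concrete, here is a minimal sketch, in Python with made-up model stubs (the names `fast_model`, `expert_model`, and the threshold value are illustrative assumptions, not from the paper), of the two routing regimes: a marginal-style policy that defers based on a confidence score, and the "trivial" policy the impossibility result says is forced under conditional guarantees, which defers to the expert with probability at least $1-\alpha$ on every input.

```python
import random

ALPHA = 0.1  # hypothetical tolerance: cheap model may be used with probability at most ALPHA


def fast_model(x):
    # Stand-in for a cheap, possibly less accurate model.
    return f"fast-answer({x})"


def expert_model(x):
    # Stand-in for an expensive, accurate model.
    return f"expert-answer({x})"


def marginal_policy(x, confidence, threshold=0.8):
    """Marginal-style routing: use the fast model when its confidence is high.
    Any accuracy/cost guarantee here holds only on average over the input
    distribution, not for each individual input."""
    return fast_model(x) if confidence >= threshold else expert_model(x)


def trivial_conditional_policy(x, rng=random):
    """The kind of policy the impossibility result says is forced under
    conditional (pointwise) guarantees: for (almost) every input, defer
    to the expert with probability at least 1 - ALPHA, so essentially
    no compute is saved."""
    return fast_model(x) if rng.random() < ALPHA else expert_model(x)
```

The point of the contrast: the marginal policy can genuinely save expert calls on easy inputs, but the theorem says no input-dependent rule can certify accuracy pointwise in a distribution-free way; the only conditionally safe behavior is to defer almost always, as in the second function.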

Page Count
6 pages

Category
Statistics: Machine Learning (stat.ML)