When Models Decide and When They Bind: A Two-Stage Computation for Multiple-Choice Question-Answering
By: Hugh Mee Wong, Rick Nouwen, Albert Gatt
Potential Business Impact:
Helps computers pick the right answer choice.
Multiple-choice question answering (MCQA) is easy to evaluate but adds a meta-task: models must both solve the problem and output the symbol that *represents* the answer, which conflates reasoning errors with symbol-binding failures. We study how language models implement MCQA internally, using representational analyses (PCA, linear probes) as well as causal interventions. We find that option-boundary (newline) residual states often carry strong, linearly decodable signals about per-option correctness. Winner-identity probing reveals a two-stage progression: the winning *content position* becomes decodable immediately after the final option is processed, while the *output symbol* is represented closer to the answer-emission position. Tests under symbol and content permutations support a two-stage mechanism in which models first select a winner in content space and then bind, or route, that winner to the appropriate symbol to emit.
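To make the probing setup concrete, here is a minimal sketch, not the authors' code: it trains a logistic-regression probe to read a binary per-option correctness label from vectors that stand in for residual-stream states at the newline token closing each option. The dimensions, item counts, and the injected "correctness direction" are all hypothetical; in a real experiment the features would be hidden states collected from the model (e.g., via forward hooks) rather than synthetic data.

```python
# Sketch of a linear probe for per-option correctness at option-boundary
# (newline) residual states. Synthetic vectors stand in for real hidden states.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

d_model = 256        # hypothetical residual-stream width
n_questions = 500    # hypothetical number of MCQA items
n_options = 4        # options A-D

# Stand-in for residual states at the newline after each option.
# In a real run, X[i, j] would be the hidden state at the boundary token
# of option j for question i.
X = rng.normal(size=(n_questions, n_options, d_model))
correct = rng.integers(0, n_options, size=n_questions)

# Inject a weak linear "correctness direction" so the synthetic probe has signal.
direction = rng.normal(size=d_model)
for i, c in enumerate(correct):
    X[i, c] += 0.5 * direction

# One row per (question, option) pair, labeled 1 if that option is correct.
features = X.reshape(-1, d_model)
labels = (np.arange(n_options)[None, :] == correct[:, None]).reshape(-1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0, stratify=labels
)

probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print(f"probe accuracy: {accuracy_score(y_test, probe.predict(X_test)):.3f}")
```

A linear probe is used here because the claim being illustrated is specifically about *linearly* decodable structure; above-chance accuracy of such a probe at the option-boundary positions is what would indicate that correctness information is already represented there.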
Similar Papers
Reasoning Models are Test Exploiters: Rethinking Multiple-Choice
Computation and Language
Tests make smart computers seem smarter than they are.
Beyond Multiple Choice: A Hybrid Framework for Unifying Robust Evaluation and Verifiable Reasoning Training
Computation and Language
Makes AI understand questions better, not just guess.
More Bias, Less Bias: BiasPrompting for Enhanced Multiple-Choice Question Answering
Computation and Language
Helps AI better answer tricky questions by thinking.