ZPD Detector: Data Selection via Capability-Difficulty Alignment for Large Language Models
By: Bo Yang, Yunkui Chen, Lanfei Feng, and more
Potential Business Impact:
Teaches computers faster with smarter data choices.
As the cost of training large language models continues to increase and high-quality training data becomes increasingly scarce, selecting high-value samples or synthesizing effective training data under limited data budgets has emerged as a critical research problem. Most existing data selection methods rely on static criteria, such as difficulty, uncertainty, or heuristics, and fail to model the evolving relationship between the model and the data. Inspired by the educational theory of the Zone of Proximal Development (ZPD), we propose ZPD Detector, a data selection framework that adopts a bidirectional perspective between models and data by explicitly modeling the alignment between sample difficulty and the model's current capability. ZPD Detector integrates difficulty calibration, model capability estimation based on Item Response Theory (IRT), and a capability-difficulty matching score to dynamically identify the most informative samples at each learning stage, improving data utilization efficiency; moreover, this dynamic matching strategy provides new insights into training strategy design. All code and data will be released once our work is accepted, to support reproducible research.
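To make the idea concrete, here is a minimal sketch of IRT-based capability-difficulty matching. It assumes a standard two-parameter logistic (2PL) IRT model and a hypothetical matching score that peaks when the model's predicted success probability on a sample is near a target (e.g. 0.5, neither trivial nor out of reach); the function names (`zpd_score`, `select_zpd`) and the exact scoring rule are illustrative assumptions, not the paper's actual formulation.

```python
import math

def irt_2pl(theta, b, a=1.0):
    """Standard 2PL IRT: probability that a model with ability theta
    answers an item of difficulty b (discrimination a) correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def zpd_score(theta, b, a=1.0, target=0.5):
    """Hypothetical capability-difficulty matching score: highest when
    the predicted success probability is near `target`, i.e. the sample
    sits in the model's zone of proximal development."""
    p = irt_2pl(theta, b, a)
    return 1.0 - abs(p - target) / max(target, 1.0 - target)

def select_zpd(samples, theta, k):
    """Rank (sample_id, difficulty_b) pairs by matching score and keep
    the top-k; re-running this as theta grows yields the dynamic,
    stage-wise selection described above."""
    ranked = sorted(samples, key=lambda s: zpd_score(theta, s[1]),
                    reverse=True)
    return [sid for sid, _ in ranked[:k]]
```

For example, with an estimated ability of `theta = 0.0`, `select_zpd` prefers a sample of difficulty 0.1 over ones at -3.0 (too easy) or 3.0 (too hard); as training raises `theta`, harder samples move into the selected set.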
Similar Papers
AgentFrontier: Expanding the Capability Frontier of LLM Agents with ZPD-Guided Data Synthesis
Computation and Language
Teaches AI to solve harder problems with help.
Investigating the Zone of Proximal Development of Language Models for In-Context Learning
Computation and Language
Helps AI learn better by knowing what it needs.
ZPD-SCA: Unveiling the Blind Spots of LLMs in Assessing Students' Cognitive Abilities
Computation and Language
Helps computers judge if books are right for kids.