Activations as Features: Probing LLMs for Generalizable Essay Scoring Representations

Published: December 22, 2025 | arXiv ID: 2512.19456v1

By: Jinwei Chi, Ke Wang, Yu Chen, and more

Potential Business Impact:

Helps computers grade essays consistently, even when the essay questions differ.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Automated essay scoring (AES) is challenging in cross-prompt settings because scoring criteria vary widely across prompts. While previous studies have focused on the outputs of large language models (LLMs) to improve scoring accuracy, we believe activations from intermediate layers may also carry valuable information. To explore this possibility, we evaluated the discriminative power of LLM activations on the cross-prompt essay scoring task. Specifically, we used the activations to fit probes and further analyzed how different models and different LLM input contents affect this discriminative power. By computing the directions of essays across various trait dimensions under different prompts, we analyzed how the evaluation perspectives of large language models vary with essay types and traits. Results show that the activations possess strong discriminative power for evaluating essay quality and that LLMs can adapt their evaluation perspectives to different traits and essay types, effectively handling the diversity of scoring criteria in cross-prompt settings.
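To make the probing idea concrete, below is a minimal sketch (not the authors' code) of fitting a linear probe on intermediate-layer activations to predict essay scores. The model name ("gpt2"), the layer index, the mean-pooling step, and the ridge-regression probe are all illustrative assumptions; the paper's actual models, pooling, and probe design may differ.

```python
# Hypothetical sketch: probe intermediate-layer LLM activations for essay scoring.
# "gpt2", LAYER, pooling, and Ridge are stand-in choices, not the paper's setup.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

MODEL_NAME = "gpt2"   # stand-in for whichever LLM is being probed
LAYER = 6             # hypothetical intermediate layer

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def essay_activations(essays, batch_size=8):
    """Mean-pool the hidden states of one intermediate layer for each essay."""
    feats = []
    for i in range(0, len(essays), batch_size):
        batch = tokenizer(essays[i:i + batch_size], return_tensors="pt",
                          padding=True, truncation=True, max_length=512)
        with torch.no_grad():
            out = model(**batch)
        h = out.hidden_states[LAYER]                  # (batch, tokens, dim)
        mask = batch["attention_mask"].unsqueeze(-1)  # (batch, tokens, 1)
        pooled = (h * mask).sum(1) / mask.sum(1)      # masked mean pooling
        feats.append(pooled)
    return torch.cat(feats).numpy()

# Toy placeholders standing in for cross-prompt essays and human scores.
train_essays = ["Essay written for prompt A ...", "Essay written for prompt B ..."]
train_scores = [3.0, 4.5]
test_essays  = ["Held-out essay from an unseen prompt ..."]
test_scores  = [4.0]

probe = Ridge(alpha=1.0)  # simple linear probe on the activations
probe.fit(essay_activations(train_essays), train_scores)
preds = probe.predict(essay_activations(test_essays))
print("MSE on held-out prompt:", mean_squared_error(test_scores, preds))
```

Under this setup, the probe's weight vector can also be read as a scoring "direction" in activation space; comparing such directions fitted per trait or per prompt (e.g., via cosine similarity) is one plausible way to approximate the paper's analysis of how evaluation perspectives shift across traits and essay types.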

Country of Origin
🇨🇳 China

Page Count
9 pages

Category
Computer Science:
Computation and Language