Activations as Features: Probing LLMs for Generalizable Essay Scoring Representations
By: Jinwei Chi, Ke Wang, Yu Chen, and more
Potential Business Impact:
Helps computers grade essays fairly, even with different questions.
Automated essay scoring (AES) is a challenging task in cross-prompt settings due to the diversity of scoring criteria. While previous studies have focused on the output of large language models (LLMs) to improve scoring accuracy, we believe activations from intermediate layers may also provide valuable information. To explore this possibility, we evaluated the discriminative power of LLM activations in the cross-prompt essay scoring task. Specifically, we used activations to fit probes and further analyzed how different models and different LLM input contents affect this discriminative power. By computing the directions of essays across various trait dimensions under different prompts, we analyzed how the evaluation perspectives of large language models vary with essay types and traits. Results show that the activations possess strong discriminative power in evaluating essay quality and that LLMs can adapt their evaluation perspectives to different traits and essay types, effectively handling the diversity of scoring criteria in cross-prompt settings.
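The core recipe described in the abstract is to extract hidden-state activations from an intermediate layer of an LLM and fit a lightweight probe on them to predict essay scores. The sketch below illustrates that idea under assumed choices: the GPT-2 backbone, the probed layer index, mean pooling, the ridge-regression probe, and the toy essays are all illustrative stand-ins, not the paper's exact setup or data.

```python
# Minimal sketch of probing intermediate-layer LLM activations for essay scoring.
# Assumptions (not from the paper): "gpt2" backbone, layer 6, mean pooling,
# a ridge-regression probe, and toy essay/score data.

import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error

MODEL_NAME = "gpt2"   # assumed stand-in for the scored LLM
PROBE_LAYER = 6       # assumed intermediate layer to probe

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def essay_activation(text: str) -> np.ndarray:
    """Mean-pool the hidden states of one intermediate layer for a single essay."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        outputs = model(**inputs)
    hidden = outputs.hidden_states[PROBE_LAYER]   # shape (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0).numpy()  # shape (dim,)

# Toy data: essays from a source prompt (train) and a held-out prompt (test).
train_essays = ["Essay written for prompt A ...", "Another essay for prompt A ..."]
train_scores = [3.0, 4.5]
test_essays = ["Essay written for an unseen prompt B ..."]
test_scores = [4.0]

X_train = np.stack([essay_activation(e) for e in train_essays])
X_test = np.stack([essay_activation(e) for e in test_essays])

# Fit a linear probe on the activations; its weight vector acts as a scoring direction.
probe = Ridge(alpha=1.0)
probe.fit(X_train, train_scores)

preds = probe.predict(X_test)
print("Predicted scores on the unseen prompt:", preds)
print("MAE on the unseen prompt:", mean_absolute_error(test_scores, preds))
```

Under this reading, fitting one such probe per trait and per prompt yields a set of weight vectors whose pairwise relationships can be compared, which is one way to examine how an LLM's evaluation perspective shifts across traits and essay types.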
Similar Papers
Activation Oracles: Training and Evaluating LLMs as General-Purpose Activation Explainers
Computation and Language
Lets computers explain their own thinking.
Does the Prompt-based Large Language Model Recognize Students' Demographics and Introduce Bias in Essay Scoring?
Computation and Language
AI writing grader unfairly scores non-native speakers.
Beyond Tokens in Language Models: Interpreting Activations through Text Genre Chunks
Computation and Language
Predicts text style from AI's thoughts.