InFerActive: Towards Scalable Human Evaluation of Large Language Models through Interactive Inference
By: Junhyeong Hwangbo, Soohyun Lee, Minsoo Cheong, and more
Potential Business Impact:
Helps people check AI writing faster.
Human evaluation remains the gold standard for assessing the outputs of Large Language Models (LLMs). In the current evaluation paradigm, evaluators review numerous individual responses, which leads to significant scalability challenges. LLM outputs can be represented more efficiently as a tree structure that reflects their autoregressive generation process and stochastic token selection. However, conventional tree visualization cannot scale to the exponentially large trees produced by modern LLM sampling methods. To address this problem, we present InFerActive, an interactive inference system for scalable human evaluation. InFerActive enables on-demand exploration through probability-based filtering and evaluation features, while bridging the semantic gap between computational tokens and human-readable text through adaptive visualization techniques. Through a technical evaluation and a user study (N=12), we demonstrate that InFerActive significantly improves evaluation efficiency and enables more comprehensive assessment of model behavior. We further conduct expert case studies that demonstrate InFerActive's practical applicability and its potential to transform LLM evaluation workflows.
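A minimal sketch of the idea the abstract describes: sampled responses are merged into a prefix tree of tokens, and probability-based filtering hides branches whose token probability falls below a threshold. This is an illustrative assumption of how such a tree might be built, not InFerActive's actual implementation; the names TokenNode, insert_sample, filter_tree, and the 0.05 threshold are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class TokenNode:
    """One token in the sampled-output tree; children are keyed by token text."""
    token: str
    prob: float = 1.0   # model probability of this token given the prefix
    count: int = 0      # how many sampled responses pass through this node
    children: dict = field(default_factory=dict)

def insert_sample(root: TokenNode, tokens: list[str], probs: list[float]) -> None:
    """Merge one sampled response (token sequence + per-token probabilities) into the tree."""
    node = root
    node.count += 1
    for tok, p in zip(tokens, probs):
        child = node.children.get(tok)
        if child is None:
            child = TokenNode(token=tok, prob=p)
            node.children[tok] = child
        child.count += 1
        node = child

def filter_tree(root: TokenNode, min_prob: float = 0.05) -> list[tuple[str, float]]:
    """Probability-based filtering: keep only branches whose token probability clears
    the threshold, returning the surviving (text, cumulative probability) pairs."""
    results = []

    def walk(node: TokenNode, prefix: str, cum: float) -> None:
        if not node.children:
            results.append((prefix, cum))
            return
        for child in node.children.values():
            if child.prob >= min_prob:
                walk(child, prefix + child.token, cum * child.prob)

    walk(root, "", 1.0)
    return results

if __name__ == "__main__":
    root = TokenNode(token="<root>")
    insert_sample(root, ["The", " answer", " is", " 42"], [0.9, 0.6, 0.8, 0.4])
    insert_sample(root, ["The", " answer", " is", " unknown"], [0.9, 0.6, 0.8, 0.03])
    for text, p in filter_tree(root, min_prob=0.05):
        print(f"{p:.3f}  {text}")
```

Running this prints only the completion ending in " 42", since the " unknown" branch falls below the threshold; the abstract's on-demand exploration and token-to-text visualization would layer on top of a filtered tree of this kind.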
Similar Papers
Scaling Up Active Testing to Large Language Models
Machine Learning (CS)
Tests big computer brains better with less work.
Interactive Evaluation of Large Language Models for Multi-Requirement Software Engineering Tasks
Artificial Intelligence
Tests AI code writing with helpful feedback.
Advancing Research via Human-AI Interactive Theorem Proving
Human-Computer Interaction
Helps scientists discover math proofs faster.