InFerActive: Towards Scalable Human Evaluation of Large Language Models through Interactive Inference

Published: December 11, 2025 | arXiv ID: 2512.10234v1

By: Junhyeong Hwangbo, Soohyun Lee, Minsoo Cheong, and more

Potential Business Impact:

Helps people evaluate AI-generated text faster.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Human evaluation remains the gold standard for assessing the outputs of Large Language Models (LLMs). The current evaluation paradigm requires reviewing numerous individual responses, leading to significant scalability challenges. LLM outputs can be represented more efficiently as a tree structure, reflecting their autoregressive generation process and stochastic token selection. However, conventional tree visualization cannot scale to the exponentially large trees produced by modern LLM sampling methods. To address this problem, we present InFerActive, an interactive inference system for scalable human evaluation. InFerActive enables on-demand exploration through probability-based filtering and evaluation features, while bridging the semantic gap between computational tokens and human-readable text through adaptive visualization techniques. Through a technical evaluation and a user study (N=12), we demonstrate that InFerActive significantly improves evaluation efficiency and enables more comprehensive assessment of model behavior. We further conduct expert case studies that demonstrate InFerActive's practical applicability and its potential to transform LLM evaluation workflows.
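To make the core idea concrete, the sketch below (not the authors' implementation) shows how sampled LLM outputs can be organized as a token tree and pruned with probability-based filtering, so an evaluator only explores likely continuations. The `toy_next_token_probs` function is a hypothetical stand-in for querying a real model's next-token distribution.

```python
# A minimal sketch of a token tree with probability-based filtering,
# the data structure and filtering idea described in the abstract.
# Assumptions: toy_next_token_probs is hypothetical; a real system
# would query an LLM for next-token probabilities instead.

from dataclasses import dataclass, field


@dataclass
class TokenNode:
    token: str            # human-readable token text
    prob: float           # cumulative probability of the path to this node
    children: list["TokenNode"] = field(default_factory=list)


def toy_next_token_probs(prefix: tuple[str, ...]) -> dict[str, float]:
    """Hypothetical next-token distribution keyed on the token prefix."""
    table = {
        (): {"The": 0.6, "A": 0.4},
        ("The",): {"cat": 0.5, "dog": 0.3, "idea": 0.2},
        ("A",): {"cat": 0.7, "dog": 0.3},
    }
    return table.get(prefix, {"<eos>": 1.0})


def expand(node: TokenNode, prefix: tuple[str, ...],
           top_k: int = 2, min_prob: float = 0.1, depth: int = 3) -> None:
    """Grow the tree top-k-wise, pruning paths whose cumulative
    probability falls below min_prob (probability-based filtering)."""
    if depth == 0:
        return
    dist = toy_next_token_probs(prefix)
    for token, p in sorted(dist.items(), key=lambda kv: -kv[1])[:top_k]:
        path_prob = node.prob * p
        if path_prob < min_prob:
            continue  # skip unlikely branches instead of rendering them
        child = TokenNode(token, path_prob)
        node.children.append(child)
        expand(child, prefix + (token,), top_k, min_prob, depth - 1)


def show(node: TokenNode, indent: int = 0) -> None:
    """Print the filtered tree as an indented outline."""
    print("  " * indent + f"{node.token} (p={node.prob:.2f})")
    for c in node.children:
        show(c, indent + 1)


root = TokenNode("<root>", 1.0)
expand(root, ())
show(root)
```

Raising `min_prob` collapses the view to only the most probable continuations, while lowering it exposes rarer branches on demand, which is the kind of interactive exploration the paper's filtering feature enables.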

Country of Origin
🇰🇷 Korea, Republic of

Page Count
17 pages

Category
Computer Science:
Human-Computer Interaction