Sim4IA-Bench: A User Simulation Benchmark Suite for Next Query and Utterance Prediction
By: Andreas Konstantin Kruff, Christin Katharina Kreutz, Timo Breuer, and more
Potential Business Impact:
Tests whether computer-simulated searchers act like real people.
Validating user simulation is a difficult task due to the lack of established measures and benchmarks, which makes it challenging to assess whether a simulator accurately reflects real user behavior. As part of the Sim4IA Micro-Shared Task at the Sim4IA Workshop at SIGIR 2025, we present Sim4IA-Bench, a simulation benchmark suite for next-query and utterance prediction, the first of its kind in the IR community. The dataset included in the suite comprises 160 real-world search sessions from the CORE search engine. For 70 of these sessions, up to 62 simulator runs are available, divided into Task A and Task B, in which different approaches predicted users' next search queries or utterances. Sim4IA-Bench provides a basis for evaluating and comparing user simulation approaches and for developing new measures of simulator validity. Although modest in size, the suite represents the first publicly available benchmark that links real search sessions with simulated next-query predictions. In addition to serving as a testbed for next-query prediction, it also enables exploratory studies on query reformulation behavior, intent drift, and interaction-aware retrieval evaluation. We also introduce a new measure for evaluating next-query predictions in this task. By making the suite publicly available, we aim to promote reproducible research and stimulate further work on realistic and explainable user simulation for information access: https://github.com/irgroup/Sim4IA-Bench.
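To give a sense of how a benchmark of this shape might be consumed, the sketch below scores simulator runs against logged sessions. It is a minimal illustration only: the file layout, JSON field names, and the token-overlap similarity are assumptions made for this example, not the repository's actual format or the measure introduced in the paper.

```python
import json
from pathlib import Path

# Hypothetical file layout and field names, for illustration only; the actual
# Sim4IA-Bench repository may organize its sessions and runs differently.
SESSIONS_FILE = Path("sessions.json")   # the 160 real CORE search sessions (assumed name)
RUNS_DIR = Path("runs/task_a")          # simulator run files for Task A (assumed name)

def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard overlap between two queries (a generic baseline
    similarity, not the measure introduced in the paper)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def evaluate_run(run_path: Path, sessions: dict) -> float:
    """Average similarity between each predicted next query and the query the
    real user actually issued next."""
    predictions = json.loads(run_path.read_text())   # assumed: {session_id: predicted_query}
    scores = [
        jaccard(pred, sessions[sid]["next_query"])   # "next_query" is an assumed field name
        for sid, pred in predictions.items()
        if sid in sessions
    ]
    return sum(scores) / len(scores) if scores else 0.0

if __name__ == "__main__":
    sessions = json.loads(SESSIONS_FILE.read_text())
    for run_file in sorted(RUNS_DIR.glob("*.json")):
        print(run_file.name, round(evaluate_run(run_file, sessions), 3))
```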
Similar Papers
Second SIGIR Workshop on Simulations for Information Access (Sim4IA 2025)
Information Retrieval
Lets computers test search engines without people.
SimBench: Benchmarking the Ability of Large Language Models to Simulate Human Behaviors
Computation and Language
Tests if AI acts like real people.