Score: 1

STELLAR: A Search-Based Testing Framework for Large Language Model Applications

Published: January 1, 2026 | arXiv ID: 2601.00497v1

By: Lev Sorokin, Ivan Vasilev, Ken E. Friedl, and more

BigTech Affiliations: BMW

Potential Business Impact:

Automatically finds prompts that cause AI systems to give wrong or unsafe answers, so failures can be fixed before users encounter them.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Model (LLM)-based applications are increasingly deployed across various domains, including customer service, education, and mobility. However, these systems are prone to inaccurate, fictitious, or harmful responses, and their vast, high-dimensional input space makes systematic testing particularly challenging. To address this, we present STELLAR, an automated search-based testing framework for LLM-based applications that systematically uncovers text inputs leading to inappropriate system responses. Our framework models test generation as an optimization problem and discretizes the input space into stylistic, content-related, and perturbation features. Unlike prior work that focuses on prompt optimization or coverage heuristics, our work employs evolutionary optimization to dynamically explore feature combinations that are more likely to expose failures. We evaluate STELLAR on three LLM-based conversational question-answering systems. The first focuses on safety, benchmarking both public and proprietary LLMs against malicious or unsafe prompts. The second and third target navigation, using an open-source and an industrial retrieval-augmented system for in-vehicle venue recommendations. Overall, STELLAR exposes up to 4.3 times (average 2.5 times) more failures than the existing baseline approaches.
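To make the core idea concrete, below is a minimal sketch of search-based test generation over a discretized input space, in the spirit described in the abstract. The feature dimensions and values (style, content, perturbation), the truncation-selection evolutionary loop, and the toy fitness function are all illustrative assumptions; they are not STELLAR's actual feature catalogue, search operators, or failure oracle.

```python
import random

# Hypothetical discretized feature space. The abstract names stylistic,
# content-related, and perturbation features; the concrete values below
# are placeholders for illustration only.
FEATURES = {
    "style": ["formal", "slang", "terse", "verbose"],
    "content": ["unsafe_request", "venue_query", "ambiguous_query"],
    "perturbation": ["none", "typos", "paraphrase"],
}

def random_combination():
    """Sample one value per feature dimension."""
    return {dim: random.choice(vals) for dim, vals in FEATURES.items()}

def mutate(comb, rate=0.3):
    """Re-sample each feature value with probability `rate`."""
    return {dim: random.choice(FEATURES[dim]) if random.random() < rate else val
            for dim, val in comb.items()}

def crossover(a, b):
    """Uniform crossover: take each feature from one parent at random."""
    return {dim: random.choice([a[dim], b[dim]]) for dim in FEATURES}

def evolve(fitness, pop_size=20, generations=10):
    """Maximize `fitness(combination)`, e.g. how many prompts generated
    from that feature combination an oracle flags as inappropriate."""
    population = [random_combination() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]                      # truncation selection
        offspring = [mutate(crossover(*random.sample(parents, 2)))
                     for _ in range(pop_size - len(parents))]
        population = parents + offspring
    return max(population, key=fitness)

# Toy stand-in fitness: pretend perturbed unsafe requests trigger failures
# most often. A real setup would query the system under test and an oracle.
def toy_fitness(comb):
    return (comb["content"] == "unsafe_request") + (comb["perturbation"] != "none")

if __name__ == "__main__":
    print(evolve(toy_fitness))
```

In a real setting the fitness function would render concrete prompts from the chosen feature combination, send them to the LLM-based application, and count the responses an oracle judges inaccurate, fictitious, or harmful; the search then concentrates on feature combinations that keep exposing such failures.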

Country of Origin
🇩🇪 Germany

Page Count
12 pages

Category
Computer Science:
Software Engineering