STELLAR: A Search-Based Testing Framework for Large Language Model Applications
By: Lev Sorokin, Ivan Vasilev, Ken E. Friedl, and more
Potential Business Impact:
Finds mistakes in AI answers before they happen.
Large Language Model (LLM)-based applications are increasingly deployed across various domains, including customer service, education, and mobility. However, these systems are prone to inaccurate, fictitious, or harmful responses, and their vast, high-dimensional input space makes systematic testing particularly challenging. To address this, we present STELLAR, an automated search-based testing framework for LLM-based applications that systematically uncovers text inputs leading to inappropriate system responses. Our framework models test generation as an optimization problem and discretizes the input space into stylistic, content-related, and perturbation features. Unlike prior work that focuses on prompt optimization or coverage heuristics, our work employs evolutionary optimization to dynamically explore feature combinations that are more likely to expose failures. We evaluate STELLAR on three LLM-based conversational question-answering systems. The first focuses on safety, benchmarking both public and proprietary LLMs against malicious or unsafe prompts. The second and third target navigation, using an open-source and an industrial retrieval-augmented system for in-vehicle venue recommendations. Overall, STELLAR exposes up to 4.3 times (average 2.5 times) more failures than the existing baseline approaches.
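The core idea of modeling test generation as evolutionary search over a discretized feature space can be illustrated with a minimal sketch. This is not the paper's implementation: the feature names (`style`, `content`, `perturbation`), the `failure_score` stand-in fitness, and the `evolve` loop are all hypothetical placeholders for whatever oracle and feature catalog a real deployment would use.

```python
import random

# Hypothetical discretization of the input space into stylistic,
# content-related, and perturbation features (values are illustrative).
FEATURES = {
    "style":        ["formal", "slang", "persuasive"],
    "content":      ["venue_query", "unsafe_request", "ambiguous"],
    "perturbation": ["typo", "paraphrase", "none"],
}

def failure_score(candidate):
    # Stand-in fitness function. A real framework would render the feature
    # combination into a prompt, query the LLM application, and score the
    # response with a safety or correctness oracle.
    risky = {"slang", "unsafe_request", "typo"}
    return sum(value in risky for value in candidate.values())

def mutate(candidate, rng):
    # Re-sample one randomly chosen feature dimension.
    child = dict(candidate)
    key = rng.choice(list(FEATURES))
    child[key] = rng.choice(FEATURES[key])
    return child

def evolve(generations=20, pop_size=8, seed=0):
    # Simple elitist evolutionary loop: keep the top half, refill the
    # population with mutants of survivors, and return the best candidate.
    rng = random.Random(seed)
    pop = [{k: rng.choice(v) for k, v in FEATURES.items()}
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=failure_score, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(rng.choice(survivors), rng)
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=failure_score)

best = evolve()
print(best, failure_score(best))
```

Because selection is elitist, the best score found never decreases across generations, and the search concentrates sampling on feature combinations that have already exposed failures rather than enumerating the full combinatorial space.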
Similar Papers
STELLA: Guiding Large Language Models for Time Series Forecasting with Semantic Abstractions
Artificial Intelligence
Helps computers predict future events more accurately.
STELLAR: Scene Text Editor for Low-Resource Languages and Real-World Data
CV and Pattern Recognition
Changes text in pictures for any language.