Score: 1

Offscript: Automated Auditing of Instruction Adherence in LLMs

Published: December 11, 2025 | arXiv ID: 2512.10172v1

By: Nicholas Clark, Ryan Bai, Tanu Mitra

BigTech Affiliations: University of Washington

Potential Business Impact:

Automatically checks whether an AI model follows your custom instructions.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) and generative search systems are increasingly used for information seeking by diverse populations with varying preferences for knowledge sourcing and presentation. While users can customize LLM behavior through custom instructions and behavioral prompts, no mechanism exists to evaluate whether these instructions are being followed effectively. We present Offscript, an automated auditing tool that efficiently identifies potential instruction-following failures in LLMs. In a pilot study analyzing custom instructions sourced from Reddit, Offscript detected potential deviations from instructed behavior in 86.4% of conversations, 22.2% of which were confirmed as material violations through human review. Our findings suggest that automated auditing is a viable approach for evaluating compliance with behavioral instructions related to information seeking.
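The abstract describes a two-stage workflow (automated detection of potential deviations, then human confirmation) but the paper's implementation is not reproduced here. As a rough illustration only, the minimal Python sketch below shows how an LLM-as-judge audit loop of this kind might be structured. The JUDGE_PROMPT wording, the call_llm stub, and the Finding record are illustrative assumptions, not Offscript's actual design.

from dataclasses import dataclass


@dataclass
class Finding:
    """One flagged response, queued for human review."""
    turn_index: int
    instruction: str
    verdict: str      # "ADHERED" or "DEVIATED"
    rationale: str


# Hypothetical judge prompt; the real tool's prompt is not published here.
JUDGE_PROMPT = """You are auditing an AI assistant for instruction adherence.

Custom instruction the assistant was given:
{instruction}

Assistant response to audit:
{response}

Did the response deviate from the instruction? Answer on two lines:
VERDICT: ADHERED or DEVIATED
RATIONALE: one sentence explaining why
"""


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion call to a judge model.
    Replace with your own provider's client."""
    raise NotImplementedError("wire up an LLM client here")


def audit_conversation(instruction: str, responses: list[str]) -> list[Finding]:
    """Flag responses that potentially deviate from a custom instruction.

    Flagged turns are only candidates: per the paper's numbers, most
    automated flags (77.8%) did not survive human review.
    """
    findings: list[Finding] = []
    for i, response in enumerate(responses):
        raw = call_llm(JUDGE_PROMPT.format(instruction=instruction,
                                           response=response))
        # Conservative default: an unparseable judge reply counts as a
        # deviation so it gets escalated to a human rather than passing.
        verdict, rationale = "DEVIATED", "judge output could not be parsed"
        for line in raw.splitlines():
            if line.startswith("VERDICT:"):
                verdict = line.removeprefix("VERDICT:").strip()
            elif line.startswith("RATIONALE:"):
                rationale = line.removeprefix("RATIONALE:").strip()
        if verdict == "DEVIATED":
            findings.append(Finding(i, instruction, verdict, rationale))
    return findings

Defaulting ambiguous judge output to DEVIATED is a deliberate choice in this sketch: uncertain cases are escalated to human review rather than silently passing, matching the detect-then-confirm workflow the abstract describes.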

Country of Origin
🇺🇸 United States

Page Count
5 pages

Category
Computer Science:
Human-Computer Interaction