Offscript: Automated Auditing of Instruction Adherence in LLMs
By: Nicholas Clark, Ryan Bai, Tanu Mitra
Potential Business Impact:
Checks if AI follows your instructions.
Large Language Models (LLMs) and generative search systems are increasingly used for information seeking by diverse populations with varying preferences for knowledge sourcing and presentation. While users can customize LLM behavior through custom instructions and behavioral prompts, no mechanism exists to evaluate whether these instructions are being followed effectively. We present Offscript, an automated auditing tool that efficiently identifies potential instruction following failures in LLMs. In a pilot study analyzing custom instructions sourced from Reddit, Offscript detected potential deviations from instructed behavior in 86.4% of conversations, 22.2% of which were confirmed as material violations through human review. Our findings suggest that automated auditing serves as a viable approach for evaluating compliance to behavioral instructions related to information seeking.
Similar Papers
A Survey of LLM-Based Applications in Programming Education: Balancing Automation and Human Oversight
Computers and Society
Helps students learn coding with smart computer tutors.
Agent-based Automated Claim Matching with Instruction-following LLMs
Computation and Language
Helps computers match insurance claims faster and cheaper.
Listening with Language Models: Using LLMs to Collect and Interpret Classroom Feedback
Computers and Society
AI chatbot helps teachers get better student feedback.