Evaluating LLMs for Police Decision-Making: A Framework Based on Police Action Scenarios
By: Sangyub Lee, Heedou Kim, Hyeoncheol Kim
Potential Business Impact:
Tests AI systems intended for police use to help prevent wrongful arrests.
The use of Large Language Models (LLMs) in police operations is growing, yet no evaluation framework tailored to these operations exists. While LLM responses may not always be legally incorrect, their unverified use can still lead to severe consequences such as unlawful arrests and improper evidence collection. To address this, we propose PAS (Police Action Scenarios), a systematic framework covering the entire evaluation process. Applying this framework, we constructed a novel QA dataset from over 8,000 official documents and established key metrics, validated through statistical analysis against police expert judgements. Experimental results show that commercial LLMs struggle with our new police-related tasks, particularly in providing fact-based recommendations. This study highlights the necessity of an expandable evaluation framework for ensuring reliable AI-driven police operations. We release our data and prompt template.