Score: 3

AI Security Beyond Core Domains: Resume Screening as a Case Study of Adversarial Vulnerabilities in Specialized LLM Applications

Published: December 23, 2025 | arXiv ID: 2512.20164v1

By: Honglin Mu , Jinghao Liu , Kaiyang Wan and more

BigTech Affiliations: University of Washington

Potential Business Impact:

Makes AI safer from hidden trick instructions.

Business Areas:
Natural Language Processing Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) excel at text comprehension and generation, making them ideal for automated tasks like code review and content moderation. However, our research identifies a vulnerability: LLMs can be manipulated by "adversarial instructions" hidden in input data, such as resumes or code, causing them to deviate from their intended task. Notably, while defenses may exist for mature domains such as code review, they are often absent in other common applications such as resume screening and peer review. This paper introduces a benchmark to assess this vulnerability in resume screening, revealing attack success rates exceeding 80% for certain attack types. We evaluate two defense mechanisms: prompt-based defenses achieve 10.1% attack reduction with 12.5% false rejection increase, while our proposed FIDS (Foreign Instruction Detection through Separation) using LoRA adaptation achieves 15.4% attack reduction with 10.4% false rejection increase. The combined approach provides 26.3% attack reduction, demonstrating that training-time defenses outperform inference-time mitigations in both security and utility preservation.

Country of Origin
πŸ‡¦πŸ‡Ί πŸ‡¨πŸ‡³ πŸ‡¦πŸ‡ͺ πŸ‡ΊπŸ‡Έ United States, United Arab Emirates, China, Australia

Repos / Data Links

Page Count
46 pages

Category
Computer Science:
Computation and Language