Stress-testing Machine Generated Text Detection: Shifting Language Models Writing Style to Fool Detectors
By: Andrea Pedrotti, Michele Papucci, Cristiano Ciaccio, and more
Potential Business Impact:
Makes AI-written text harder to spot.
Recent advancements in Generative AI and Large Language Models (LLMs) have enabled the creation of highly realistic synthetic content, raising concerns about the potential for malicious use, such as misinformation and manipulation. Moreover, detecting Machine-Generated Text (MGT) remains challenging due to the lack of robust benchmarks that assess generalization to real-world scenarios. In this work, we present a pipeline to test the resilience of state-of-the-art MGT detectors (e.g., Mage, Radar, LLM-DetectAIve) to linguistically informed adversarial attacks. To challenge the detectors, we fine-tune language models using Direct Preference Optimization (DPO) to shift the MGT style toward human-written text (HWT). This exploits the detectors' reliance on stylistic cues, making the new generations harder to detect. Additionally, we analyze the linguistic shifts induced by the alignment and the features detectors rely on to identify MGT. Our results show that detectors can be fooled with relatively few examples, resulting in a significant drop in detection performance. This highlights the importance of improving detection methods and making them robust to unseen in-domain texts.
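The attack hinges on the DPO objective, which nudges the generator toward completions labeled as preferred. As a rough illustration of the idea, the sketch below computes the standard DPO loss from per-sequence log-probabilities, assuming (as the abstract suggests but does not spell out) that human-written text is treated as the preferred completion and the model's own machine-generated text as the dispreferred one; the function name, the beta value, and the toy inputs are illustrative, not the authors' code.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard Direct Preference Optimization loss.

    Each argument is a tensor of per-sequence log-probabilities (summed over
    tokens) under the trainable policy or the frozen reference model.
    Assumption for this setting: 'chosen' = human-written text (HWT),
    'rejected' = the model's original machine-generated text (MGT).
    """
    # Implicit rewards: scaled log-ratios of policy vs. reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between preferred (HWT-like) and dispreferred text.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage with random log-probabilities for a batch of 4 preference pairs.
if __name__ == "__main__":
    b = 4
    loss = dpo_loss(torch.randn(b), torch.randn(b),
                    torch.randn(b), torch.randn(b))
    print(loss.item())
```

In practice such pairs would be built from matched HWT/MGT samples on the same prompts, so the fine-tuned generator drifts toward human stylistic distributions precisely where detectors look for machine-like cues.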
Similar Papers
TH-Bench: Evaluating Evading Attacks via Humanizing AI Text on Machine-Generated Text Detectors
Cryptography and Security
Helps tell if writing is from a person or computer.
AI Generated Text Detection Using Instruction Fine-tuned Large Language and Transformer-Based Models
Computation and Language
Finds fake writing made by computers.
When Personalization Tricks Detectors: The Feature-Inversion Trap in Machine-Generated Text Detection
Computation and Language
Spots fake writing that sounds like a real person.