Are Your LLM-based Text-to-SQL Models Secure? Exploring SQL Injection via Backdoor Attacks
By: Meiyu Lin , Haichuan Zhang , Jiale Lao and more
Potential Business Impact:
Makes computer programs that understand questions unsafe.
Large language models (LLMs) have shown state-of-the-art results in translating natural language questions into SQL queries (Text-to-SQL), a long-standing challenge within the database community. However, security concerns remain largely unexplored, particularly the threat of backdoor attacks, which can introduce malicious behaviors into models through fine-tuning with poisoned datasets. In this work, we systematically investigate the vulnerabilities of LLM-based Text-to-SQL models and present ToxicSQL, a novel backdoor attack framework. Our approach leverages stealthy {semantic and character-level triggers} to make backdoors difficult to detect and remove, ensuring that malicious behaviors remain covert while maintaining high model accuracy on benign inputs. Furthermore, we propose leveraging SQL injection payloads as backdoor targets, enabling the generation of malicious yet executable SQL queries, which pose severe security and privacy risks in language model-based SQL development. We demonstrate that injecting only 0.44% of poisoned data can result in an attack success rate of 79.41%, posing a significant risk to database security. Additionally, we propose detection and mitigation strategies to enhance model reliability. Our findings highlight the urgent need for security-aware Text-to-SQL development, emphasizing the importance of robust defenses against backdoor threats.
Similar Papers
Exploring Backdoor Attack and Defense for LLM-empowered Recommendations
Cryptography and Security
Stops bad guys from tricking movie suggestions.
Propaganda via AI? A Study on Semantic Backdoors in Large Language Models
Computation and Language
Finds hidden meanings that trick AI.
Backdoor-Powered Prompt Injection Attacks Nullify Defense Methods
Cryptography and Security
Tricks AI into following bad commands.