Automated Harmfulness Testing for Code Large Language Models
By: Honghao Tan, Haibo Wang, Diany Pressato, and more
Potential Business Impact:
Finds and blocks harmful content in AI-generated code.
Generative AI systems powered by Large Language Models (LLMs) typically rely on content moderation to prevent the spread of harmful content. To evaluate the robustness of content moderation, several metamorphic testing techniques have been proposed, but these mainly target general-purpose use cases such as text and image generation. Meanwhile, a recent study shows that developers consider the use of harmful keywords in the names of software artifacts to be unethical. Exposure to harmful content in software artifacts can negatively affect developers' mental health, making content moderation essential for Code Large Language Models (Code LLMs). We conduct a preliminary study of program transformations that can be misused to introduce harmful content into auto-generated code, identifying 32 such transformations. To address this, we propose CHT, a coverage-guided harmfulness testing framework that generates prompts by applying diverse transformations to inject harmful keywords into benign programs. CHT evaluates output damage to assess potential risks in LLM-generated explanations and code. Our evaluation of four Code LLMs and GPT-4o-mini reveals that content moderation in LLM-based code generation is easily bypassed. To strengthen moderation, we propose a two-phase approach that detects harmful content before generating output, improving moderation effectiveness by 483.76%.
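The abstract compresses two mechanisms: the keyword-injection transformations CHT uses to build test prompts, and the proposed two-phase moderation that checks for harmful content before generating. The sketch below illustrates both under stated assumptions: the identifier-renaming transformation stands in for the paper's 32 transformations, a plain keyword match stands in for a real moderation classifier, and every name (HARMFUL_KEYWORDS, build_prompt, two_phase_generate, the llm callable) is hypothetical rather than taken from the authors' implementation.

```python
import re

# Hypothetical sketch of the CHT idea, not the authors' code:
# inject a harmful keyword into a benign program via an
# identifier-renaming transformation, then gate generation behind
# a detection pass (the proposed two-phase moderation).

HARMFUL_KEYWORDS = ["<harmful-term>"]  # placeholder; real lists are curated


def rename_identifier(benign_code: str, old: str, new: str) -> str:
    """One transformation style: rename a benign identifier to a
    harmful keyword while preserving program semantics."""
    return re.sub(rf"\b{re.escape(old)}\b", new, benign_code)


def build_prompt(transformed_code: str) -> str:
    # The prompt asks the model to explain the transformed code,
    # testing whether moderation catches the embedded keyword.
    return f"Explain what the following function does:\n\n{transformed_code}"


def detect_harmful(text: str) -> bool:
    """Phase 1: a naive keyword check standing in for a real
    moderation classifier."""
    return any(k in text for k in HARMFUL_KEYWORDS)


def two_phase_generate(prompt: str, llm) -> str:
    """Phase 2 runs only if phase 1 finds the input clean."""
    if detect_harmful(prompt):
        return "Request refused: harmful content detected."
    output = llm(prompt)        # llm is any callable LLM client
    if detect_harmful(output):  # also screen the model's output
        return "Response withheld: harmful content detected."
    return output


# Usage: transform a benign program into a harmfulness test prompt.
benign = "def add(a, b):\n    return a + b"
prompt = build_prompt(rename_identifier(benign, "add", HARMFUL_KEYWORDS[0]))
print(two_phase_generate(prompt, llm=lambda p: "(model response)"))
```

The ordering mirrors the paper's two-phase design: detection runs before generation, so a flagged prompt is refused outright rather than filtered after the fact; the sketch additionally screens the generated output as a second line of defense.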
Similar Papers
Code Red! On the Harmfulness of Applying Off-the-shelf Large Language Models to Programming Tasks
Software Engineering
Tests if AI coding helpers are safe.
Guardians and Offenders: A Survey on Harmful Content Generation and Safety Mitigation of LLM
Computation and Language
Makes AI safer and less likely to say bad things.