Score: 2

A Mixture of Linear Corrections Generates Secure Code

Published: July 13, 2025 | arXiv ID: 2507.09508v1

By: Weichen Yu, Ravi Mangal, Terry Zhuo, and more

Potential Business Impact:

Helps AI coding assistants automatically generate more secure code.

Business Areas:
Software

Large language models (LLMs) have become proficient at sophisticated code-generation tasks, yet remain ineffective at reliably detecting or avoiding code vulnerabilities. Does this deficiency stem from insufficient learning about code vulnerabilities, or is it merely a result of ineffective prompting? Using representation engineering techniques, we investigate whether LLMs internally encode the concepts necessary to identify code vulnerabilities. We find that current LLMs encode precise internal representations that distinguish vulnerable from secure code, achieving greater accuracy than standard prompting approaches. Leveraging these vulnerability-sensitive representations, we develop an inference-time steering technique that subtly modulates the model's token-generation probabilities through a mixture of corrections (MoC). Our method effectively guides LLMs to produce less vulnerable code without compromising functionality, demonstrating a practical approach to controlled vulnerability management in generated code. Notably, MoC enhances the security ratio of Qwen2.5-Coder-7B by 8.9%, while simultaneously improving functionality on HumanEval pass@1 by 2.1%.
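To make the idea of an inference-time "mixture of corrections" concrete, below is a minimal sketch (not the authors' implementation) of activation steering with a weighted mixture of correction vectors added to a transformer layer's hidden states during generation. The correction vectors here are random stand-ins (in practice they would be derived from representations that separate secure from vulnerable code), and the target layer index, steering strength, gating scheme, and model attribute path are all illustrative assumptions.

```python
# Sketch: steer generation by adding a mixture of linear corrections
# to one decoder layer's hidden states at inference time.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2.5-Coder-7B-Instruct"  # assumed checkpoint; paper steers Qwen2.5-Coder-7B
LAYER_IDX = 20                                  # assumed: which decoder layer to correct
ALPHA = 4.0                                     # assumed: overall steering strength

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)
model.eval()

hidden_size = model.config.hidden_size

# Hypothetical correction directions (e.g. difference-of-means between secure
# and vulnerable code representations); random unit vectors as placeholders.
corrections = torch.nn.functional.normalize(torch.randn(3, hidden_size), dim=-1)

def mixture_of_corrections(hidden_states: torch.Tensor) -> torch.Tensor:
    """Add a similarity-gated mixture of correction vectors to each token state."""
    c = corrections.to(hidden_states.device, hidden_states.dtype)
    # Gate each correction by the current state's alignment with it (softmax mixture).
    weights = torch.softmax(hidden_states @ c.T, dim=-1)   # [batch, seq, n_corrections]
    return hidden_states + ALPHA * (weights @ c)            # steered hidden states

def hook(module, inputs, output):
    # Decoder layers typically return a tuple whose first element is the hidden states.
    if isinstance(output, tuple):
        return (mixture_of_corrections(output[0]),) + tuple(output[1:])
    return mixture_of_corrections(output)

# Attribute path model.model.layers is the usual layout for Qwen2-style models.
handle = model.model.layers[LAYER_IDX].register_forward_hook(hook)

prompt = "Write a Python function that reads a filename from the user and opens it."
ids = tok(prompt, return_tensors="pt")
out = model.generate(**ids, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))

handle.remove()
```

The key design point this sketch illustrates is that steering happens only at inference time: the base model's weights are untouched, and the nudge toward secure code is a small, input-dependent blend of a few fixed directions rather than a single global shift.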

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Repos / Data Links

Page Count
16 pages

Category
Computer Science:
Cryptography and Security