Towards Provably Secure Generative AI: Reliable Consensus Sampling
By: Yu Cui, Hang Fu, Sicheng Pan, and more
Potential Business Impact:
Makes AI safer and more useful.
Existing research on generative AI security is driven largely by mutually reinforcing attack and defense methodologies grounded in empirical experience. This dynamic frequently gives rise to previously unknown attacks that circumvent current detection and prevention, forcing continual updates to security mechanisms. Constructing generative AI with provable security and theoretically controllable risk is therefore necessary. Consensus Sampling (CS) is a promising algorithm toward provably secure AI: it controls risk by leveraging overlap in model output probabilities. However, we find that CS relies on frequent abstention to avoid unsafe outputs, which reduces utility, and that it becomes highly vulnerable when unsafe models are maliciously manipulated. To address these issues, we propose a new primitive, Reliable Consensus Sampling (RCS), which traces acceptance probabilities to tolerate extreme adversarial behavior, improving robustness, and which eliminates the need for abstention entirely. We further develop a feedback algorithm to continuously and dynamically enhance the safety of RCS. We provide theoretical guarantees that RCS maintains a controllable risk threshold. Extensive experiments show that RCS significantly improves robustness and utility while maintaining latency comparable to CS. We hope this work contributes to the development of provably secure generative AI.
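To make the abstract's two key ideas concrete, the overlap-based acceptance and the abstention that hurts utility, here is a minimal sketch of a CS-style rejection sampler. This is an illustrative simplification, not the paper's actual algorithm: the acceptance rule (min over models of p_i(x)/q(x), capped at 1), the `max_tries` abstention budget, and all function and variable names are assumptions for illustration.

```python
import random

def consensus_sample(models, proposal, support, max_tries=10, rng=random):
    """Simplified CS-style sampler (illustrative assumption, not the paper's algorithm).

    `models`: list of probability tables (dict token -> prob).
    `proposal`: one such table used to draw candidates.
    A candidate x is accepted with probability min_i p_i(x) / q(x), capped
    at 1, so only outputs in the overlap of all models' distributions tend
    to survive. After `max_tries` rejections the sampler abstains (returns
    None) -- the behavior the abstract identifies as reducing utility.
    """
    tokens = list(support)
    weights = [proposal[t] for t in tokens]
    for _ in range(max_tries):
        # Draw a candidate from the proposal distribution.
        x = rng.choices(tokens, weights=weights, k=1)[0]
        q = proposal[x]
        # Accept only in proportion to the smallest model probability.
        accept_prob = min(1.0, min(m.get(x, 0.0) for m in models) / q)
        if rng.random() < accept_prob:
            return x
    return None  # abstain
```

When all models agree exactly, the acceptance probability is 1 and no abstention occurs; when their supports are disjoint, every candidate is rejected and the sampler must abstain, which is the failure mode RCS is designed to remove.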
Similar Papers
Consensus Sampling for Safer Generative AI
Artificial Intelligence
Combines multiple AIs to be safer than any single one.
Efficient Prediction of Pass@k Scaling in Large Language Models
Artificial Intelligence
Predicts AI's rare risks and skills better, cheaper.
Computational Safety for Generative AI: A Signal Processing Perspective
Artificial Intelligence
Makes AI safer by spotting bad inputs and fake outputs.