Score: 2

Consensus Sampling for Safer Generative AI

Published: November 12, 2025 | arXiv ID: 2511.09493v1

By: Adam Tauman Kalai, Yael Tauman Kalai, Or Zamir

BigTech Affiliations: Massachusetts Institute of Technology

Potential Business Impact:

Aggregates multiple generative AI models so that the combined system inherits the safety of the safest subset among them, rather than relying on any single model.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Many approaches to AI safety rely on inspecting model outputs or activations, yet certain risks are inherently undetectable by inspection alone. We propose a complementary, architecture-agnostic approach that enhances safety through the aggregation of multiple generative models, with the aggregated model inheriting its safety from the safest subset of a given size among them. Specifically, we present a consensus sampling algorithm that, given $k$ models and a prompt, achieves risk competitive with the average risk of the safest $s$ of the $k$ models, where $s$ is a chosen parameter, while abstaining when there is insufficient agreement between them. The approach leverages the models' ability to compute output probabilities, and we bound the probability of abstention when sufficiently many models are safe and exhibit adequate agreement. The algorithm is inspired by the provable copyright protection algorithm of Vyas et al. (2023). It requires some overlap among safe models, offers no protection when all models are unsafe, and may accumulate risk over repeated use. Nonetheless, our results provide a new, model-agnostic approach for AI safety by amplifying safety guarantees from an unknown subset of models within a collection to that of a single reliable model.
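The abstract describes rejection sampling that leverages each model's ability to score output probabilities, accepting a candidate only when enough models agree on it and abstaining otherwise. The sketch below is a hypothetical illustration of that idea, not the paper's exact procedure: a randomly chosen model proposes an output, the acceptance probability is driven by the average of the `s` smallest probabilities the `k` models assign to it (a stand-in "consensus" score), and the function abstains (returns `None`) if no candidate survives within a trial budget. The `Model` interface (`sample`/`prob`) and the specific acceptance rule are assumptions for illustration.

```python
import random


def consensus_sample(models, prompt, s, max_tries=100):
    """Hypothetical consensus-sampling sketch (not the paper's exact algorithm).

    `models` is a list of k objects exposing:
      - sample(prompt) -> a candidate output y
      - prob(prompt, y) -> the probability the model assigns to y

    A candidate proposed by one model is accepted in proportion to the
    average of the s smallest probabilities across all k models, so an
    output survives only if even the most "skeptical" safe models give
    it non-negligible mass. Returns None (abstains) when no candidate is
    accepted within max_tries, i.e. when agreement is insufficient.
    """
    for _ in range(max_tries):
        proposer = random.choice(models)
        y = proposer.sample(prompt)
        probs = sorted(m.prob(prompt, y) for m in models)
        consensus = sum(probs[:s]) / s  # average of the s smallest probabilities
        accept_prob = min(1.0, consensus / proposer.prob(prompt, y))
        if random.random() < accept_prob:
            return y
    return None  # abstain: insufficient agreement among the models
```

With models whose output distributions overlap, acceptance is quick; with disjoint supports and `s = 1`, the consensus score is zero for every candidate and the sampler always abstains, mirroring the abstract's caveat that the approach requires some overlap among safe models.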

Country of Origin
🇺🇸 🇮🇱 United States, Israel

Page Count
19 pages

Category
Computer Science:
Artificial Intelligence