Safety Game: Balancing Safe and Informative Conversations with Blackbox Agentic AI using LP Solvers

Published: October 10, 2025 | arXiv ID: 2510.09330v1

By: Tuan Nguyen, Long Tran-Thanh

Potential Business Impact:

Makes AI safer without retraining it.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Ensuring that large language models (LLMs) comply with safety requirements is a central challenge in AI deployment. Existing alignment approaches operate primarily during training, for example through fine-tuning or reinforcement learning from human feedback, but these methods are costly and inflexible, requiring retraining whenever new requirements arise. Recent efforts toward inference-time alignment mitigate some of these limitations but still assume access to model internals, which is often impractical and unsuitable for third-party stakeholders who lack such access. In this work, we propose a model-independent, black-box framework for safety alignment that requires neither retraining nor access to the underlying LLM architecture. As a proof of concept, we address the trade-off between generating safe but uninformative answers and helpful yet potentially risky ones. We formulate this dilemma as a two-player zero-sum game whose minimax equilibrium captures the optimal balance between safety and helpfulness. LLM agents operationalize this framework by invoking a linear programming solver at inference time to compute equilibrium strategies. Our results demonstrate the feasibility of black-box safety alignment, offering a scalable and accessible pathway for stakeholders, including smaller organizations and entities in resource-constrained settings, to enforce safety across rapidly evolving LLM ecosystems.
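
The abstract describes computing the minimax equilibrium of a two-player zero-sum game with a linear programming solver at inference time. The sketch below shows the standard LP formulation of that equilibrium computation on a toy 2x2 game; the payoff matrix, the strategy labels, and the use of scipy.optimize.linprog are illustrative assumptions, not the paper's actual construction.

```python
# Minimal sketch: minimax equilibrium of a zero-sum game via LP.
# The payoff values below are hypothetical, chosen only for illustration.
import numpy as np
from scipy.optimize import linprog

# Rows: responder strategies (e.g., "answer cautiously" vs. "answer fully").
# Columns: query types. Entries: assumed utility to the responder.
A = np.array([
    [0.6, 0.9],
    [0.9, 0.2],
])
m, n = A.shape

# Variables: x (row player's mixed strategy, length m) and v (game value).
# Maximize v subject to (A^T x)_j >= v for all j, sum(x) = 1, x >= 0.
c = np.zeros(m + 1)
c[-1] = -1.0                                # linprog minimizes, so minimize -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])   # v - (A^T x)_j <= 0 for each column j
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # probabilities sum to 1
b_eq = np.array([1.0])
bounds = [(0.0, 1.0)] * m + [(None, None)]  # v is unbounded in sign

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x_star, v_star = res.x[:m], res.x[-1]
print("equilibrium mix:", x_star, "game value:", v_star)
```

For the toy matrix above this yields the mix (0.7, 0.3) with game value 0.69: the responder answers cautiously 70% of the time, which guarantees the best worst-case balance between safety and helpfulness across both query types.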

Country of Origin
🇬🇧 United Kingdom

Page Count
18 pages

Category
Computer Science:
Machine Learning (CS)