BEDA: Belief Estimation as Probabilistic Constraints for Performing Strategic Dialogue Acts
By: Hengli Li, Zhaoxin Yu, Qi Shen, and more
Strategic dialogue requires agents to execute distinct dialogue acts, for which belief estimation is essential. While prior work often estimates beliefs accurately, it lacks a principled mechanism for using those beliefs during generation. We bridge this gap by first formalizing two core acts, Adversarial and Alignment, and then operationalizing them as probabilistic constraints on what an agent may generate. We instantiate this idea in BEDA, a framework consisting of a world set, a belief estimator, and a conditional generator that selects acts and realizes utterances consistent with the inferred beliefs. Across three settings, the Conditional Keeper Burglar Game (CKBG, adversarial), Mutual Friends (MF, cooperative), and CaSiNo (negotiation), BEDA consistently outperforms strong baselines: on CKBG it improves success rate by at least 5.0 points across backbones and by 20.6 points with GPT-4.1-nano; on Mutual Friends it achieves an average improvement of 9.3 points; and on CaSiNo it achieves the best deal outcome relative to all baselines. These results indicate that casting estimated beliefs as probabilistic constraints provides a simple, general mechanism for reliable strategic dialogue.
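To make the three-part architecture concrete, here is a minimal sketch of how a world set, a belief estimator, and a constraint-filtered conditional generator could fit together. All names (World, estimate_beliefs, adversarial_constraint, the thresholds, and the toy string matching) are illustrative assumptions for exposition, not the paper's actual API or method.

```python
# Minimal sketch, assuming: a world set of candidate states, a belief estimator
# that scores them from the dialogue history, and a conditional generator that
# keeps only candidate utterances satisfying a probabilistic constraint tied to
# the chosen dialogue act. All identifiers here are hypothetical.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass(frozen=True)
class World:
    """One candidate world state (e.g., a possible hidden item or attribute)."""
    name: str


def estimate_beliefs(history: List[str], worlds: List[World]) -> Dict[str, float]:
    """Toy belief estimator: uniform prior, renormalized over worlds still
    consistent with the dialogue history (a stand-in for an LLM-based estimator)."""
    consistent = [w for w in worlds if not any(f"not {w.name}" in turn for turn in history)]
    if not consistent:
        consistent = worlds
    p = 1.0 / len(consistent)
    return {w.name: (p if w in consistent else 0.0) for w in worlds}


def adversarial_constraint(beliefs: Dict[str, float], secret: str,
                           threshold: float = 0.5) -> Callable[[str], bool]:
    """Adversarial act (illustrative): block utterances that mention a secret
    the other agent already assigns high probability to."""
    def ok(utterance: str) -> bool:
        return not (secret in utterance and beliefs.get(secret, 0.0) >= threshold)
    return ok


def alignment_constraint(beliefs: Dict[str, float],
                         threshold: float = 0.3) -> Callable[[str], bool]:
    """Alignment act (illustrative): allow only utterances that reference some
    world the agent currently believes is sufficiently probable."""
    def ok(utterance: str) -> bool:
        return any(name in utterance and p >= threshold for name, p in beliefs.items())
    return ok


def conditional_generate(candidates: List[str],
                         constraint: Callable[[str], bool]) -> List[str]:
    """Conditional generator: keep only candidate utterances that satisfy the
    active probabilistic constraint."""
    return [u for u in candidates if constraint(u)]


if __name__ == "__main__":
    worlds = [World("key"), World("map"), World("coin")]
    history = ["I am sure it is not coin"]
    beliefs = estimate_beliefs(history, worlds)

    candidates = ["The key is in the drawer",
                  "Let's talk about the map",
                  "Nothing to see here"]

    print("adversarial-safe:",
          conditional_generate(candidates, adversarial_constraint(beliefs, secret="key")))
    print("alignment-consistent:",
          conditional_generate(candidates, alignment_constraint(beliefs)))
```

The point of the sketch is the separation of concerns: beliefs are estimated once per turn over the world set, and each dialogue act is realized purely as a filter (constraint) over candidate utterances, rather than being baked into the generator itself.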
Similar Papers
The Belief-Desire-Intention Ontology for modelling mental reality and agency
Artificial Intelligence
Helps AI understand thoughts and make better decisions.
Modeling Uncertainty: Constraint-Based Belief States in Imperfect-Information Games
Artificial Intelligence
Helps game players guess hidden pieces better.
The Silent Scholar Problem: A Probabilistic Framework for Breaking Epistemic Asymmetry in LLM Agents
Artificial Intelligence
Agents learn faster by sharing what they don't know.