Towards Trustworthy Multi-Turn LLM Agents via Behavioral Guidance
By: Gonca Gürsun
Large Language Models demonstrate strong reasoning and generation abilities, yet their behavior in multi-turn tasks often lacks reliability and verifiability. We present a task completion framework that enables LLM-based agents to act under explicit behavioral guidance in environments described by reinforcement learning formalisms with defined observation, action, and reward signals. The framework integrates three components: a lightweight task profiler that selects reasoning and generation strategies, a reasoning module that learns verifiable observation-action mappings, and a generation module that enforces constraint-compliant outputs through validation or deterministic synthesis. We show that as the agent interacts with the environment, these components co-evolve, yielding trustworthy behavior.
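The three-component loop described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: all class names, method names, and the toy environment (which rewards actions matching the parity of the observation) are assumptions introduced here for clarity.

```python
# Hypothetical sketch of the profiler / reasoning / generation loop.
# All names and the toy reward rule are illustrative assumptions.

class TaskProfiler:
    """Selects a reasoning/generation strategy from a coarse task signature."""
    def select_strategy(self, observation):
        # e.g. rule-based mapping for discrete observations, generative otherwise
        return "rule_based" if isinstance(observation, int) else "generative"

class ReasoningModule:
    """Learns a verifiable observation -> action mapping from reward feedback."""
    def __init__(self):
        self.mapping = {}  # observation -> best action seen so far

    def act(self, observation, action_space):
        # Use the learned mapping when available; otherwise fall back.
        return self.mapping.get(observation, action_space[0])

    def update(self, observation, action, reward):
        # Keep only mappings that were positively rewarded (verifiable by replay).
        if reward > 0:
            self.mapping[observation] = action

class GenerationModule:
    """Enforces constraint compliance via validation or deterministic fallback."""
    def __init__(self, constraints):
        self.constraints = constraints  # list of predicates over candidate outputs

    def emit(self, candidate, fallback):
        ok = all(check(candidate) for check in self.constraints)
        return candidate if ok else fallback  # deterministic synthesis on failure

# Toy episode: reward +1 when action parity matches observation parity.
profiler, reasoner = TaskProfiler(), ReasoningModule()
generator = GenerationModule(constraints=[lambda a: a >= 0])
actions = [0, 1, 2]

for obs in [4, 4, 7, 4]:
    strategy = profiler.select_strategy(obs)       # pick a strategy per task
    a = reasoner.act(obs, actions)                 # propose an action
    a = generator.emit(a, fallback=actions[0])     # enforce output constraints
    reward = 1 if (obs % 2) == (a % 2) else -1     # environment feedback
    reasoner.update(obs, a, reward)                # co-evolve the mapping
```

As the loop runs, only observation-action pairs that earned positive reward survive in `reasoner.mapping`, giving a small, auditable record of the agent's learned behavior.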