Factors That Support Grounded Responses in LLM Conversations: A Rapid Review
By: Gabriele Cesar Iwashima, Claudia Susie Rodrigues, Claudio Dipolitto, and others
Potential Business Impact:
Helps AI conversations stay on topic, grounded in context, and truthful.
Large language models (LLMs) may generate outputs that are misaligned with user intent, lack contextual grounding, or exhibit hallucinations during conversation, which compromises the reliability of LLM-based applications. This review aimed to identify and analyze techniques that align LLM responses with conversational goals, ensure grounding, and reduce hallucination and topic drift. We conducted a Rapid Review guided by the PRISMA framework and the PICO strategy to structure the search, filtering, and selection processes. The alignment strategies identified were categorized according to the LLM lifecycle phase in which they operate: inference-time, post-training, and reinforcement learning-based methods. Among these, inference-time approaches emerged as particularly efficient, aligning outputs without retraining while supporting user intent, contextual grounding, and hallucination mitigation. The reviewed techniques provided structured mechanisms for improving the quality and reliability of LLM responses across key alignment objectives.
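To make the inference-time family more concrete, the Python sketch below gives a generic illustration, not a method taken from the review: the function names (`build_grounded_prompt`, `is_grounded`, `answer_with_grounding`) and the `call_llm` callable are hypothetical, and the lexical-overlap check is only a crude stand-in for a real groundedness verifier. It shows a common pattern of constraining the prompt to retrieved passages, checking the answer against them, and retrying or abstaining when grounding fails, all without retraining the model.

```python
# Illustrative inference-time grounding loop (generic sketch, not from the paper).
# `call_llm` is a hypothetical stand-in for any chat-completion API; the overlap
# check is a simple lexical proxy for a real groundedness verifier.

from typing import Callable, List


def build_grounded_prompt(question: str, passages: List[str]) -> str:
    """Constrain the model to answer only from the supplied passages."""
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using ONLY the passages below. "
        "If the passages do not contain the answer, say you don't know.\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}\nAnswer:"
    )


def is_grounded(answer: str, passages: List[str], threshold: float = 0.6) -> bool:
    """Crude proxy: fraction of answer tokens that also appear in the passages."""
    passage_tokens = set(" ".join(passages).lower().split())
    answer_tokens = [t for t in answer.lower().split() if t.isalpha()]
    if not answer_tokens:
        return False
    overlap = sum(t in passage_tokens for t in answer_tokens) / len(answer_tokens)
    return overlap >= threshold


def answer_with_grounding(
    question: str,
    passages: List[str],
    call_llm: Callable[[str], str],  # hypothetical LLM call
    max_retries: int = 2,
) -> str:
    """Generate, verify against the passages, and retry or abstain."""
    prompt = build_grounded_prompt(question, passages)
    for _ in range(max_retries + 1):
        answer = call_llm(prompt)
        if is_grounded(answer, passages):
            return answer
    return "I don't know based on the provided passages."
```

In practice the overlap heuristic would be replaced by an NLI-based or LLM-judge groundedness check, but the control flow is the point: grounding is enforced at inference time rather than baked into the model weights.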
Similar Papers
Lessons from Training Grounded LLMs with Verifiable Rewards
Computation and Language
Makes AI answers more truthful and verifiable.
Hallucination Detection and Mitigation in Large Language Models
Artificial Intelligence
Makes AI tell the truth instead of making things up.
InteGround: On the Evaluation of Verification and Retrieval Planning in Integrative Grounding
Computation and Language
Helps computers combine facts to answer questions.