Factors That Support Grounded Responses in LLM Conversations: A Rapid Review

Published: November 24, 2025 | arXiv ID: 2511.21762v1

By: Gabriele Cesar Iwashima, Claudia Susie Rodrigues, Claudio Dipolitto, and more

Potential Business Impact:

Makes AI conversations more accurate, on-topic, and truthful.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) may generate outputs that are misaligned with user intent, lack contextual grounding, or exhibit hallucinations during conversation, which compromises the reliability of LLM-based applications. This review aimed to identify and analyze techniques that align LLM responses with conversational goals, ensure grounding, and reduce hallucination and topic drift. We conducted a Rapid Review guided by the PRISMA framework and the PICO strategy to structure the search, filtering, and selection processes. The alignment strategies identified were categorized according to the LLM lifecycle phase in which they operate: inference-time, post-training, and reinforcement learning-based methods. Among these, inference-time approaches emerged as particularly efficient, aligning outputs without retraining while supporting user intent, contextual grounding, and hallucination mitigation. The reviewed techniques provided structured mechanisms for improving the quality and reliability of LLM responses across key alignment objectives.
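The inference-time methods highlighted in the abstract adjust model behavior at generation time rather than through retraining. As a rough illustration only (not a technique taken from this paper), the sketch below shows one common inference-time grounding pattern: retrieve supporting passages, then instruct the model to answer strictly from them. The names retrieve_context and call_llm are hypothetical placeholders for whatever retrieval and generation backends an application uses.

# Minimal sketch of an inference-time grounding step (illustrative only).
# retrieve_context and call_llm are hypothetical placeholders.

def retrieve_context(query: str) -> list[str]:
    """Hypothetical retriever: return passages relevant to the user's query."""
    # In a real system this would query a vector store or search index.
    return ["LLM outputs can drift from the conversation goal without grounding."]

def call_llm(prompt: str) -> str:
    """Hypothetical generation backend (e.g., any chat-completion API)."""
    return "Grounded answer based only on the supplied context."

def grounded_answer(query: str) -> str:
    # Inference-time alignment: constrain the model to retrieved context
    # instead of retraining it.
    passages = retrieve_context(query)
    context = "\n".join(f"- {p}" for p in passages)
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the context is insufficient, say so instead of guessing.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
    return call_llm(prompt)

if __name__ == "__main__":
    print(grounded_answer("Why do LLM responses sometimes drift off topic?"))

Because the constraint lives entirely in the prompt, this style of approach requires no weight updates, which is what makes inference-time methods comparatively efficient relative to post-training or reinforcement learning-based alignment.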

Page Count
28 pages

Category
Computer Science:
Computation and Language