MADREC: A Multi-Aspect Driven LLM Agent for Explainable and Adaptive Recommendation
By: Jiin Park, Misuk Kim
Potential Business Impact:
Suggests better movies and explains why.
Recent attempts to integrate large language models (LLMs) into recommender systems have gained momentum, but most remain limited to simple text generation or static prompt-based inference, failing to capture the complexity of user preferences and real-world interactions. This study proposes the Multi-Aspect Driven LLM Agent (MADRec), an autonomous LLM-based recommender that constructs user and item profiles through unsupervised extraction of multi-aspect information from reviews, and performs direct recommendation, sequential recommendation, and explanation generation. MADRec generates structured profiles via aspect-category-based summarization and applies Re-Ranking to construct high-density inputs. When the ground-truth item is missing from the output, a Self-Feedback mechanism dynamically adjusts the inference criteria. Experiments across multiple domains show that MADRec outperforms traditional and LLM-based baselines in both precision and explainability, and human evaluation further confirms the persuasiveness of the generated explanations.
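The abstract's pipeline (aspect-based profiles, Re-Ranking to a dense candidate set, Self-Feedback that relaxes criteria when the target item is missed) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function names (`build_profile`, `rerank`, `recommend_with_self_feedback`), the keyword-overlap scoring, and the use of a known target item to trigger feedback are all assumptions standing in for the actual LLM-driven components.

```python
# Hypothetical sketch of MADRec-style recommendation with Self-Feedback.
# The LLM profile summarizer and ranker are mocked with simple aspect matching.

def build_profile(reviews):
    """Group review snippets into aspect-category buckets (stand-in for
    the paper's aspect-category-based LLM summarization)."""
    profile = {}
    for aspect, text in reviews:
        profile.setdefault(aspect, []).append(text)
    return profile

def rerank(candidates, profile, top_k):
    """Re-Ranking stand-in: keep the top-k candidates sharing the most
    aspects with the user profile, producing a high-density input."""
    def score(item):
        return sum(1 for aspect in profile if aspect in item["aspects"])
    return sorted(candidates, key=score, reverse=True)[:top_k]

def recommend_with_self_feedback(candidates, profile, target, top_k=1,
                                 max_rounds=3):
    """Self-Feedback stand-in: if the ground-truth item is missing from
    the output, relax the inference criterion (widen top_k) and retry."""
    shortlist = []
    for _ in range(max_rounds):
        shortlist = rerank(candidates, profile, top_k)
        if any(item["title"] == target for item in shortlist):
            break
        top_k *= 2  # adjust the criterion and re-infer
    return shortlist
```

For example, with a profile built from reviews about acting and plot, an item matching both aspects outranks one matching only one; if the target item falls outside the initial shortlist, the loop widens `top_k` and retries instead of failing outright.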
Similar Papers
AdaRec: Adaptive Recommendation with LLMs via Narrative Profiling and Dual-Channel Reasoning
Computation and Language
Recommends products you'll love with less data.
AdaptRec: A Self-Adaptive Framework for Sequential Recommendations with Large Language Models
Information Retrieval
Helps computers suggest movies you'll like.
MR.Rec: Synergizing Memory and Reasoning for Personalized Recommendation Assistant with LLMs
Information Retrieval
Helps websites guess what you want to buy.