A Survey on Large Language Models in Multimodal Recommender Systems
By: Alejo Lopez-Avila, Jinhua Du
Potential Business Impact:
Helps online platforms suggest movies and products more effectively.
Multimodal recommender systems (MRS) integrate heterogeneous user and item data, such as text, images, and structured information, to enhance recommendation performance. The emergence of large language models (LLMs) introduces new opportunities for MRS by enabling semantic reasoning, in-context learning, and dynamic input handling. Compared to earlier pre-trained language models (PLMs), LLMs offer greater flexibility and generalisation capabilities but also introduce challenges related to scalability and model accessibility. This survey presents a comprehensive review of recent work at the intersection of LLMs and MRS, focusing on prompting strategies, fine-tuning methods, and data adaptation techniques. We propose a novel taxonomy to characterise integration patterns, identify transferable techniques from related recommendation domains, provide an overview of evaluation metrics and datasets, and point to possible future directions. We aim to clarify the emerging role of LLMs in multimodal recommendation and support future research in this rapidly evolving field.
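The prompting strategies the survey covers typically serialise a user's multimodal history (e.g. item titles plus captions derived from images) into a single text prompt so an LLM can rank candidates via in-context learning. A minimal illustrative sketch follows; the item data, field names, and formatting are hypothetical, not taken from any surveyed system:

```python
def build_recommendation_prompt(history, candidates):
    """Format a user's multimodal interaction history and candidate
    items into one in-context prompt for an LLM ranker.

    Visual content is represented by text captions, standing in for
    features a captioning model would normally extract from images.
    """
    lines = ["A user interacted with the following items:"]
    for item in history:
        lines.append(f"- {item['title']} (image: {item['caption']})")
    lines.append("Rank these candidate items for the user:")
    for i, item in enumerate(candidates, 1):
        lines.append(f"{i}. {item['title']} (image: {item['caption']})")
    lines.append("Answer with the candidate numbers in order of preference.")
    return "\n".join(lines)

# Hypothetical example data for illustration only.
history = [
    {"title": "Inception", "caption": "a spinning top on a table"},
    {"title": "Interstellar", "caption": "an astronaut near a black hole"},
]
candidates = [
    {"title": "Tenet", "caption": "agents moving backwards in time"},
    {"title": "Notting Hill", "caption": "a couple in a London bookshop"},
]
prompt = build_recommendation_prompt(history, candidates)
print(prompt)
```

In practice the resulting prompt would be sent to an LLM; fine-tuning approaches instead train the model on many such serialised examples rather than relying on zero-shot prompting.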
Similar Papers
Music Recommendation with Large Language Models: Challenges, Opportunities, and Evaluation
Information Retrieval
Helps music apps pick songs you'll love.
A Comprehensive Review on Harnessing Large Language Models to Overcome Recommender System Challenges
Information Retrieval
Makes online suggestions smarter and more personal.
Large Language Models for Multi-Robot Systems: A Survey
Robotics
Lets robots work together better using smart language.