CTRL-Rec: Controlling Recommender Systems With Natural Language
By: Micah Carroll, Adeline Foote, Kevin Feng, and more
Potential Business Impact:
Lets you tell apps what you want to see.
When users are dissatisfied with recommendations from a recommender system, they often lack fine-grained controls for changing them. Large language models (LLMs) offer a solution by allowing users to guide their recommendations through natural language requests (e.g., "I want to see respectful posts with a different perspective than mine"). We propose a method, CTRL-Rec, that allows for natural language control of traditional recommender systems in real-time with computational efficiency. Specifically, at training time, we use an LLM to simulate whether users would approve of items based on their language requests, and we train embedding models that approximate such simulated judgments. We then integrate these user-request-based predictions into the standard weighting of signals that traditional recommender systems optimize. At deployment time, we require only a single LLM embedding computation per user request, allowing for real-time control of recommendations. In experiments with the MovieLens dataset, our method consistently allows for fine-grained control across a diversity of requests. In a study with 19 Letterboxd users, we find that CTRL-Rec was positively received by users and significantly enhanced users' sense of control and satisfaction with recommendations compared to traditional controls.
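The abstract's deployment-time recipe, a single embedding computation per user request, whose similarity to precomputed item embeddings is blended into the recommender's usual score weighting, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `embed` function, the cosine-similarity approval proxy, and the `alpha` blending weight are all assumptions for demonstration.

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 8) -> np.ndarray:
    """Hypothetical stand-in for an LLM embedding model (CTRL-Rec would
    call a real one; this uses a deterministic hash-seeded vector)."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    v = np.random.default_rng(seed).normal(size=dim)
    return v / np.linalg.norm(v)

def ctrl_rec_scores(base_scores, item_embs, request: str, alpha: float = 0.5):
    """Blend traditional recommender scores with a request-approval signal
    approximated by embedding similarity, as the abstract describes."""
    req = embed(request)            # the single per-request embedding call
    approval = item_embs @ req      # cosine similarity (vectors are unit-norm)
    return (1 - alpha) * np.asarray(base_scores) + alpha * approval

# Usage: re-rank three items under a natural language request.
items = ["Movie A", "Movie B", "Movie C"]
item_embs = np.stack([embed(t) for t in items])  # precomputed offline
base = [0.9, 0.5, 0.7]                           # traditional recommender scores
scores = ctrl_rec_scores(base, item_embs, "a different perspective than mine")
ranking = [items[i] for i in np.argsort(-scores)]
```

With `alpha = 0` the ranking reduces to the traditional recommender's; larger values let the user's request reweight it, which is the real-time control the paper evaluates.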
Similar Papers
Token-Controlled Re-ranking for Sequential Recommendation via LLMs
Information Retrieval
Lets you tell computers exactly what you want.
Teaching Language Models to Critique via Reinforcement Learning
Machine Learning (CS)
Teaches computers to fix their own code mistakes.
RecMind: LLM-Enhanced Graph Neural Networks for Personalized Consumer Recommendations
Machine Learning (CS)
Suggests better things you might like.