Avoiding Over-Personalization with Rule-Guided Knowledge Graph Adaptation for LLM Recommendations
By: Fernando Spadea, Oshani Seneviratne
Potential Business Impact:
Shows you a wider variety of interesting things online, not just more of the same.
We present a lightweight neuro-symbolic framework to mitigate over-personalization in LLM-based recommender systems by adapting user-side Knowledge Graphs (KGs) at inference time. Instead of retraining models or relying on opaque heuristics, our method restructures a user's Personalized Knowledge Graph (PKG) to suppress feature co-occurrence patterns that reinforce Personalized Information Environments (PIEs), i.e., algorithmically induced filter bubbles that constrain content diversity. These adapted PKGs are used to construct structured prompts that steer the language model toward more diverse, Out-PIE recommendations while preserving topical relevance. We introduce a family of symbolic adaptation strategies, including soft reweighting, hard inversion, and targeted removal of biased triples, and a client-side learning algorithm that optimizes their application per user. Experiments on a recipe recommendation benchmark show that personalized PKG adaptations significantly increase content novelty while maintaining recommendation quality, outperforming global adaptation and naive prompt-based methods.
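The abstract names three symbolic adaptation strategies applied to a user's PKG before prompting. A minimal sketch of how they might look, assuming the PKG is a set of weighted (subject, predicate, object) triples; all names, weights, and the prompt format here are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical PKG: weighted (subject, predicate, object) triples.
pkg = {
    ("user", "likes", "spicy"): 0.9,
    ("user", "likes", "thai_cuisine"): 0.8,
    ("user", "likes", "baking"): 0.2,
}

# Triples assumed to reinforce the user's PIE (filter bubble).
biased = {("user", "likes", "spicy"), ("user", "likes", "thai_cuisine")}

def soft_reweight(pkg, biased, factor=0.5):
    """Dampen the weight of PIE-reinforcing triples."""
    return {t: (w * factor if t in biased else w) for t, w in pkg.items()}

def hard_invert(pkg, biased):
    """Flip the preference signal of biased triples (w -> 1 - w)."""
    return {t: (1.0 - w if t in biased else w) for t, w in pkg.items()}

def remove_triples(pkg, biased):
    """Drop biased triples from the PKG entirely."""
    return {t: w for t, w in pkg.items() if t not in biased}

def to_prompt(pkg):
    """Serialize the adapted PKG into a structured prompt fragment."""
    lines = [f"{s} {p} {o} (weight {w:.2f})"
             for (s, p, o), w in sorted(pkg.items())]
    return "User profile:\n" + "\n".join(lines)

# A per-user policy would choose which strategy to apply; here, reweighting.
adapted = soft_reweight(pkg, biased)
prompt = to_prompt(adapted)
```

In this sketch, the client-side learning step the abstract mentions would select among the three strategies (and their parameters, such as `factor`) per user, scoring each by the novelty and relevance of the recommendations the resulting prompt elicits.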
Similar Papers
Personalizing Large Language Models using Retrieval Augmented Generation and Knowledge Graph
Computation and Language
Helps chatbots give better answers using your personal info.
Ask Safely: Privacy-Aware LLM Query Generation for Knowledge Graphs
Information Retrieval
Keeps private data safe when asking computers questions.
From Symbolic to Neural and Back: Exploring Knowledge Graph-Large Language Model Synergies
Computation and Language
Makes computers smarter by connecting facts.