Personalizing Large Language Models using Retrieval Augmented Generation and Knowledge Graph
By: Deeksha Prahlad, Chanhee Lee, Dongha Kim, and more
Potential Business Impact:
Helps chatbots give better answers using your personal info.
The advent of large language models (LLMs) has enabled numerous applications, including generating responses to user queries in chatbots and other conversational assistants. Trained on vast amounts of data, LLMs often overfit, producing extraneous and incorrect content and thus hallucinating in their outputs. One root cause of these problems is the lack of timely, factual, and personalized information available to the LLM. In this paper, we propose an approach that addresses these problems by introducing retrieval augmented generation (RAG) with knowledge graphs (KGs) to help the LLM generate personalized responses tailored to each user. KGs have the advantage of storing continuously updated factual information in a structured way. While our KGs can hold a variety of frequently updated personal data, such as calendar, contact, and location data, we focus on calendar data in this paper. Our experimental results show that our approach understands personal information and generates accurate responses significantly better than baseline LLMs given personal data as plain text input, with a moderate reduction in response time.
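To make the pipeline concrete, below is a minimal sketch of the general idea described in the abstract: personal calendar facts are stored as knowledge-graph triples, the triples most relevant to a query are retrieved, and they are serialized into the prompt given to the LLM. All names here (Triple, retrieve, build_prompt, the toy keyword-overlap scoring, and the sample facts) are illustrative assumptions for exposition, not the authors' implementation; the actual LLM call is left out.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Triple:
    """One (subject, relation, object) fact in the personal knowledge graph."""
    subject: str
    relation: str
    obj: str

# Toy personal calendar KG; in the paper's setting this would be kept
# continuously up to date from the user's calendar.
CALENDAR_KG = [
    Triple("dentist_appointment", "scheduled_on", "2024-05-14"),
    Triple("dentist_appointment", "starts_at", "10:00"),
    Triple("team_meeting", "scheduled_on", "2024-05-14"),
    Triple("team_meeting", "starts_at", "15:00"),
]

def retrieve(query: str, kg: List[Triple], k: int = 3) -> List[Triple]:
    """Rank triples by naive keyword overlap with the query (a stand-in
    for a real KG retrieval step) and return the top-k."""
    q_tokens = set(query.lower().replace("?", "").split())
    def score(t: Triple) -> int:
        text = f"{t.subject} {t.relation} {t.obj}".replace("_", " ").lower()
        return len(q_tokens & set(text.split()))
    return sorted(kg, key=score, reverse=True)[:k]

def build_prompt(query: str, facts: List[Triple]) -> str:
    """Serialize retrieved triples into the context handed to the LLM."""
    lines = [
        f"- {t.subject.replace('_', ' ')} {t.relation.replace('_', ' ')} {t.obj}"
        for t in facts
    ]
    return (
        "Answer using only the personal facts below.\n"
        "Personal facts:\n" + "\n".join(lines) + f"\n\nQuestion: {query}"
    )

if __name__ == "__main__":
    question = "When is my dentist appointment?"
    prompt = build_prompt(question, retrieve(question, CALENDAR_KG))
    print(prompt)  # This prompt would then be sent to the LLM of choice.
```

In this sketch the KG grounds the model in current, user-specific facts instead of relying on whatever was memorized during training, which is the mechanism the paper credits for fewer hallucinations on personal queries.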
Similar Papers
Knowledge Graph Retrieval-Augmented Generation for LLM-based Recommendation
Information Retrieval
Helps online suggestions use better, newer facts.
Knowledge Graph-extended Retrieval Augmented Generation for Question Answering
Machine Learning (CS)
AI answers questions better by using facts.
Guiding Generative Storytelling with Knowledge Graphs
Computation and Language
Lets you change stories by editing a story map.