AtlasKV: Augmenting LLMs with Billion-Scale Knowledge Graphs in 20GB VRAM
By: Haoyu Huang, Hong Ting Tsang, Jiaxin Bai, and more
Potential Business Impact:
Makes AI remember more facts without slowing down.
Retrieval-augmented generation (RAG) has shown some success in augmenting large language models (LLMs) with external knowledge. However, as a non-parametric knowledge integration paradigm, RAG methods rely heavily on external retrieval modules and the retrieved textual context. For very large-scale knowledge augmentation in particular, they introduce substantial inference latency due to expensive searches and much longer relevant context. In this paper, we propose a parametric knowledge integration method, called AtlasKV, a scalable, effective, and general way to augment LLMs with billion-scale knowledge graphs (KGs) (e.g., 1B triples) at very little GPU memory cost (e.g., less than 20GB of VRAM). In AtlasKV, we introduce KG2KV and HiKVP to integrate KG triples into LLMs at scale with sub-linear time and memory complexity. AtlasKV maintains strong knowledge grounding and generalization performance using the LLMs' inherent attention mechanism, and it requires no external retrievers, long context priors, or retraining when adapting to new knowledge.
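To make the abstract's idea concrete, the sketch below illustrates the general pattern of storing KG triples as attention key-value pairs and reading them back with a pruned attention lookup. It is a minimal illustration under assumptions: the encoder, the key/value construction in kg2kv, and the top-k pruning in hikvp_attend are stand-ins, not the paper's actual KG2KV or HiKVP implementations.

```python
# Sketch of the abstract's idea: each KG triple (head, relation, tail) becomes a
# key/value pair that the model's attention can read directly, instead of
# retrieving text with an external retriever at inference time.
# The encoder, pooling, and pruning details are assumptions for illustration.
import torch
import torch.nn.functional as F

d_model = 64

def embed(text: str, dim: int = d_model) -> torch.Tensor:
    """Stand-in text encoder (hash-seeded random vector); a real system
    would use the LLM's own embeddings."""
    g = torch.Generator().manual_seed(abs(hash(text)) % (2**31))
    return torch.randn(dim, generator=g)

def kg2kv(triples):
    """Assumed KG2KV form: key encodes (head, relation) as a query-able slot,
    value encodes the tail entity."""
    keys = torch.stack([embed(h + " " + r) for h, r, _ in triples])
    values = torch.stack([embed(t) for _, _, t in triples])
    return keys, values

def hikvp_attend(query, keys, values, top_k=2):
    """Assumed HiKVP-style lookup: prune to the top-k matching keys first so the
    cost stays sub-linear in the number of stored triples, then apply ordinary
    scaled dot-product attention over the survivors."""
    scores = keys @ query / keys.shape[-1] ** 0.5
    top = torch.topk(scores, k=min(top_k, scores.numel())).indices
    weights = F.softmax(scores[top], dim=-1)
    return weights @ values[top]

triples = [
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
    ("Tokyo", "capital_of", "Japan"),
]
keys, values = kg2kv(triples)
query = embed("Paris capital_of")           # hidden state asking about Paris
knowledge_vector = hikvp_attend(query, keys, values)
print(knowledge_vector.shape)               # torch.Size([64]), fed back into the LLM
```

Because the knowledge lives in key-value pairs rather than retrieved text, adding new facts means appending new pairs; no retriever index or retraining is needed, which is the property the abstract emphasizes.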
Similar Papers
Personalizing Large Language Models using Retrieval Augmented Generation and Knowledge Graph
Computation and Language
Helps chatbots give better answers using your personal info.
Knowledge Graph-extended Retrieval Augmented Generation for Question Answering
Machine Learning (CS)
AI answers questions better by using facts.
GRIL: Knowledge Graph Retrieval-Integrated Learning with Large Language Models
Machine Learning (CS)
Helps AI answer questions by learning from connected facts.