KALE: Enhancing Knowledge Manipulation in Large Language Models via Knowledge-aware Learning
By: Qitan Lv, Tianyu Liu, Qiaosheng Zhang, et al.
Despite the impressive performance of large language models (LLMs) pretrained on vast knowledge corpora, advancing their knowledge manipulation, i.e., the ability to effectively recall, reason over, and transfer relevant knowledge, remains challenging. Existing methods mainly leverage Supervised Fine-Tuning (SFT) on labeled datasets to enhance LLMs' knowledge manipulation ability. However, we observe that SFT models still exhibit the "known & incorrect" phenomenon: they explicitly possess the knowledge relevant to a given question yet fail to leverage it to produce a correct answer. To address this challenge, we propose KALE (Knowledge-Aware LEarning), a post-training framework that leverages knowledge graphs (KGs) to generate high-quality rationales and enhance LLMs' knowledge manipulation ability. Specifically, KALE first introduces a Knowledge-Induced (KI) data synthesis method that efficiently extracts multi-hop reasoning paths from KGs to generate high-quality rationales for question-answer pairs. KALE then employs a Knowledge-Aware (KA) fine-tuning paradigm that internalizes rationale-guided reasoning by minimizing the KL divergence between the model's predictions with and without rationales. Extensive experiments on eight popular benchmarks across six different LLMs demonstrate the effectiveness of KALE, with accuracy improvements of up to 11.72% and an average of 4.18%.
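The two components can be illustrated with short, hedged sketches. First, a minimal Python sketch of the Knowledge-Induced data synthesis step: extracting a multi-hop reasoning path between a question entity and an answer entity in a KG and verbalizing it as a rationale. The use of networkx, the helper name synthesize_rationale, and the verbalization format are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (assumptions, not the paper's code): extract a multi-hop
# path between question and answer entities in a KG and verbalize it as a
# rationale. Entity linking and KG construction are assumed to be done.
import networkx as nx

def synthesize_rationale(kg: nx.DiGraph, q_entity: str, a_entity: str, max_hops: int = 3):
    """Return a verbalized multi-hop reasoning path, or None if none is found."""
    try:
        path = nx.shortest_path(kg, q_entity, a_entity)  # BFS on an unweighted KG
    except nx.NetworkXNoPath:
        return None
    if len(path) - 1 > max_hops:
        return None
    # Verbalize each (head, relation, tail) triple along the path.
    steps = [f"{h} --{kg.edges[h, t]['relation']}--> {t}"
             for h, t in zip(path, path[1:])]
    return " ; ".join(steps)
```

Second, a PyTorch sketch of the Knowledge-Aware fine-tuning objective: a KL term that pulls the model's answer distribution without a rationale in context toward its distribution with one. The function name kale_kl_loss, the tensor layout, and the teacher/student framing are illustrative assumptions consistent with the abstract, not a reproduction of KALE's training code.

```python
import torch
import torch.nn.functional as F

def kale_kl_loss(model, with_rationale_ids, without_rationale_ids, answer_len):
    """KL(p(answer | question, rationale) || p(answer | question)) over answer tokens."""
    # Teacher pass: question + KG-derived rationale + answer (no gradient).
    with torch.no_grad():
        teacher_logits = model(with_rationale_ids).logits[:, -answer_len - 1:-1, :]
    # Student pass: the same question and answer, but no rationale in context.
    student_logits = model(without_rationale_ids).logits[:, -answer_len - 1:-1, :]
    teacher_logp = F.log_softmax(teacher_logits, dim=-1)
    student_logp = F.log_softmax(student_logits, dim=-1)
    # KL(teacher || student): gradients flow only through the student pass.
    return F.kl_div(student_logp, teacher_logp, log_target=True, reduction="batchmean")
```

In practice this term would be combined with the standard SFT cross-entropy loss; the abstract does not specify the weighting, so any mixing coefficient here would be a further assumption.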
Similar Papers
Enhancing Large Language Models with Reliable Knowledge Graphs
Computation and Language
Improves LLM reasoning and factuality by grounding them in reliable knowledge graphs.
Large Language Models Meet Knowledge Graphs for Question Answering: Synthesis and Opportunities
Computation and Language
Surveys how combining LLMs with knowledge graphs improves question answering, and outlines open opportunities.
Aligning Knowledge Graphs and Language Models for Factual Accuracy
Computation and Language
Aligns knowledge graphs with language models to improve factual accuracy and reduce hallucination.