Knowledge-Aware Self-Correction in Language Models via Structured Memory Graphs
By: Swayamjit Saha
Potential Business Impact:
Fixes AI mistakes by checking facts.
Large Language Models (LLMs) are powerful yet prone to generating factual errors, commonly referred to as hallucinations. We present a lightweight, interpretable framework for knowledge-aware self-correction of LLM outputs using structured memory graphs based on RDF triples. Without retraining or fine-tuning, our method post-processes model outputs and corrects factual inconsistencies via external semantic memory. We demonstrate the approach using DistilGPT-2 and show promising results on simple factual prompts.
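The correction loop the abstract describes — extract a factual claim from the model's output, look it up in an external triple store, and rewrite on mismatch — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the memory contents, the toy extraction rule, and all function names here are hypothetical, and the real system pairs the graph with DistilGPT-2 outputs.

```python
# Hypothetical sketch of post-hoc factual correction against an
# RDF-style triple memory: (subject, predicate) -> object.
import re

# Toy external semantic memory standing in for the structured memory graph.
MEMORY = {
    ("France", "capital"): "Paris",
    ("Germany", "capital"): "Berlin",
}

def extract_claim(text):
    """Parse 'The capital of X is Y' into an (subject, predicate, object) triple."""
    m = re.search(r"[Tt]he capital of (\w+) is (\w+)", text)
    if m:
        return (m.group(1), "capital", m.group(2))
    return None  # no checkable claim found

def correct(text):
    """Check the claimed triple against memory; rewrite the object on mismatch."""
    claim = extract_claim(text)
    if claim is None:
        return text
    subj, pred, obj = claim
    truth = MEMORY.get((subj, pred))
    if truth is not None and truth != obj:
        return text.replace(obj, truth)
    return text

print(correct("The capital of France is Lyon."))  # -> The capital of France is Paris.
```

Because correction happens entirely in post-processing, no retraining or fine-tuning of the underlying model is needed, which is the key design choice the abstract emphasizes.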
Similar Papers
Enhancing Large Language Models with Reliable Knowledge Graphs
Computation and Language
Makes AI smarter and more truthful.
Aligning Knowledge Graphs and Language Models for Factual Accuracy
Computation and Language
Makes AI tell the truth, not make things up.
Knowledge Graphs for Enhancing Large Language Models in Entity Disambiguation
Machine Learning (CS)
Helps computers understand facts better, avoiding mistakes.