BALI: Enhancing Biomedical Language Representations through Knowledge Graph and Language Model Alignment
By: Andrey Sakhovskiy, Elena Tutubalina
Potential Business Impact:
Helps computers understand medical words better.
In recent years, there has been substantial progress in using pretrained Language Models (LMs) on a range of tasks aimed at improving the understanding of biomedical texts. Nonetheless, existing biomedical LMs show limited comprehension of complex, domain-specific concept structures and the factual information encoded in biomedical Knowledge Graphs (KGs). In this work, we propose BALI (Biomedical Knowledge Graph and Language Model Alignment), a novel joint LM and KG pre-training method that augments an LM with external knowledge by simultaneously learning a dedicated KG encoder and aligning the representations of both the LM and the graph. For a given textual sequence, we link biomedical concept mentions to the Unified Medical Language System (UMLS) KG and use local KG subgraphs as cross-modal positive samples for these mentions. Our empirical findings indicate that applying our method to several leading biomedical LMs, such as PubMedBERT and BioLinkBERT, improves their performance on a range of language understanding tasks and the quality of entity representations, even with minimal pre-training on a small alignment dataset sourced from PubMed scientific abstracts.
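The abstract describes aligning LM mention representations with local KG subgraph representations using cross-modal positive samples, but does not specify the training objective. Below is a minimal, hypothetical sketch of one common way such alignment is done: a symmetric in-batch InfoNCE contrastive loss between mention embeddings and subgraph embeddings. All names (SubgraphEncoder, alignment_loss, the temperature value, and the toy mean-pooling KG encoder) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of BALI-style cross-modal contrastive alignment.
# Assumes mention vectors come from an LM (e.g., pooled PubMedBERT token
# states of a linked mention) and subgraph vectors from a KG encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubgraphEncoder(nn.Module):
    """Toy stand-in for a KG encoder: mean-pools node embeddings of a
    local UMLS subgraph and projects into the shared alignment space."""
    def __init__(self, num_concepts: int, dim: int = 256):
        super().__init__()
        self.node_emb = nn.Embedding(num_concepts, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, node_ids: torch.Tensor) -> torch.Tensor:
        # node_ids: (batch, nodes_per_subgraph) UMLS concept indices
        return self.proj(self.node_emb(node_ids).mean(dim=1))

def alignment_loss(mention_vecs, graph_vecs, temperature=0.07):
    """Symmetric InfoNCE: each mention's linked subgraph is its positive;
    the other subgraphs in the batch act as in-batch negatives."""
    m = F.normalize(mention_vecs, dim=-1)
    g = F.normalize(graph_vecs, dim=-1)
    logits = m @ g.t() / temperature           # (batch, batch) similarities
    targets = torch.arange(m.size(0))          # diagonal = positive pairs
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

# Usage: random tensors stand in for the LM's pooled mention embeddings;
# the toy KG encoder produces the matching subgraph embeddings.
batch, dim = 8, 256
kg_encoder = SubgraphEncoder(num_concepts=10_000, dim=dim)
mention_vecs = torch.randn(batch, dim)
graph_vecs = kg_encoder(torch.randint(0, 10_000, (batch, 5)))
loss = alignment_loss(mention_vecs, graph_vecs)
loss.backward()
print(f"alignment loss: {loss.item():.4f}")
```

In a full pre-training setup, this alignment term would typically be optimized jointly with the LM's original objective (e.g., masked language modeling), so the LM gains KG-grounded entity representations without forgetting its text understanding; the sketch above isolates only the alignment step.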
Similar Papers
KG-BiLM: Knowledge Graph Embedding via Bidirectional Language Models
Computation and Language
Connects facts and words for smarter AI.
From Knowledge to Treatment: Large Language Model Assisted Biomedical Concept Representation for Drug Repurposing
Computation and Language
Finds new uses for old medicines faster.
Automated Construction of Medical Indicator Knowledge Graphs Using Retrieval Augmented Large Language Models
Artificial Intelligence
Builds smart doctor tools from medical texts.