Using External Knowledge to Enhance PLMs for Semantic Matching
By: Min Li, Chun Yuan
Potential Business Impact:
Helps computers understand word meanings better.
Modeling semantic relevance has long been a challenging and critical task in natural language processing. In recent years, the emergence of massive amounts of annotated data has made it feasible to train complex models, such as neural network-based reasoning models. These models perform well in practical applications and have achieved the current state-of-the-art results. However, even with such large-scale annotated data, we still need to ask: can machines learn all the knowledge necessary to perform semantic relevance detection from this data alone? If not, how can neural network-based models incorporate external knowledge, and how can relevance detection models be constructed to make full use of it? In this paper, we use external knowledge to enhance a pre-trained semantic relevance discrimination model. Experimental results on 10 public datasets show that our method achieves consistent performance improvements over the baseline model.
Similar Papers
Knowledge-augmented Pre-trained Language Models for Biomedical Relation Extraction
Computation and Language
Helps computers find connections in science papers.
Semantic Mastery: Enhancing LLMs with Advanced Natural Language Understanding
Computation and Language
Makes AI understand and talk like people.
Enhancing LLM Knowledge Learning through Generalization
Computation and Language
Helps computers remember new facts without forgetting old ones.