Static Word Embeddings for Sentence Semantic Representation
By: Takashi Wada, Yuki Hirakawa, Ryotaro Shimizu, and more
Potential Business Impact:
Lets applications compare sentence meaning quickly and cheaply by averaging word vectors, without running a large Transformer model at inference time.
We propose new static word embeddings optimised for sentence semantic representation. We first extract word embeddings from a pre-trained Sentence Transformer and improve them with sentence-level principal component analysis, followed by either knowledge distillation or contrastive learning. During inference, we represent sentences by simply averaging word embeddings, which incurs little computational cost. We evaluate our models on both monolingual and cross-lingual tasks and show that our model substantially outperforms existing static models on sentence semantic tasks, and even rivals a basic Sentence Transformer model (SimCSE) on some data sets. Lastly, we perform a variety of analyses and show that our method successfully removes word embedding components that are irrelevant to sentence semantics, and adjusts the vector norms based on the influence of words on sentence semantics.
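To make the inference step concrete, here is a minimal sketch of the mean-pooling scheme described in the abstract. The vocabulary and random vectors are hypothetical placeholders standing in for the paper's trained static embeddings; the point is that, once those embeddings exist, scoring two sentences needs only table lookups, an average, and a cosine similarity, with no Transformer forward pass.

```python
import numpy as np

# Toy static embedding table. In the paper, these vectors would be extracted
# from a pre-trained Sentence Transformer and refined offline (sentence-level
# PCA plus distillation or contrastive learning); here they are random stand-ins.
rng = np.random.default_rng(0)
DIM = 300
vocab = {w: rng.normal(size=DIM) for w in
         "the cat sat on a mat dog slept rug".split()}

def embed_sentence(sentence: str) -> np.ndarray:
    """Represent a sentence as the average of its word vectors."""
    vectors = [vocab[w] for w in sentence.lower().split() if w in vocab]
    if not vectors:
        return np.zeros(DIM)
    return np.mean(vectors, axis=0)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two sentence vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

s1 = embed_sentence("The cat sat on the mat")
s2 = embed_sentence("A dog slept on the rug")
print(f"similarity: {cosine(s1, s2):.3f}")
```

Because all the heavy lifting happens offline when the embeddings are trained, this lookup-and-average step is what makes the approach cheap enough to rival a Sentence Transformer at a fraction of the inference cost.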
Similar Papers
A Comparative Analysis of Static Word Embeddings for Hungarian
Computation and Language
Helps computers understand Hungarian words better.
On Self-improving Token Embeddings
Computation and Language
Helps computers understand words better, even new ones.
Sentence Embeddings as an intermediate target in end-to-end summarisation
Computation and Language
Summarizes long reviews better by picking key sentences.