Text Embeddings Should Capture Implicit Semantics, Not Just Surface Meaning
By: Yiqun Sun, Qiang Huang, Anthony K. H. Tung, and more
Potential Business Impact:
Teaches computers to understand hidden meanings in words.
This position paper argues that the text embedding research community should move beyond surface meaning and embrace implicit semantics as a central modeling goal. Text embedding models have become foundational in modern NLP, powering a wide range of applications and drawing increasing research attention. Yet, much of this progress remains narrowly focused on surface-level semantics. In contrast, linguistic theory emphasizes that meaning is often implicit, shaped by pragmatics, speaker intent, and sociocultural context. Current embedding models are typically trained on data that lacks such depth and evaluated on benchmarks that reward the capture of surface meaning. As a result, they struggle with tasks requiring interpretive reasoning, speaker stance, or social meaning. Our pilot study highlights this gap, showing that even state-of-the-art models perform only marginally better than simplistic baselines on implicit semantics tasks. To address this, we call for a paradigm shift: embedding research should prioritize more diverse and linguistically grounded training data, design benchmarks that evaluate deeper semantic understanding, and explicitly frame implicit meaning as a core modeling objective, better aligning embeddings with real-world language complexity.
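To make the gap concrete, here is a minimal, hypothetical sketch of the kind of comparison the pilot study describes: scoring a sentence pair whose surface wording overlaps heavily but whose implied (sarcastic) meaning diverges, using an off-the-shelf embedding model against a simple token-overlap baseline. The model choice (all-MiniLM-L6-v2), the example pair, and the Jaccard baseline are our illustrative assumptions, not the paper's actual experimental setup.

```python
# Sketch only: contrast an embedding model's similarity score with a
# surface-level bag-of-words baseline on a pair where the implicit
# meaning (sarcasm) flips the stance despite near-identical wording.
from sentence_transformers import SentenceTransformer, util

# The second sentence is sarcastic, so its implied stance is roughly
# the opposite of the first, even though the words largely match.
pairs = [
    ("The service here is excellent.",
     "Oh sure, the service here is just excellent."),
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

def jaccard(a: str, b: str) -> float:
    """Token-overlap baseline that sees only surface form."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

for s1, s2 in pairs:
    emb = model.encode([s1, s2])
    cos = util.cos_sim(emb[0], emb[1]).item()
    print(f"cosine={cos:.2f}  jaccard={jaccard(s1, s2):.2f}")
    # If the embedding captured the sarcastic (implicit) reading, its
    # cosine score should fall well below what surface overlap predicts;
    # the position paper's claim is that, in practice, it usually does not.
```

The Jaccard baseline is informative here precisely because it is blind to meaning: an embedding model that tracks it closely on such pairs is, in effect, also scoring surface form rather than speaker intent.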
Similar Papers
Enhancing Recommender Systems Using Textual Embeddings from Pre-trained Language Models
Information Retrieval
Helps movie recommenders understand what you like.
Prediction is not Explanation: Revisiting the Explanatory Capacity of Mapping Embeddings
Computation and Language
AI can't reliably show what it learned.
One Word Is Not Enough: Simple Prompts Improve Word Embeddings
Computation and Language
Makes computers understand single words better.