Revealing the Numeracy Gap: An Empirical Investigation of Text Embedding Models
By: Ningyuan Deng, Hanyu Duan, Yixuan Tang, and more
Potential Business Impact:
Shows that computers still struggle to tell numbers apart in text, which matters for money and health applications.
Text embedding models are widely used in natural language processing applications. However, their capability is often benchmarked on tasks that do not require understanding nuanced numerical information in text. As a result, it remains unclear whether current embedding models can precisely encode numerical content, such as numbers, into embeddings. This question is critical because embedding models are increasingly applied in domains where numbers matter, such as finance and healthcare. For example, "Company X's market share grew by 2%" should be interpreted very differently from "Company X's market share grew by 20%", even though both indicate growth in market share. This study examines whether text embedding models can capture such nuances. Using synthetic data in a financial context, we evaluate 13 widely used text embedding models and find that they generally struggle to capture numerical details accurately. Our further analyses provide deeper insights into embedding numeracy, informing future research on strengthening embedding model-based NLP systems with improved capacity for handling numerical content.
Similar Papers
Pre-trained Language Models Learn Remarkably Accurate Representations of Numbers
Computation and Language
Shows computers can represent numbers in text surprisingly well.
Unravelling the Mechanisms of Manipulating Numbers in Language Models
Computation and Language
Traces how computers go wrong when working with numbers.
Comparative Evaluation of Embedding Representations for Financial News Sentiment Analysis
Machine Learning (CS)
Compares ways for computers to read the mood of money news.