Score: 3

Leveraging Decoder Architectures for Learned Sparse Retrieval

Published: April 25, 2025 | arXiv ID: 2504.18151v1

By: Jingfen Qiao, Thong Nguyen, Evangelos Kanoulas, and more

BigTech Affiliations: Johns Hopkins University

Potential Business Impact:

Shows how different language model architectures can produce sparse representations that help search systems retrieve information more effectively.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Learned Sparse Retrieval (LSR) has traditionally focused on small-scale encoder-only transformer architectures. With the advent of large-scale pre-trained language models, their capability to generate sparse representations for retrieval tasks across different transformer-based architectures, including encoder-only, decoder-only, and encoder-decoder models, remains largely unexplored. This study investigates the effectiveness of LSR across these architectures, exploring various sparse representation heads and model scales. Our results highlight the limitations of using large language models to create effective sparse representations in zero-shot settings, identifying challenges such as inappropriate term expansions and, conversely, reduced performance when expansion is lacking. We find that the encoder-decoder architecture with a multi-token decoding approach achieves the best performance among the three backbones. While the decoder-only model performs worse than the encoder-only model, it shows the potential to outperform it when scaled to a larger number of parameters.
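As background for the abstract above, the sketch below shows the core idea of a learned sparse retrieval model: map text to a vocabulary-sized, non-negative weight vector via a sparse representation head, then score query-document pairs with a sparse dot product. This is a minimal illustration assuming a SPLADE-style masked-language-model head and a small encoder-only backbone (distilbert-base-uncased); the backbone, head, and pooling choices here are assumptions for illustration, not the paper's exact configuration.

```python
# Minimal sketch of a SPLADE-style sparse representation head (illustrative
# assumptions; not the paper's exact method or backbone choice).
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "distilbert-base-uncased"  # placeholder backbone, not from the paper
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

def sparse_representation(text: str) -> torch.Tensor:
    """Map text to a |vocabulary|-sized sparse weight vector."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits            # (1, seq_len, vocab_size)
    # Log-saturated ReLU keeps term weights non-negative and encourages sparsity.
    weights = torch.log1p(torch.relu(logits))
    # Max-pool over sequence positions, ignoring padding.
    mask = inputs["attention_mask"].unsqueeze(-1)  # (1, seq_len, 1)
    weights = (weights * mask).max(dim=1).values   # (1, vocab_size)
    return weights.squeeze(0)

# Score a query-document pair with a sparse dot product.
q = sparse_representation("what is learned sparse retrieval")
d = sparse_representation("Learned sparse retrieval encodes text as weighted vocabulary terms.")
print(float(q @ d))
```

The paper's contribution is to compare sparse representation heads of this kind across encoder-only, decoder-only, and encoder-decoder backbones at different model scales, rather than the specific encoder-only setup sketched here.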

Country of Origin
Netherlands, United States

Repos / Data Links

Page Count
18 pages

Category
Computer Science:
Information Retrieval