Leveraging Decoder Architectures for Learned Sparse Retrieval
By: Jingfen Qiao, Thong Nguyen, Evangelos Kanoulas, and more
Potential Business Impact:
Helps computers find information better by using different kinds of language-model brains.
Learned Sparse Retrieval (LSR) has traditionally focused on small-scale encoder-only transformer architectures. With the advent of large-scale pre-trained language models, their capability to generate sparse representations for retrieval tasks across different transformer-based architectures, including encoder-only, decoder-only, and encoder-decoder models, remains largely unexplored. This study investigates the effectiveness of LSR across these architectures, exploring various sparse representation heads and model scales. Our results highlight the limitations of large language models for creating effective sparse representations in zero-shot settings, identifying challenges such as inappropriate term expansions and reduced performance due to a lack of expansion. We find that the encoder-decoder architecture with a multi-token decoding approach achieves the best performance among the three backbones. While the decoder-only model performs worse than the encoder-only model, it shows the potential to outperform the encoder-only model when scaled to a large number of parameters.
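For context on what a sparse representation head does, the sketch below shows a common SPLADE-style formulation on top of an encoder-only masked language model: vocabulary logits are passed through a log-saturated ReLU and max-pooled over the sequence, yielding a vocabulary-sized vector of term weights. The model name (bert-base-uncased), pooling choice, and dot-product scoring are illustrative assumptions, not the paper's exact configuration or heads.

```python
# Minimal sketch of a SPLADE-style sparse representation head, assuming an
# encoder-only masked language model; the paper's decoder-only and
# encoder-decoder variants would use different backbones and heads.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def sparse_representation(text: str) -> torch.Tensor:
    """Map text to a |vocab|-dimensional sparse vector of term weights."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits               # (1, seq_len, vocab_size)
    weights = torch.log1p(torch.relu(logits))         # log-saturated ReLU
    mask = inputs["attention_mask"].unsqueeze(-1)     # ignore padding positions
    rep = torch.max(weights * mask, dim=1).values     # max-pool over sequence
    return rep.squeeze(0)                             # (vocab_size,)

# Relevance is the dot product between query and document sparse vectors.
query_rep = sparse_representation("learned sparse retrieval")
doc_rep = sparse_representation("decoder architectures for sparse retrieval")
score = torch.dot(query_rep, doc_rep)
print(f"nonzero query terms: {(query_rep > 0).sum().item()}, score: {score.item():.2f}")
```

Because the output vector is indexed by vocabulary terms, the nonzero entries can be stored in a standard inverted index, which is what makes this family of models practical for retrieval at scale.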
Similar Papers
CSPLADE: Learned Sparse Retrieval with Causal Language Models
Information Retrieval
Finds information faster with smaller computer brains.
Effective Inference-Free Retrieval for Learned Sparse Representations
Information Retrieval
Makes searching for information much faster.
From Hype to Insight: Rethinking Large Language Model Integration in Visual Speech Recognition
Sound
Helps computers understand spoken words from lip movements.