LLM4ES: Learning User Embeddings from Event Sequences via Large Language Models
By: Aleksei Shestov, Omar Zoloev, Maksim Makarenko, and more
Potential Business Impact:
Helps computers understand people from their actions.
This paper presents LLM4ES, a novel framework that exploits large pre-trained language models (LLMs) to derive user embeddings from event sequences. Event sequences are transformed into a textual representation, which is subsequently used to fine-tune an LLM through next-token prediction to generate high-quality embeddings. We introduce a text enrichment technique that enhances LLM adaptation to event sequence data, improving representation quality in low-variability domains. Experimental results demonstrate that LLM4ES achieves state-of-the-art performance on user classification tasks in finance and other domains, outperforming existing embedding methods. The resulting user embeddings can be incorporated into a wide range of applications, from user segmentation in finance to patient outcome prediction in healthcare.
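The pipeline the abstract describes (serialize events as text, adapt the LLM with a next-token objective, then read off embeddings) can be sketched in a few lines. Below is a minimal illustration, assuming a GPT-2 backbone from Hugging Face transformers, a made-up event template, and mean pooling of the final hidden states; the fine-tuning step and the paper's text enrichment technique are omitted, so this is a sketch of the general approach rather than the authors' implementation.

```python
# Sketch: serialize an event sequence into text, run it through a
# causal LLM, and pool hidden states into a fixed-size user embedding.
# Model name, event fields, and the text template are all assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder backbone; the paper's may differ

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, output_hidden_states=True
)
model.eval()


def events_to_text(events):
    """Turn structured events into a textual representation.

    `events` is a list of dicts like
    {"date": "2024-01-05", "type": "purchase", "amount": 42.0};
    the template here is illustrative, not the paper's."""
    return " ".join(
        f"[{e['date']}] {e['type']} amount={e['amount']:.2f};"
        for e in events
    )


@torch.no_grad()
def user_embedding(events):
    """Mean-pool the final hidden states into one user vector."""
    text = events_to_text(events)
    inputs = tokenizer(
        text, return_tensors="pt", truncation=True, max_length=512
    )
    hidden = model(**inputs).hidden_states[-1]  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)        # (dim,)


if __name__ == "__main__":
    demo = [
        {"date": "2024-01-05", "type": "purchase", "amount": 42.0},
        {"date": "2024-01-07", "type": "refund", "amount": 42.0},
    ]
    print(user_embedding(demo).shape)  # torch.Size([768]) for gpt2
```

In the paper's setting, the backbone would first be fine-tuned on such serialized sequences with the standard causal language-modeling (next-token) loss before embeddings are extracted for downstream classifiers.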
Similar Papers
LATTE: Learning Aligned Transactions and Textual Embeddings for Bank Clients
Computation and Language
Makes computers understand customer history faster.
Training LLMs to be Better Text Embedders through Bidirectional Reconstruction
Computation and Language
Makes computers understand text meaning better.