Dynamic Long Short-Term Memory-Based Memory Storage for Long-Horizon LLM Interaction
By: Yuyang Lou, Charles Li
Potential Business Impact:
Helps computers remember what you like.
Memory storage for Large Language Models (LLMs) is becoming an increasingly active area of research, particularly for enabling personalization across long conversations. We propose Pref-LSTM, a dynamic and lightweight framework that combines a BERT-based classifier with an LSTM memory module that generates memory embeddings, which are then soft-prompt injected into a frozen LLM. We synthetically curate a dataset of preference and non-preference conversation turns to train our BERT-based classifier. Although our LSTM-based memory encoder did not yield strong results, we find that the BERT-based classifier performs reliably in identifying explicit and implicit user preferences. Our research demonstrates the viability of using preference filtering with LSTM gating principles as an efficient path towards scalable user preference modeling, without extensive overhead and fine-tuning.
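The abstract describes a three-stage pipeline: a classifier flags preference-bearing turns, an LSTM folds those turns into a memory embedding, and that embedding is injected as a soft prompt into a frozen LLM. Below is a minimal sketch of that pipeline in PyTorch; the class name `PrefLSTMSketch`, the dimensions, the linear stand-in for the BERT classifier, and the masking-based gating are all illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class PrefLSTMSketch(nn.Module):
    """Illustrative sketch of a Pref-LSTM-style memory pipeline (assumed, not the paper's code)."""

    def __init__(self, turn_dim=768, mem_dim=768, llm_embed_dim=4096, n_soft_tokens=4):
        super().__init__()
        # Stage 1: preference filter. A linear head stands in for the
        # BERT-based classifier that flags preference-bearing turns.
        self.pref_classifier = nn.Linear(turn_dim, 2)
        # Stage 2: LSTM memory module that folds preference turns into
        # a running memory embedding via its hidden state.
        self.memory_lstm = nn.LSTM(turn_dim, mem_dim, batch_first=True)
        # Stage 3: project the memory state to soft-prompt tokens in the
        # frozen LLM's embedding space.
        self.to_soft_prompt = nn.Linear(mem_dim, llm_embed_dim * n_soft_tokens)
        self.n_soft_tokens = n_soft_tokens
        self.llm_embed_dim = llm_embed_dim

    def forward(self, turn_embeddings):
        # turn_embeddings: (batch, n_turns, turn_dim), e.g. BERT [CLS] vectors per turn.
        logits = self.pref_classifier(turn_embeddings)       # (B, T, 2)
        is_pref = logits.argmax(-1).bool()                   # predicted preference turns
        gated = turn_embeddings * is_pref.unsqueeze(-1)      # zero out non-preference turns
        _, (h_n, _) = self.memory_lstm(gated)                # final hidden state = memory
        soft = self.to_soft_prompt(h_n[-1])                  # (B, n_tokens * llm_dim)
        return soft.view(-1, self.n_soft_tokens, self.llm_embed_dim)

# Usage: the returned soft-prompt tokens would be prepended to the frozen
# LLM's input embeddings before generation.
model = PrefLSTMSketch()
mem_tokens = model(torch.randn(1, 10, 768))  # a 10-turn conversation
print(mem_tokens.shape)                      # torch.Size([1, 4, 4096])
```

Soft-prompt injection keeps the LLM frozen, which is what makes the approach lightweight: only the classifier, the LSTM, and the projection layer carry trainable parameters.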
Similar Papers
Towards Explainable Temporal User Profiling with LLMs
Information Retrieval
Explains why you get certain movie suggestions.
Temporal User Profiling with LLMs: Balancing Short-Term and Long-Term Preferences for Recommendations
Information Retrieval
Recommends videos you'll actually like.
Dynamic Parameter Memory: Temporary LoRA-Enhanced LLM for Long-Sequence Emotion Recognition in Conversation
Computation and Language
Lets computers understand feelings in long talks.