Temporal Interest-Driven Multimodal Personalized Content Generation
By: Tian Miao
Potential Business Impact:
Shows you videos and other content you'll like.
As user interests evolve dynamically and multimodal demands in internet applications grow, personalized content generation strategies based on static interest preferences struggle to meet practical application requirements. The proposed TIMGen (Temporal Interest-driven Multimodal Generation) model addresses this challenge by modeling the long-term temporal evolution of users' interests and capturing dynamic interest representations with strong temporal dependencies. The model also fuses multimodal features, such as text, images, video, and audio, and delivers customized content based on users' multimodal preferences. TIMGen jointly learns temporal dependencies and modal preferences to obtain a unified interest representation, from which it generates content tailored to users' personalized needs. By moving beyond recommendation methods based on static preferences, TIMGen enables flexible, dynamic modeling of users' multimodal interests and better captures their evolving preferences. It can be extended to a variety of practical application scenarios, including e-commerce, advertising, online education, and precision medicine, providing insights for future research.
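The abstract does not specify TIMGen's architecture, so the following is only a minimal sketch of the general idea it describes: weighting a user's interaction history so that recent interactions count more (temporal dependence), attending over modality-specific embeddings according to learned modality preferences, and combining both into one unified interest vector. All function names, the exponential-decay weighting, and the softmax attention here are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def unified_interest(history, modality_logits, decay=0.9):
    """Fuse a user's multimodal interaction history into one interest vector.

    history: list (oldest -> newest) of dicts mapping a modality name
             (e.g. "text", "image") to an embedding vector (np.ndarray).
    modality_logits: dict mapping modality name -> preference score;
             a softmax over these acts as modality attention.
    decay: exponential temporal decay; newer interactions weigh more.
    (This scheme is an assumption for illustration, not TIMGen itself.)
    """
    mods = sorted(modality_logits)
    attn = softmax(np.array([modality_logits[m] for m in mods]))

    T = len(history)
    # Temporal weights: the most recent step gets weight decay**0 = 1.
    tw = np.array([decay ** (T - 1 - t) for t in range(T)])
    tw /= tw.sum()

    dim = len(next(iter(history[0].values())))
    rep = np.zeros(dim)
    for t, interaction in enumerate(history):
        step = np.zeros(dim)
        # Modality-preference attention over whatever modalities are present.
        for a, m in zip(attn, mods):
            if m in interaction:
                step += a * interaction[m]
        rep += tw[t] * step
    return rep
```

With a single "text" modality and `decay=0.5`, a newer embedding dominates the fused vector, which is the qualitative behavior the abstract attributes to dynamic (rather than static) interest modeling.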
Similar Papers
Multimodal Foundation Model-Driven User Interest Modeling and Behavior Analysis on Short Video Platforms
Information Retrieval
Shows you videos you'll actually like.
GemiRec: Interest Quantization and Generation for Multi-Interest Recommendation
Information Retrieval
Shows you more things you might like.
Dynamic Forgetting and Spatio-Temporal Periodic Interest Modeling for Local-Life Service Recommendation
Information Retrieval
Improves local service suggestions by remembering user habits.