Feature Fusion Revisited: Multimodal CTR Prediction for MMCTR Challenge
By: Junjie Zhou
Potential Business Impact:
Makes online ads show up faster and better.
With the rapid advancement of Multimodal Large Language Models (MLLMs), an increasing number of researchers are exploring their application in recommendation systems. However, the high latency of large models poses a significant challenge for such use cases. The EReL@MIR workshop provided a valuable opportunity to experiment with approaches for improving the efficiency of multimodal representation learning in information retrieval tasks. As part of the competition's requirements, participants were required to submit a technical report detailing their methodologies and findings. Our team was honored to win the Task 2 award (Multimodal CTR Prediction). In this technical report, we present our methods and key findings. Additionally, we propose several directions for future work, particularly on how to effectively integrate recommendation signals into multimodal representations. The codebase for our implementation is publicly available at: https://github.com/Lattice-zjj/MMCTR_Code, and the trained model weights can be accessed at: https://huggingface.co/FireFlyCourageous/MMCTR_DIN_MicroLens_1M_x1.
Similar Papers
1st Place Solution of WWW 2025 EReL@MIR Workshop Multimodal CTR Prediction Challenge
Information Retrieval
Helps websites show you things you'll like.
Quadratic Interest Network for Multimodal Click-Through Rate Prediction
Information Retrieval
Helps websites show you things you'll like.
CTR-Driven Advertising Image Generation with Multimodal Large Language Models
Machine Learning (CS)
Makes ads get more clicks by creating better pictures.