A Survey on Self-supervised Contrastive Learning for Multimodal Text-Image Analysis
By: Asifullah Khan, Laiba Asmatullah, Anza Malik, and more
Potential Business Impact:
Teaches computers to understand pictures and words together.
Self-supervised learning is a machine learning approach that generates implicit labels by learning underlying patterns and extracting discriminative features from unlabeled data, without manual labeling. Contrastive learning introduces the concepts of "positive" and "negative" samples: positive pairs (e.g., variations of the same image/object) are pulled together in the embedding space, while negative pairs (e.g., views from different images/objects) are pushed farther apart. This methodology has yielded significant improvements in image understanding and image-text analysis with little reliance on labeled data. In this paper, we comprehensively discuss the terminologies, recent developments, and applications of contrastive learning with respect to text-image models. First, we provide an overview of the approaches to contrastive learning in text-image models in recent years. Second, we categorize the approaches by model structure. Third, we introduce and discuss the latest advances in the techniques used in the process, such as pretext tasks for both images and text, architectural structures, and key trends. Lastly, we discuss recent state-of-the-art applications of self-supervised contrastive learning in text-image models.
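The pull-together/push-apart objective described in the abstract is commonly realized as an InfoNCE (NT-Xent) loss. The following is a minimal sketch in PyTorch, not drawn from the surveyed paper itself: the function name, tensor shapes, and CLIP-style symmetric formulation are illustrative assumptions. It shows how paired image/text embeddings on the batch diagonal act as positives while every other pairing in the batch serves as a negative.

import torch
import torch.nn.functional as F

def info_nce_loss(z_img, z_txt, temperature=0.07):
    # Illustrative sketch: z_img and z_txt are (batch, dim) embeddings of
    # paired views (e.g., an image and its caption). Each row's partner is
    # its positive; every other row in the batch acts as a negative.
    z_img = F.normalize(z_img, dim=1)              # unit-length embeddings
    z_txt = F.normalize(z_txt, dim=1)
    logits = z_img @ z_txt.t() / temperature       # (batch, batch) cosine similarities
    targets = torch.arange(z_img.size(0))          # positives lie on the diagonal
    # Symmetric cross-entropy over rows and columns, as in CLIP-style
    # text-image contrastive training
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Toy usage: 8 hypothetical image/text embedding pairs of dimension 128
loss = info_nce_loss(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())

Lowering the temperature sharpens the similarity distribution, penalizing hard negatives more strongly; values around 0.05 to 0.1 are typical in this family of methods.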
Similar Papers
Self-Supervised Contrastive Learning is Approximately Supervised Contrastive Learning
Machine Learning (CS)
Teaches computers to learn from unlabeled pictures.
Contrastive Self-Supervised Network Intrusion Detection using Augmented Negative Pairs
Machine Learning (CS)
Finds computer attacks better by learning normal behavior.
Multi-Modal Self-Supervised Semantic Communication
Computer Vision and Pattern Recognition
Teaches computers to share information more efficiently.