LLMs-based Augmentation for Domain Adaptation in Long-tailed Food Datasets
By: Qing Wang, Chong-Wah Ngo, Ee-Peng Lim, and more
Potential Business Impact:
Lets phones know what food you're eating.
Training a model for food recognition is challenging because the training samples, which are typically crawled from the Internet, are visually different from the pictures captured by users in free-living environments. In addition to this domain-shift problem, real-world food datasets tend to have long-tailed distributions, and dishes from different categories can differ only in subtle visual details that are difficult to distinguish. In this paper, we present a framework empowered by large language models (LLMs) to address these challenges in food recognition. We first leverage LLMs to parse food images and generate food titles and ingredients. We then project the generated texts and the food images from different domains into a shared embedding space, maximizing the similarity of matched pairs. Finally, we use the aligned features of both modalities for recognition. With this simple framework, we show that our approach outperforms existing methods tailored for long-tailed data distributions, domain adaptation, and fine-grained classification on two food datasets.
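The abstract describes a pipeline in which LLM-generated text (food titles and ingredients) and food images are projected into a shared embedding space, matched pairs are pulled together, and the aligned features are then used for classification. Below is a minimal sketch of that kind of image-text alignment, using a CLIP-style symmetric contrastive loss in PyTorch. The encoders, feature dimensions, class count, and loss weighting are assumptions for illustration, not the authors' implementation.

# Minimal sketch (assumption, not the paper's code): align image features and
# LLM-generated text features in a shared embedding space with a symmetric
# contrastive loss, then classify from the aligned image embedding.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedSpaceAligner(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=768, embed_dim=512, num_classes=101):
        super().__init__()
        # Projection heads mapping each modality into the shared space.
        self.img_proj = nn.Linear(img_dim, embed_dim)
        self.txt_proj = nn.Linear(txt_dim, embed_dim)
        # Learnable temperature, initialized to log(1/0.07) as in CLIP.
        self.logit_scale = nn.Parameter(torch.tensor(2.659))
        # Classifier over the aligned image features.
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, img_feat, txt_feat):
        img_emb = F.normalize(self.img_proj(img_feat), dim=-1)
        txt_emb = F.normalize(self.txt_proj(txt_feat), dim=-1)
        return img_emb, txt_emb

    def losses(self, img_emb, txt_emb, labels):
        # Symmetric contrastive loss: matched image-text pairs are positives,
        # all other pairs in the batch are negatives.
        logits = self.logit_scale.exp() * img_emb @ txt_emb.t()
        targets = torch.arange(img_emb.size(0), device=img_emb.device)
        contrastive = 0.5 * (F.cross_entropy(logits, targets) +
                             F.cross_entropy(logits.t(), targets))
        # Recognition loss on the aligned image embedding.
        cls_loss = F.cross_entropy(self.classifier(img_emb), labels)
        return contrastive + cls_loss

# Usage with pre-extracted features (e.g., a vision backbone for images and a
# text encoder over LLM-generated titles/ingredients -- both assumed here).
model = SharedSpaceAligner()
img_feat = torch.randn(8, 2048)            # batch of image features
txt_feat = torch.randn(8, 768)             # matching LLM-generated text features
labels = torch.randint(0, 101, (8,))       # food category labels
img_emb, txt_emb = model(img_feat, txt_feat)
loss = model.losses(img_emb, txt_emb, labels)
loss.backward()

Because the text side comes from LLM-parsed titles and ingredients rather than raw labels, this kind of alignment can, in principle, help with long-tailed and fine-grained categories; the sketch above only shows the shared-space alignment step, not the full framework.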
Similar Papers
Are Vision-Language Models Ready for Dietary Assessment? Exploring the Next Frontier in AI-Powered Food Image Recognition
CV and Pattern Recognition
Lets phones guess what you ate from pictures.
Deep Learning-Driven Multimodal Detection and Movement Analysis of Objects in Culinary
CV and Pattern Recognition
Cooks follow recipes by watching and listening.
Towards Unbiased Cross-Modal Representation Learning for Food Image-to-Recipe Retrieval
CV and Pattern Recognition
Find recipes from food pictures better.