LLMs-based Augmentation for Domain Adaptation in Long-tailed Food Datasets

Published: November 20, 2025 | arXiv ID: 2511.16037v1

By: Qing Wang, Chong-Wah Ngo, Ee-Peng Lim, and more

Potential Business Impact:

Lets phones know what food you're eating.

Business Areas:
Image Recognition, Data and Analytics, Software

Training a model for food recognition is challenging because the training samples, which are typically crawled from the Internet, look visually different from the pictures users capture in free-living environments. Beyond this domain-shift problem, real-world food datasets tend to follow a long-tailed distribution, and dishes from different categories can differ only subtly, making them hard to distinguish visually. In this paper, we present a framework empowered by large language models (LLMs) to address these challenges in food recognition. We first leverage LLMs to parse food images and generate food titles and ingredient lists. Then, we project the generated texts and the food images from different domains into a shared embedding space and maximize the similarity of matched image-text pairs. Finally, we use the aligned features of both modalities for recognition. With this simple framework, we show that our approach outperforms existing methods tailored to long-tailed data distributions, domain adaptation, and fine-grained classification on two food datasets.
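To make the alignment step concrete, below is a minimal sketch (not the authors' code) of CLIP-style image-text alignment: image features and text features derived from LLM-generated titles and ingredients are projected into a shared embedding space and trained with a symmetric contrastive loss, and the aligned image embedding then feeds a classifier. The encoder dimensions, the learnable temperature, and the classification head are assumptions for illustration.

```python
# Sketch of shared-embedding alignment between image features and
# LLM-generated text features (food title + ingredients). Assumes
# precomputed encoder outputs; backbones and dimensions are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEmbeddingAligner(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=768, embed_dim=512, num_classes=101):
        super().__init__()
        # Linear projections map each modality into the shared space.
        self.img_proj = nn.Linear(img_dim, embed_dim)
        self.txt_proj = nn.Linear(txt_dim, embed_dim)
        # Learnable temperature, as in CLIP-style training (~ log(1/0.07)).
        self.logit_scale = nn.Parameter(torch.tensor(2.659))
        # Classifier applied to the aligned image embedding.
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, img_feats, txt_feats):
        z_img = F.normalize(self.img_proj(img_feats), dim=-1)
        z_txt = F.normalize(self.txt_proj(txt_feats), dim=-1)
        return z_img, z_txt

    def contrastive_loss(self, z_img, z_txt):
        # Symmetric InfoNCE: matched image-text pairs are positives,
        # all other pairs in the batch serve as negatives.
        logits = self.logit_scale.exp() * z_img @ z_txt.t()
        targets = torch.arange(z_img.size(0), device=z_img.device)
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))

# Toy usage with random tensors standing in for image-encoder outputs and
# text-encoder outputs over the LLM-generated titles and ingredients.
model = SharedEmbeddingAligner()
img_feats = torch.randn(8, 2048)
txt_feats = torch.randn(8, 768)
labels = torch.randint(0, 101, (8,))

z_img, z_txt = model(img_feats, txt_feats)
loss = model.contrastive_loss(z_img, z_txt) + F.cross_entropy(model.classifier(z_img), labels)
loss.backward()
```

The design choice here is that pairing web-crawled and user-captured images with the same style of generated text pulls both domains toward a common semantic space, which is what allows a single classifier on the aligned features to handle the domain shift.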

Country of Origin
πŸ‡ΈπŸ‡¬ Singapore

Page Count
14 pages

Category
Computer Science:
Computer Vision and Pattern Recognition