A benchmark multimodal oro-dental dataset for large vision-language models
By: Haoxin Lv, Ijazul Haq, Jin Du, and more
Potential Business Impact:
Helps computers understand teeth problems from pictures.
The advancement of artificial intelligence in oral healthcare relies on the availability of large-scale multimodal datasets that capture the complexity of clinical practice. In this paper, we present a comprehensive multimodal dataset comprising 8,775 dental checkups from 4,800 patients collected over eight years (2018-2025), with patients ranging from 10 to 90 years of age. The dataset includes 50,000 intraoral images, 8,056 radiographs, and detailed textual records, including diagnoses, treatment plans, and follow-up notes. The data were collected under standard ethical guidelines and annotated for benchmarking. To demonstrate its utility, we fine-tuned state-of-the-art large vision-language models, Qwen-VL 3B and 7B, and evaluated them on two tasks: classifying six oro-dental anomalies and generating complete diagnostic reports from multimodal inputs. We compared the fine-tuned models with their base counterparts and with GPT-4o. The fine-tuned models achieved substantial gains over these baselines, validating the dataset and underscoring its effectiveness in advancing AI-driven oro-dental healthcare solutions. The dataset is publicly available, providing an essential resource for future research in AI dentistry.
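The abstract's first task, classifying six oro-dental anomalies per checkup, is naturally scored as a multi-label problem once each model's free-text output is parsed into a set of predicted labels. Below is a minimal sketch of micro-averaged precision/recall/F1 for such a setup; the six anomaly names are placeholders, since the abstract does not enumerate the paper's actual classes, and the parsing step is assumed to have happened upstream.

```python
# Placeholder label inventory: the paper's six anomaly classes are not
# listed in the abstract, so these names are illustrative only.
ANOMALIES = ["caries", "calculus", "gingivitis", "ulcer",
             "discoloration", "malocclusion"]

def multilabel_f1(predictions, references):
    """Micro-averaged precision, recall, and F1 for a multi-label task.

    predictions, references: lists of label sets, one per checkup,
    e.g. [{"caries"}, {"calculus", "gingivitis"}].
    """
    tp = fp = fn = 0
    for pred, ref in zip(predictions, references):
        tp += len(pred & ref)   # labels predicted and present
        fp += len(pred - ref)   # labels predicted but absent
        fn += len(ref - pred)   # labels present but missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

For example, predicting `{"caries"}` and `{"calculus", "gingivitis"}` against references `{"caries"}` and `{"calculus"}` yields precision 2/3, recall 1.0, and F1 0.8. Micro-averaging is one reasonable choice here; the paper may report per-class or macro-averaged figures instead.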
Similar Papers
Towards Better Dental AI: A Multimodal Benchmark and Instruction Dataset for Panoramic X-ray Analysis
CV and Pattern Recognition
Helps dentists understand X-rays better.
DentalGPT: Incentivizing Multimodal Complex Reasoning in Dentistry
CV and Pattern Recognition
Helps dentists find problems in teeth better.
OralGPT-Omni: A Versatile Dental Multimodal Large Language Model
CV and Pattern Recognition
Helps dentists find problems in teeth pictures.