Evaluating Large Language Models on Multimodal Chemistry Olympiad Exams
By: Yiming Cui, Xin Yao, Yuxuan Qin, and more
Potential Business Impact:
Helps AI understand chemistry diagrams and text together.
Multimodal scientific reasoning remains a significant challenge for large language models (LLMs), particularly in chemistry, where problem-solving relies on symbolic diagrams, molecular structures, and structured visual data. Here, we systematically evaluate 40 proprietary and open-source multimodal LLMs (MLLMs), including GPT-5, o3, Gemini-2.5-Pro, and Qwen2.5-VL, on a curated benchmark of Olympiad-style chemistry questions drawn from over two decades of U.S. National Chemistry Olympiad (USNCO) exams. These questions require integrated visual and textual reasoning across diverse modalities. We find that many models struggle with modality fusion: in some cases, removing the image even improves accuracy, indicating misalignment in vision-language integration. Chain-of-Thought prompting consistently enhances both accuracy and visual grounding, as demonstrated through ablation studies and occlusion-based interpretability. Our results reveal critical limitations in the scientific reasoning abilities of current MLLMs and provide actionable strategies for developing more robust and interpretable multimodal systems in chemistry. This work offers a timely benchmark for measuring progress in domain-specific multimodal AI and underscores the need for further advances at the intersection of artificial intelligence and scientific reasoning.
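As a concrete illustration of the occlusion-based interpretability analysis the abstract mentions, the sketch below slides a gray patch across a question's image and measures how much the model's confidence in the correct answer drops at each position. This is a minimal reconstruction, not the authors' code: query_model is a hypothetical wrapper around whatever multimodal LLM client is in use (it takes an image, the question text, and an answer choice and returns the model's confidence in that choice), and the patch size and gray fill value are assumed.

from PIL import Image

PATCH = 32  # side of the gray occlusion square, in pixels (assumed value)

def occlusion_map(image_path, question, correct_answer, query_model):
    """Slide a gray patch over the exam image and record how much the
    model's confidence in the correct answer drops at each position."""
    base = Image.open(image_path).convert("RGB")
    # Confidence on the unoccluded image serves as the reference point.
    baseline = query_model(base, question, correct_answer)
    drops = {}
    for top in range(0, base.height - PATCH + 1, PATCH):
        for left in range(0, base.width - PATCH + 1, PATCH):
            occluded = base.copy()
            occluded.paste((128, 128, 128), (left, top, left + PATCH, top + PATCH))
            # A large drop means the model was relying on this region.
            drops[(top, left)] = baseline - query_model(occluded, question, correct_answer)
    return drops

Regions whose occlusion causes the largest confidence drops are the ones the model is visually grounded on; a flat map on an image-dependent question would be consistent with the misalignment the abstract describes, where the image contributes little to the answer.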
Similar Papers
Scientific Reasoning: Assessment of Multimodal Generative LLMs
Computation and Language
Helps AI understand science questions better.
Assessing the Chemical Intelligence of Large Language Models
Machine Learning (CS)
Computers can now solve hard chemistry problems.
Evaluating Multi-Hop Reasoning in Large Language Models: A Chemistry-Centric Case Study
Computation and Language
Tests if AI can understand complex chemistry ideas.