Five Years of SciCap: What We Learned and Future Directions for Scientific Figure Captioning

Published: December 25, 2025 | arXiv ID: 2512.21789v1

By: Ting-Hao K. Huang, Ryan A. Rossi, Sungchul Kim, and more

Potential Business Impact:

Helps scientists write clearer captions for the figures in their papers.

Business Areas:
Image Recognition Data and Analytics, Software

Between 2021 and 2025, the SciCap project grew from a small seed-funded idea at The Pennsylvania State University (Penn State) into one of the central efforts shaping the scientific figure-captioning landscape. Supported by a Penn State seed grant, Adobe, and the Alfred P. Sloan Foundation, what began as our attempt to test whether domain-specific training, which had proven successful for text models like SciBERT, could also work for figure captions expanded into a multi-institution collaboration. Over these five years, we curated, released, and continually updated a large collection of figure-caption pairs from arXiv papers; conducted extensive automatic and human evaluations of both generated and author-written captions; navigated the rapid rise of large language models (LLMs); launched annual challenges; and built interactive systems that help scientists write better captions. In this piece, we look back at the first five years of SciCap and summarize the key technical and methodological lessons we learned. We then outline five major unsolved challenges and propose directions for the next phase of research in scientific figure captioning.

Country of Origin
🇺🇸 United States

Page Count
6 pages

Category
Computer Science:
Computation and Language