Five Years of SciCap: What We Learned and Future Directions for Scientific Figure Captioning
By: Ting-Hao K. Huang, Ryan A. Rossi, Sungchul Kim, and more
Potential Business Impact:
Helps scientists write clearer figure captions.
Between 2021 and 2025, the SciCap project grew from a small seed-funded idea at The Pennsylvania State University (Penn State) into one of the central efforts shaping the scientific figure-captioning landscape. Supported by a Penn State seed grant, Adobe, and the Alfred P. Sloan Foundation, what began as an attempt to test whether domain-specific training, which had proven successful in text models like SciBERT, could also work for figure captions expanded into a multi-institution collaboration. Over these five years, we curated, released, and continually updated a large collection of figure-caption pairs from arXiv papers; conducted extensive automatic and human evaluations of both generated and author-written captions; navigated the rapid rise of large language models (LLMs); launched annual challenges; and built interactive systems that help scientists write better captions. In this piece, we look back at the first five years of SciCap and summarize the key technical and methodological lessons we learned. We then outline five major unsolved challenges and propose directions for the next phase of research in scientific figure captioning.
Similar Papers
Leveraging Author-Specific Context for Scientific Figure Caption Generation: 3rd SciCap Challenge
Computation and Language
Generates better figure captions for scientific papers.
Do Large Multimodal Models Solve Caption Generation for Scientific Figures? Lessons Learned from SciCap Challenge 2023
Computation and Language
Tests whether computers can write good captions for scientific figures.
Multi-LLM Collaborative Caption Generation in Scientific Documents
Computation and Language
Uses multiple collaborating language models to write better figure captions.