SciLaD: A Large-Scale, Transparent, Reproducible Dataset for Natural Scientific Language Processing
By: Luca Foppiano, Sotaro Takeshita, Pedro Ortiz Suarez, and more
Potential Business Impact:
Helps computers understand millions of science papers.
SciLaD is a novel, large-scale dataset of scientific language constructed entirely with open-source frameworks and publicly available data sources. It comprises a curated English split of more than 10 million scientific publications and a multilingual, unfiltered TEI XML split of more than 35 million publications. We also publish the extensible pipeline used to generate SciLaD. The construction and processing workflow demonstrates how open-source tools can enable large-scale scientific data curation while maintaining high data quality. Finally, we pre-train a RoBERTa model on our dataset and evaluate it on a comprehensive set of benchmarks, achieving performance comparable to other scientific language models of similar size and validating the quality and utility of SciLaD. We release the dataset and evaluation pipeline to promote reproducibility, transparency, and further research in natural scientific language processing and understanding, including scholarly document processing.
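The abstract describes a two-step recipe: assemble a large scientific text corpus with open tooling, then pre-train a RoBERTa-style masked language model on it. The snippet below is a minimal sketch of that pre-training step using the Hugging Face datasets and transformers libraries; the corpus file name, the reuse of the roberta-base tokenizer, and the training hyperparameters are illustrative assumptions, not details taken from the paper or its released pipeline.

```python
# Minimal sketch of masked-language-model pre-training on a scientific corpus.
# "scilad_en_sample.txt" is a placeholder for real SciLaD corpus files.
from datasets import load_dataset
from transformers import (
    RobertaConfig,
    RobertaForMaskedLM,
    RobertaTokenizerFast,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Load a plain-text corpus (placeholder path; substitute the actual SciLaD split).
corpus = load_dataset("text", data_files={"train": "scilad_en_sample.txt"})["train"]

# Assumption: reuse the roberta-base tokenizer instead of training a new one.
tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

# RoBERTa-base-sized model initialized from scratch.
config = RobertaConfig(vocab_size=tokenizer.vocab_size)
model = RobertaForMaskedLM(config)

# Standard 15% dynamic masking for the MLM objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="scilad-roberta",
    per_device_train_batch_size=16,
    num_train_epochs=1,
    logging_steps=500,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```

At full scale, pre-training over tens of millions of documents would of course require distributed training and far longer schedules; the sketch only illustrates the shape of the workflow the abstract refers to.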
Similar Papers
A Survey of Scientific Large Language Models: From Data Foundations to Agent Frontiers
Computation and Language
AI helps scientists discover new things faster.
SciDA: Scientific Dynamic Assessor of LLMs
Computation and Language
Tests if computers can truly solve math problems.
Dynaword: From One-shot to Continuously Developed Datasets
Computation and Language
Builds better language tools with shared, updated words.