ManufactuBERT: Efficient Continual Pretraining for Manufacturing
By: Robin Armingaud, Romaric Besançon
Potential Business Impact:
Teaches computers factory words for better understanding.
While large general-purpose Transformer-based encoders excel at general language understanding, their performance diminishes in specialized domains like manufacturing due to a lack of exposure to domain-specific terminology and semantics. In this paper, we address this gap by introducing ManufactuBERT, a RoBERTa model continually pretrained on a large-scale corpus curated for the manufacturing domain. We present a comprehensive data processing pipeline to create this corpus from web data, involving an initial domain-specific filtering step followed by a multi-stage deduplication process that removes redundancies. Our experiments show that ManufactuBERT establishes a new state-of-the-art on a range of manufacturing-related NLP tasks, outperforming strong specialized baselines. More importantly, we demonstrate that training on our carefully deduplicated corpus significantly accelerates convergence, leading to a 33% reduction in training time and computational cost compared to training on the non-deduplicated dataset. The proposed pipeline offers a reproducible example for developing high-performing encoders in other specialized domains. We will release our model and curated corpus at https://huggingface.co/cea-list-ia.
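The abstract describes the corpus-building pipeline only at a high level (domain-specific filtering followed by multi-stage deduplication). The sketch below illustrates what such a pipeline could look like; the keyword filter, the MD5-based exact-duplicate pass, the MinHash-LSH near-duplicate pass (via the datasketch library), and the 0.8 similarity threshold are illustrative assumptions, not the authors' released implementation.

# Illustrative sketch of a domain-filter + two-stage deduplication pipeline.
# Assumptions (not from the paper): keyword-based domain filtering, exact
# deduplication via MD5 hashing, near-deduplication via MinHash LSH
# (datasketch); the real pipeline's filters and thresholds may differ.
import hashlib
from datasketch import MinHash, MinHashLSH

# Hypothetical seed terms for the manufacturing-domain filter.
DOMAIN_KEYWORDS = {"machining", "welding", "cnc", "tolerance", "injection molding"}

def is_manufacturing(doc: str) -> bool:
    """Filtering stage: keep documents that mention domain terminology."""
    text = doc.lower()
    return any(kw in text for kw in DOMAIN_KEYWORDS)

def minhash(doc: str, num_perm: int = 128) -> MinHash:
    """Build a MinHash signature over lowercased, punctuation-stripped tokens."""
    m = MinHash(num_perm=num_perm)
    for token in doc.lower().split():
        m.update(token.strip(".,!?").encode("utf-8"))
    return m

def deduplicate(docs: list[str], threshold: float = 0.8) -> list[str]:
    """Dedup stage 1: drop exact duplicates. Dedup stage 2: drop near-duplicates."""
    seen_hashes: set[str] = set()
    lsh = MinHashLSH(threshold=threshold, num_perm=128)
    kept: list[str] = []
    for i, doc in enumerate(docs):
        digest = hashlib.md5(doc.encode("utf-8")).hexdigest()
        if digest in seen_hashes:        # exact duplicate of a kept document
            continue
        seen_hashes.add(digest)
        signature = minhash(doc)
        if lsh.query(signature):         # near-duplicate of a kept document
            continue
        lsh.insert(f"doc-{i}", signature)
        kept.append(doc)
    return kept

if __name__ == "__main__":
    corpus = [
        "CNC machining tolerance guidelines for aluminum parts.",
        "CNC machining tolerance guidelines for aluminum parts.",        # exact duplicate
        "CNC machining tolerance guidelines for aluminum parts today.",  # near-duplicate
        "A recipe for sourdough bread.",                                 # off-domain
    ]
    filtered = [doc for doc in corpus if is_manufacturing(doc)]
    # Probabilistically, only the first guideline sentence should survive.
    print(deduplicate(filtered))

The intuition behind the reported speed-up is that removing redundant text lets each gradient update see more novel tokens, which is consistent with the faster convergence and the 33% reduction in training cost the authors observe.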
Similar Papers
Patent Language Model Pretraining with ModernBERT
Computation and Language
Helps computers understand patent language faster.
ARCE: Augmented RoBERTa with Contextualized Elucidations for NER in Automated Rule Checking
Computation and Language
Helps computers understand building plans better.
SecureBERT 2.0: Advanced Language Model for Cybersecurity Intelligence
Cryptography and Security
Helps computers understand computer security threats better.