Semi-Supervised Learning for Large Language Models Safety and Content Moderation

Published: December 24, 2025 | arXiv ID: 2512.21107v1

By: Eduard Stefan Dinuta, Iustin Sirbu, Traian Rebedea

Potential Business Impact:

Teaches AI to be safer with less data.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Safety for Large Language Models (LLMs) has been an ongoing research focus since their emergence and is even more relevant nowadays with the increasing capabilities of these models. Currently, there are several guardrails in place for all public LLMs, as well as multiple proposed datasets for training safety classifiers. However, training these safety classifiers relies on large quantities of labeled data, which can be difficult to acquire, is prone to labeling errors, and often includes synthetic data. To address these issues, we suggest a different approach: applying semi-supervised learning techniques, which leverage both labeled and unlabeled data, to improve performance on the safety task. We analyze the improvements that these techniques can offer both for prompts given to Large Language Models and for the responses to those requests. Moreover, since augmentation is a central component of semi-supervised algorithms, we demonstrate the importance of using task-specific augmentations, which significantly increase performance compared to general-purpose augmentation techniques.
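The abstract does not specify which semi-supervised algorithm is used, but a common family it alludes to (labeled data plus confidence-filtered pseudo-labels on augmented unlabeled data) can be sketched as a FixMatch-style loss. The function names, threshold `tau`, and weight `lam` below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels):
    # Per-example negative log-likelihood of the true class.
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12)

def semi_supervised_loss(logits_labeled, labels,
                         logits_weak, logits_strong,
                         tau=0.95, lam=1.0):
    """FixMatch-style objective (illustrative sketch).

    logits_labeled : classifier outputs on the labeled batch
    logits_weak    : outputs on weakly augmented unlabeled texts
    logits_strong  : outputs on strongly augmented versions of the same texts
    tau            : confidence threshold for accepting a pseudo-label
    lam            : weight of the unsupervised term
    """
    # Supervised term on the labeled batch.
    sup = cross_entropy(softmax(logits_labeled), labels).mean()

    # Pseudo-labels come from predictions on the weak augmentation;
    # only confident predictions (max prob >= tau) contribute.
    weak_probs = softmax(logits_weak)
    pseudo = weak_probs.argmax(axis=-1)
    mask = weak_probs.max(axis=-1) >= tau

    if mask.any():
        # Consistency: the strong augmentation must match the pseudo-label.
        unsup = (cross_entropy(softmax(logits_strong), pseudo) * mask).sum() / mask.sum()
    else:
        unsup = 0.0
    return sup + lam * unsup
```

For safety classification, the "strong" augmentation would be the task-specific transformation the paper argues for (e.g., rephrasings that preserve harmfulness), while the pseudo-labeling machinery itself is task-agnostic.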

Page Count
5 pages

Category
Computer Science:
Computation and Language