Semi-Supervised Learning for Large Language Models Safety and Content Moderation
By: Eduard Stefan Dinuta, Iustin Sirbu, Traian Rebedea
Potential Business Impact:
Teaches AI to be safer with less data.
Safety for Large Language Models (LLMs) has been an ongoing research focus since their emergence and is even more relevant today given the increasing capability of these models. Currently, all public LLMs ship with guardrails, and multiple datasets have been proposed for training safety classifiers. However, training these classifiers relies on large quantities of labeled data, which can be difficult to acquire, prone to labeling errors, or heavily reliant on synthetic data. To address these issues, we propose a different approach: applying semi-supervised learning techniques, which leverage both labeled and unlabeled data, to improve performance on the safety task. We analyze the improvements these techniques offer both for prompts given to LLMs and for the models' responses to those prompts. Moreover, since augmentation is a central component of semi-supervised algorithms, we demonstrate the importance of task-specific augmentations, which significantly improve performance compared to general-purpose augmentation techniques.
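To make the semi-supervised setup concrete, a single training step can be sketched as consistency training in the FixMatch style: labeled safety examples drive a supervised loss, while confident pseudo-labels on weakly augmented unlabeled text supervise strongly augmented views of the same text. This is an illustrative sketch only, not the paper's exact recipe: the `weak_augment` and `strong_augment` callables, the batch layout, and the `threshold` and `lambda_u` values are hypothetical placeholders standing in for the task-specific augmentations and hyperparameters the authors study.

```python
import torch
import torch.nn.functional as F

def semi_supervised_step(model, labeled_batch, unlabeled_texts,
                         weak_augment, strong_augment,
                         threshold=0.95, lambda_u=1.0):
    """One illustrative FixMatch-style step for a text safety classifier.

    labeled_batch:   (input_ids, attention_mask, labels) from labeled safety data.
    unlabeled_texts: raw, unlabeled prompts or responses.
    weak_augment / strong_augment: hypothetical task-specific augmentations that
        return (input_ids, attention_mask) tensors for the augmented texts.
    """
    # Supervised loss on the labeled safety examples.
    input_ids, attention_mask, labels = labeled_batch
    logits = model(input_ids=input_ids, attention_mask=attention_mask).logits
    loss_sup = F.cross_entropy(logits, labels)

    # Pseudo-labels from weakly augmented unlabeled text (no gradients).
    with torch.no_grad():
        weak_ids, weak_mask = weak_augment(unlabeled_texts)
        weak_logits = model(input_ids=weak_ids, attention_mask=weak_mask).logits
        probs = weak_logits.softmax(dim=-1)
        confidence, pseudo_labels = probs.max(dim=-1)
        mask = (confidence >= threshold).float()  # keep only confident predictions

    # Consistency loss: strongly augmented views must match the pseudo-labels.
    strong_ids, strong_mask = strong_augment(unlabeled_texts)
    strong_logits = model(input_ids=strong_ids, attention_mask=strong_mask).logits
    per_example = F.cross_entropy(strong_logits, pseudo_labels, reduction="none")
    loss_unsup = (per_example * mask).mean()

    return loss_sup + lambda_u * loss_unsup
```

In this framing, swapping a general-purpose augmentation for a safety-specific one only changes what `weak_augment` and `strong_augment` do to the text, which is exactly where the paper locates the largest performance gains.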
Similar Papers
The Problem with Safety Classification is not just the Models
Computation and Language
Makes AI safer for everyone, everywhere.
Unforgotten Safety: Preserving Safety Alignment of Large Language Models with Continual Learning
Computation and Language
Keeps smart computer programs safe when learning new things.
A Survey on Data Security in Large Language Models
Cryptography and Security
Protects smart computer programs from bad data.