Score: 1

Fundamental Safety-Capability Trade-offs in Fine-tuning Large Language Models

Published: March 24, 2025 | arXiv ID: 2503.20807v1

By: Pin-Yu Chen, Han Shen, Payel Das, and more

BigTech Affiliations: IBM

Potential Business Impact:

Quantifies how much task capability fine-tuning can add before an LLM's safety degrades, guiding strategies that improve models without making them unsafe.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Fine-tuning Large Language Models (LLMs) on task-specific datasets is a primary way of adapting them to downstream applications. However, it has been empirically observed that enhancing capability this way inevitably compromises safety, a phenomenon known as the safety-capability trade-off in LLM fine-tuning. This paper presents a theoretical framework for understanding the interplay between safety and capability in two primary safety-aware LLM fine-tuning strategies, providing new insights into the effects of data similarity, context overlap, and the alignment loss landscape. Our theoretical results characterize the fundamental limits of the safety-capability trade-off in LLM fine-tuning, and numerical experiments validate these limits.
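To make the trade-off concrete, below is a minimal Python (PyTorch) sketch of two common safety-aware fine-tuning patterns of the kind the abstract alludes to: mixing safety data into the fine-tuning objective, and penalizing drift from the safety-aligned model. This is an illustration under stated assumptions, not the paper's exact formulation; the tiny linear model, the synthetic data, and the trade-off weight `lam` are all hypothetical stand-ins so the snippet runs without model downloads.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(16, 4)    # toy stand-in for the LLM being fine-tuned
aligned = nn.Linear(16, 4)  # frozen copy of the safety-aligned starting model
aligned.load_state_dict(model.state_dict())
for p in aligned.parameters():
    p.requires_grad_(False)

opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Hypothetical synthetic batches: a capability (task) set and a safety set.
task_x, task_y = torch.randn(32, 16), torch.randint(0, 4, (32,))
safe_x, safe_y = torch.randn(32, 16), torch.randint(0, 4, (32,))
lam = 0.5  # assumed trade-off weight: larger favors safety over capability

for step in range(100):
    opt.zero_grad()
    # Strategy 1: mix safety examples into the fine-tuning objective.
    loss = loss_fn(model(task_x), task_y) + lam * loss_fn(model(safe_x), safe_y)
    # Strategy 2: additionally penalize output drift from the aligned model,
    # a simple proxy for staying in a low-alignment-loss region.
    drift = nn.functional.mse_loss(model(task_x), aligned(task_x))
    (loss + lam * drift).backward()
    opt.step()
```

Sweeping `lam` and plotting task loss against safety loss traces an empirical trade-off curve, the kind of frontier whose fundamental limits the paper characterizes theoretically.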

Country of Origin
🇺🇸 United States

Page Count
14 pages

Category
Statistics: Machine Learning (stat.ML)