Score: 1

Delta Activations: A Representation for Finetuned Large Language Models

Published: September 4, 2025 | arXiv ID: 2509.04442v1

By: Zhiqiu Xu, Amish Sethi, Mayur Naik, and more

Potential Business Impact:

Makes large collections of finetuned AI models easier to navigate by organizing them according to the tasks and domains they were adapted to.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The success of powerful open source Large Language Models (LLMs) has enabled the community to create a vast collection of post-trained models adapted to specific tasks and domains. However, navigating and understanding these models remains challenging due to inconsistent metadata and unstructured repositories. We introduce Delta Activations, a method to represent finetuned models as vector embeddings by measuring shifts in their internal activations relative to a base model. This representation allows for effective clustering by domain and task, revealing structure in the model landscape. Delta Activations also demonstrates desirable properties: it is robust across finetuning settings and exhibits an additive property when finetuning datasets are mixed. In addition, we show that Delta Activations can embed tasks via few-shot finetuning, and further explore its use for model selection and merging. We hope Delta Activations can facilitate the practice of reusing publicly available models. Code is available at https://github.com/OscarXZQ/delta_activations.

Repos / Data Links
https://github.com/OscarXZQ/delta_activations

Page Count
21 pages

Category
Computer Science: Machine Learning (CS)