Score: 1

Benchmarking Transferability: A Framework for Fair and Robust Evaluation

Published: April 28, 2025 | arXiv ID: 2504.20121v1

By: Alireza Kazemi, Helia Rezvani, Mahsa Baktashmotlagh

Potential Business Impact:

Evaluates how reliably machine-learning models carry over to new domains, supporting better-informed model selection in cross-domain applications.

Business Areas:
Test and Measurement, Data and Analytics

Transferability scores aim to quantify how well a model trained on one domain generalizes to a target domain. Despite numerous methods proposed for measuring transferability, their reliability and practical usefulness remain inconclusive, often due to differing experimental setups, datasets, and assumptions. In this paper, we introduce a comprehensive benchmarking framework designed to systematically evaluate transferability scores across diverse settings. Through extensive experiments, we observe variations in how different metrics perform under various scenarios, suggesting that current evaluation practices may not fully capture each method's strengths and limitations. Our findings underscore the value of standardized assessment protocols, paving the way for more reliable transferability measures and better-informed model selection in cross-domain applications. Additionally, we achieved a 3.5% improvement using our proposed metric in the head-training fine-tuning experimental setup. Our code is available in this repository: https://github.com/alizkzm/pert_robust_platform.
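To illustrate what a transferability-score benchmark measures, the sketch below computes a simple class-separability score on target-domain features extracted by a pretrained model, then checks how well that score ranks a few hypothetical source models against their fine-tuned target accuracies using Kendall's tau. The score, the synthetic features, and all accuracy numbers are illustrative assumptions for exposition only; they are not the metric or protocol proposed in the paper (see the linked repository for the authors' implementation).

```python
# Illustrative sketch only: a toy transferability score and a ranking check.
# The separability ratio, synthetic data, and accuracy values are assumptions,
# not the paper's proposed metric or benchmark.
import numpy as np
from scipy.stats import kendalltau

def separability_score(features: np.ndarray, labels: np.ndarray) -> float:
    """Ratio of between-class to within-class scatter of target features.
    Higher values suggest the source model's features separate the target
    classes well, i.e. the model may transfer well after fine-tuning."""
    overall_mean = features.mean(axis=0)
    between, within = 0.0, 0.0
    for c in np.unique(labels):
        cls = features[labels == c]
        centroid = cls.mean(axis=0)
        between += len(cls) * np.sum((centroid - overall_mean) ** 2)
        within += np.sum((cls - centroid) ** 2)
    return between / (within + 1e-12)

# Toy target task: 500 samples, 5 classes, 64-d features.
rng = np.random.default_rng(0)
n, d, n_classes = 500, 64, 5
labels = rng.integers(0, n_classes, size=n)
class_dirs = rng.normal(size=(n_classes, d))  # one prototype direction per class

def synth_features(separation: float) -> np.ndarray:
    # Stand-in for "features extracted by a pretrained model": class prototype
    # scaled by `separation` plus Gaussian noise.
    return separation * class_dirs[labels] + rng.normal(size=(n, d))

# Three hypothetical source models with different feature quality; the paired
# fine-tuned accuracies are made up for the example.
scores, accuracies = [], []
for sep, acc in [(0.5, 0.61), (1.0, 0.74), (2.0, 0.88)]:
    scores.append(separability_score(synth_features(sep), labels))
    accuracies.append(acc)

# A good transferability score should rank models in the same order as their
# actual transfer performance; Kendall's tau quantifies that agreement.
tau, _ = kendalltau(scores, accuracies)
print(f"scores={np.round(scores, 3)}, Kendall tau vs. accuracy = {tau:.2f}")
```

In a full benchmark of this kind, the same ranking-agreement check would be repeated across many target datasets, model pools, and fine-tuning regimes, which is where differences between transferability metrics tend to surface.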

Country of Origin
🇦🇺 Australia

Repos / Data Links
https://github.com/alizkzm/pert_robust_platform

Page Count
10 pages

Category
Computer Science:
Machine Learning (CS)