
On the Fragility of Contribution Score Computation in Federated Learning

Published: September 24, 2025 | arXiv ID: 2509.19921v2

By: Balazs Pejo, Marcell Frank, Krisztian Varga, and more

Potential Business Impact:

Helps ensure every participant gets fair credit in collaborative machine learning.

Business Areas:
Crowdsourcing, Collaboration

This paper investigates the fragility of contribution evaluation in federated learning, a critical mechanism for ensuring fairness and incentivizing participation. We argue that contribution scores are susceptible to significant distortions from two fundamental perspectives: architectural sensitivity and intentional manipulation. First, we explore how different model aggregation methods affect these scores. While most research assumes a basic averaging approach, we demonstrate that advanced techniques, including those designed to handle unreliable or diverse clients, can unintentionally yet significantly alter the final scores. Second, we examine vulnerabilities posed by poisoning attacks, where malicious participants strategically manipulate their model updates to inflate their own contribution scores or diminish those of other participants. Through extensive experiments across diverse datasets and model architectures, implemented within the Flower framework, we rigorously show that both the choice of aggregation method and the presence of attackers are potent vectors for distorting contribution scores, highlighting a critical need for more robust evaluation schemes.
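The abstract describes contribution scores computed on top of an aggregation rule; a common baseline in this literature is a leave-one-out score. The sketch below is not the paper's implementation (the paper's experiments run in the Flower framework); it is a minimal illustration, assuming scalar toy "models" and a hypothetical evaluate() utility, of how swapping the aggregator (mean vs. coordinate-wise median) or injecting a manipulated update changes every client's score.

```python
import numpy as np

def fedavg(updates):
    """Plain FedAvg baseline: unweighted mean of client updates."""
    return np.mean(updates, axis=0)

def coord_median(updates):
    """A robust alternative: coordinate-wise median of client updates."""
    return np.median(updates, axis=0)

def leave_one_out_scores(updates, aggregate, evaluate):
    """Leave-one-out contribution scores under a given aggregation rule.

    `evaluate` is a hypothetical utility function mapping an aggregated
    model to a validation metric (higher is better); client i's score is
    the utility drop caused by removing that client's update.
    """
    full_utility = evaluate(aggregate(updates))
    return [
        full_utility - evaluate(aggregate(updates[:i] + updates[i + 1:]))
        for i in range(len(updates))
    ]

# Toy usage: scalar "models", utility = closeness to a target value of 1.0.
evaluate = lambda model: -abs(model - 1.0)
honest = [0.9, 1.1, 1.0]
poisoned = [0.9, 1.1, 5.0]  # last client submits a manipulated update

for name, agg in [("mean", fedavg), ("median", coord_median)]:
    print(name, "honest  ", leave_one_out_scores(honest, agg, evaluate))
    print(name, "poisoned", leave_one_out_scores(poisoned, agg, evaluate))
```

In this toy run the honest clients' scores happen to coincide under both rules, but once the manipulated update is present the two aggregators assign markedly different scores to the same clients, and the honest clients' scores are distorted as well, mirroring the two distortion vectors the paper studies.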

Country of Origin
🇭🇺 Hungary

Page Count
13 pages

Category
Computer Science:
Machine Learning (CS)