ZKP-FedEval: Verifiable and Privacy-Preserving Federated Evaluation using Zero-Knowledge Proofs

Published: July 15, 2025 | arXiv ID: 2507.11649v2

By: Daniel Commey, Benjamin Appiah, Griffith S. Klogo, and more

Potential Business Impact:

Lets clients prove their AI model performance without revealing private evaluation data.

Federated Learning (FL) enables collaborative model training on decentralized data without exposing raw data. However, the evaluation phase in FL may leak sensitive information through shared performance metrics. In this paper, we propose a novel protocol that incorporates Zero-Knowledge Proofs (ZKPs) to enable privacy-preserving and verifiable evaluation for FL. Instead of revealing raw loss values, clients generate a succinct proof asserting that their local loss is below a predefined threshold. Our approach is implemented without reliance on external APIs, using self-contained modules for federated learning simulation, ZKP circuit design, and experimental evaluation on both the MNIST and Human Activity Recognition (HAR) datasets. We focus on a threshold-based proof for a simple Convolutional Neural Network (CNN) model (for MNIST) and a multi-layer perceptron (MLP) model (for HAR), and evaluate the approach in terms of computational overhead, communication cost, and verifiability.
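The core idea in the abstract is that each client, instead of reporting its raw loss, sends a succinct proof that the loss is below a public threshold. The minimal sketch below mocks that interface in Python: the client commits to a fixed-point encoding of its loss and asserts the threshold claim, and the server accepts without ever seeing the loss. All names (`ThresholdProof`, `client_prove`, `server_verify`) are hypothetical, and the hash commitment is only a stand-in; a real deployment would replace the claim with an actual ZKP (e.g., a Groth16 proof over an arithmetic circuit encoding the comparison).

```python
import hashlib
import secrets
from dataclasses import dataclass

# ZKP circuits operate over integers, so losses are fixed-point encoded.
SCALE = 1_000_000

@dataclass
class ThresholdProof:
    commitment: str  # hash commitment binding the client to its (secret) loss
    claim: bool      # the asserted statement: loss < threshold

def _commit(loss_fp: int, nonce: bytes) -> str:
    # Simple hash commitment; hides the loss, binds the client to it.
    return hashlib.sha256(nonce + loss_fp.to_bytes(8, "big")).hexdigest()

def client_prove(local_loss: float, threshold: float) -> ThresholdProof:
    """Client side: commit to the local loss and assert loss < threshold.
    A real ZKP would make this claim cryptographically sound; this mock
    only illustrates the message flow."""
    loss_fp = int(local_loss * SCALE)
    nonce = secrets.token_bytes(16)
    return ThresholdProof(
        commitment=_commit(loss_fp, nonce),
        claim=loss_fp < int(threshold * SCALE),
    )

def server_verify(proof: ThresholdProof) -> bool:
    """Server side: checks the proof without learning the raw loss.
    In the real protocol this step runs the ZKP verifier."""
    return proof.claim

# A client whose loss clears the threshold produces an accepting proof.
proof = client_prove(local_loss=0.42, threshold=0.5)
print(server_verify(proof))  # True
```

The server never receives the loss value itself, which is the privacy property the paper targets; the verifiability property is what the hash-commitment placeholder cannot provide and the real ZKP circuit supplies.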

Country of Origin
🇬🇭 🇺🇸 Ghana, United States

Page Count
6 pages

Category
Computer Science: Machine Learning (CS)