Trustless Federated Learning at Edge-Scale: A Compositional Architecture for Decentralized, Verifiable, and Incentive-Aligned Coordination
By: Pius Onobhayedo, Paul Osemudiame Oamen
Potential Business Impact:
AI learns from everyone's data safely.
Artificial intelligence is retracing the Internet's path from centralized provision to distributed creation. Initially, resource-intensive computation concentrates within institutions capable of training and serving large models. Eventually, as federated learning matures, billions of edge devices holding sensitive data will be able to collectively improve models without surrendering raw information, enabling both contribution and consumption at scale. This democratic vision remains unrealized because of compositional gaps: aggregators handle updates without accountability; economic mechanisms are often absent and, even when present, remain vulnerable to gaming; coordination serializes state modifications, limiting scalability; and governance permits retroactive manipulation. This work addresses these gaps with cryptographic receipts that prove aggregation correctness, geometric novelty measurement that prevents incentive gaming, parallel object ownership that achieves linear scalability, and time-locked policies that check retroactive manipulation.
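To make the last sentence concrete, here is a minimal, hypothetical Python sketch of two of those mechanisms: a geometric novelty score (here, one minus the maximum cosine similarity to previously accepted updates) that rejects replayed or copied client updates, and a hash-based receipt that commits the aggregator to exactly the inputs, weights, and output it used, so anyone holding the accepted updates can recompute and verify it. The names (novelty_score, aggregate_with_receipt), the cosine-distance metric, the min_novelty threshold, and the SHA-256 commitment are illustrative assumptions, not the paper's actual construction.

    """Hypothetical sketch: geometric novelty scoring plus a verifiable
    aggregation receipt. All names and thresholds are assumptions made
    for illustration, not the authors' implementation."""
    import hashlib
    import json
    import numpy as np

    def novelty_score(update: np.ndarray, accepted: list[np.ndarray]) -> float:
        """Return 1 - max cosine similarity to previously accepted updates.

        A copied or replayed update scores near 0, so gaming the reward by
        resubmitting others' gradients earns nothing under this stand-in metric.
        """
        if not accepted:
            return 1.0
        u = update / (np.linalg.norm(update) + 1e-12)
        sims = [float(u @ (a / (np.linalg.norm(a) + 1e-12))) for a in accepted]
        return 1.0 - max(sims)

    def aggregate_with_receipt(updates: list[np.ndarray], min_novelty: float = 0.05):
        """Weight updates by novelty, average them, and emit a hash-based receipt."""
        accepted: list[np.ndarray] = []
        weights: list[float] = []
        for upd in updates:
            score = novelty_score(upd, accepted)
            if score >= min_novelty:          # reject near-duplicates outright
                accepted.append(upd)
                weights.append(score)
        if not accepted:
            raise ValueError("no sufficiently novel updates")

        w = np.array(weights) / np.sum(weights)
        aggregate = np.sum([wi * a for wi, a in zip(w, accepted)], axis=0)

        # Receipt: a commitment to the exact inputs, weights, and output, so any
        # party holding the accepted updates can recompute and check the hash.
        receipt = hashlib.sha256(json.dumps({
            "inputs": [hashlib.sha256(a.tobytes()).hexdigest() for a in accepted],
            "weights": w.round(6).tolist(),
            "output": hashlib.sha256(aggregate.tobytes()).hexdigest(),
        }, sort_keys=True).encode()).hexdigest()
        return aggregate, receipt

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        honest = [rng.normal(size=8) for _ in range(3)]
        replayed = honest[0].copy()               # a gamed, copied update
        agg, rcpt = aggregate_with_receipt(honest + [replayed])
        print("aggregate:", agg.round(3))
        print("receipt:", rcpt)

In this toy run the replayed update is rejected because its novelty is essentially zero, and the published receipt lets any third party recompute the aggregation from the accepted updates and confirm the aggregator did not substitute or omit contributions.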
Similar Papers
Incentive-Based Federated Learning: Architectural Elements and Future Directions
Machine Learning (CS)
Makes computers learn together without sharing private info.
Federated Learning Survey: A Multi-Level Taxonomy of Aggregation Techniques, Experimental Insights, and Future Frontiers
Machine Learning (CS)
Lets computers learn together without sharing secrets.