Verifiable Dropout: Turning Randomness into a Verifiable Claim
By: Kichang Lee, Sungmin Lee, Jaeho Jin, and more
Potential Business Impact:
Proves AI training used random numbers fairly.
Modern cloud-based AI training relies on extensive telemetry and logs to ensure accountability. While these audit trails enable retrospective inspection, they struggle to address the inherent non-determinism of deep learning. Stochastic operations, such as dropout, create an ambiguity surface where attackers can mask malicious manipulations as natural random variance, granting them plausible deniability. Consequently, existing logging mechanisms cannot verify whether stochastic values were generated and applied honestly without exposing sensitive training data. To close this integrity gap, we introduce Verifiable Dropout, a privacy-preserving mechanism based on zero-knowledge proofs. We treat stochasticity not as an excuse but as a verifiable claim. Our approach binds dropout masks to a deterministic, cryptographically verifiable seed and proves the correct execution of the dropout operation. This design enables users to audit the integrity of stochastic training steps post-hoc, ensuring that randomness was neither biased nor cherry-picked, while strictly preserving the confidentiality of the model and data.
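To make the seed-binding idea concrete, here is a minimal sketch, not the paper's actual protocol: it shows how a dropout mask can be derived deterministically from a committed seed, so an auditor who later obtains the seed can recompute the mask and confirm it was applied honestly. The function names (commit_seed, derive_mask) and the SHA-256-based derivation are illustrative assumptions; the paper's mechanism additionally proves correct execution in zero knowledge, so the seed, model, and data would not be revealed as they are in this simplified example.

import hashlib
import numpy as np

def commit_seed(seed: bytes) -> str:
    # Publish a hash commitment to the seed before training begins,
    # so the trainer cannot swap seeds after seeing the results.
    return hashlib.sha256(seed).hexdigest()

def derive_mask(seed: bytes, step: int, layer: str,
                shape: tuple, p: float) -> np.ndarray:
    # Derive a dropout mask as a pure function of (seed, step, layer).
    # Domain separation gives each step/layer an independent random stream.
    material = hashlib.sha256(seed + f"|{step}|{layer}".encode()).digest()
    rng = np.random.default_rng(int.from_bytes(material, "big"))
    keep = rng.random(shape) >= p               # keep units with prob. 1 - p
    return keep.astype(np.float32) / (1.0 - p)  # inverted-dropout scaling

# Trainer: commit first, then use the derived mask at each training step.
seed = b"\x01" * 32
print("commitment:", commit_seed(seed))
mask = derive_mask(seed, step=42, layer="fc1", shape=(4, 4), p=0.5)

# Auditor: recompute from the revealed seed and check it matches the log.
assert np.array_equal(mask, derive_mask(seed, 42, "fc1", (4, 4), 0.5))

Because the mask is a deterministic function of committed inputs, any deviation, such as a cherry-picked or biased mask, becomes detectable rather than deniable as "natural random variance."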
Similar Papers
Zero-Knowledge Proof Based Verifiable Inference of Models
Cryptography and Security
Lets you check AI answers without seeing its secrets.
Efficient Public Verification of Private ML via Regularization
Machine Learning (CS)
Lets you check if private data stays private.
Convergence, design and training of continuous-time dropout as a random batch method
Machine Learning (CS)
Makes computer learning faster and more accurate.