CEC-Zero: Zero-Supervision Character Error Correction with Self-Generated Rewards
By: Zhiming Lin, Kai Zhao, Sophie Zhang, and more
Large-scale Chinese spelling correction (CSC) remains critical for real-world text processing, yet existing LLMs and supervised methods lack robustness to novel errors and rely on costly annotations. We introduce CEC-Zero, a zero-supervision reinforcement learning framework that addresses this by enabling LLMs to correct their own mistakes. CEC-Zero synthesizes errorful inputs from clean text, computes cluster-consensus rewards from semantic similarity and candidate agreement, and optimizes the policy with PPO. It outperforms supervised baselines by 10--13 F$_1$ points and strong fine-tuned LLM baselines by 5--8 points across 9 benchmarks, and comes with theoretical guarantees of reward unbiasedness and convergence. CEC-Zero establishes a label-free paradigm for robust, scalable CSC, unlocking LLM potential in noisy text pipelines.
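To make the reward step concrete, here is a minimal sketch of a cluster-consensus reward: sample several candidate corrections from the policy, embed them, and reward each candidate by its average semantic similarity to the others, so candidates that agree with the consensus cluster score highest. The function names (`consensus_rewards`, `toy_embed`) and the exact averaging rule are illustrative assumptions, not the paper's implementation; in particular, the toy bag-of-characters embedding stands in for a real sentence-embedding model, and the full method also trains the policy with PPO, which this sketch omits.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def toy_embed(text, dim=64):
    """Toy bag-of-characters embedding so the sketch runs standalone;
    a real system would use a sentence-embedding model instead."""
    v = np.zeros(dim)
    for ch in text:
        v[hash(ch) % dim] += 1.0
    return v

def consensus_rewards(candidates, embed):
    """Score each candidate correction by its mean similarity to the
    other candidates: agreement with the consensus earns a higher
    reward, and no gold-standard correction is ever consulted."""
    vecs = [embed(c) for c in candidates]
    n = len(vecs)
    rewards = []
    for i in range(n):
        sims = [cosine(vecs[i], vecs[j]) for j in range(n) if j != i]
        rewards.append(sum(sims) / max(len(sims), 1))
    return rewards

if __name__ == "__main__":
    # Two of the three sampled corrections agree; the outlier ("天汽")
    # keeps a typo and should receive the lowest consensus reward.
    cands = ["今天天气很好。", "今天天汽很好。", "今天天气很好。"]
    print(consensus_rewards(cands, toy_embed))
```

Because the reward depends only on agreement among the model's own samples, no human labels enter the loop, which is what makes the training signal zero-supervision.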
Similar Papers
CEC-Zero: Chinese Error Correction Solution Based on LLM
Computation and Language
Teaches computers to fix Chinese text errors on their own.
A Training-free LLM-based Approach to General Chinese Character Error Correction
Computation and Language
Fixes all kinds of Chinese typing mistakes, even missing characters.
SSR-Zero: Simple Self-Rewarding Reinforcement Learning for Machine Translation
Computation and Language
Translates languages better without human help.