TEACH: Text Encoding as Curriculum Hints for Scene Text Recognition
By: Xiahan Yang, Hui Zheng
Potential Business Impact:
Helps computers read text in pictures better.
Scene Text Recognition (STR) remains a challenging task due to complex visual appearances and limited semantic priors. We propose TEACH, a novel training paradigm that injects ground-truth text into the model as auxiliary input and progressively reduces its influence during training. By encoding target labels into the embedding space and applying loss-aware masking, TEACH simulates a curriculum learning process that guides the model from label-dependent learning to fully visual recognition. Unlike language model-based approaches, TEACH requires no external pretraining and introduces no inference overhead. It is model-agnostic and can be seamlessly integrated into existing encoder-decoder frameworks. Extensive experiments across multiple public benchmarks show that models trained with TEACH achieve consistently improved accuracy, especially under challenging conditions, validating its robustness and general applicability.
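Below is a minimal PyTorch sketch of what a TEACH-style training step could look like, based only on the abstract. The toy encoder-decoder (`ToySTRModel`), the additive way label hints are injected, the linear decay schedule (`teach_ratio`), and the median-based loss-aware mask are all illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a TEACH-style training step (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToySTRModel(nn.Module):
    """Stand-in encoder-decoder: conv encoder + per-step linear classifier."""

    def __init__(self, num_classes: int, embed_dim: int, max_len: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, embed_dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, max_len)),   # (B, D, 1, T)
        )
        self.decoder = nn.Linear(embed_dim, num_classes)

    def forward(self, images, aux_hints=None):
        feats = self.encoder(images).squeeze(2).transpose(1, 2)  # (B, T, D)
        if aux_hints is not None:
            feats = feats + aux_hints   # inject encoded label hints additively (assumed)
        return self.decoder(feats)      # (B, T, num_classes)


def teach_ratio(step: int, total_steps: int) -> float:
    """Curriculum schedule (assumed): hint strength decays linearly from 1 to 0."""
    return max(0.0, 1.0 - step / total_steps)


def training_step(model, hint_embed, images, labels, step, total_steps):
    # 1. Hint-free forward pass to estimate per-sample difficulty.
    with torch.no_grad():
        vis_logits = model(images)                              # (B, T, C)
        per_sample = F.cross_entropy(
            vis_logits.transpose(1, 2), labels, reduction="none"
        ).mean(dim=1)                                           # (B,)

    # 2. Loss-aware mask (assumed rule): keep hints only for samples the
    #    visual model still finds hard (loss above the batch median).
    hard = (per_sample > per_sample.median()).float()           # (B,)

    # 3. Encode ground-truth labels into the embedding space and scale them
    #    by the decaying curriculum ratio and the per-sample mask.
    hints = hint_embed(labels)                                  # (B, T, D)
    scale = teach_ratio(step, total_steps) * hard.view(-1, 1, 1)
    logits = model(images, aux_hints=hints * scale)

    return F.cross_entropy(logits.transpose(1, 2), labels)


# Toy usage with random data.
num_classes, embed_dim, max_len = 40, 64, 25
model = ToySTRModel(num_classes, embed_dim, max_len)
hint_embed = nn.Embedding(num_classes, embed_dim)   # maps labels into the embedding space
images = torch.randn(8, 3, 32, 128)
labels = torch.randint(0, num_classes, (8, max_len))
loss = training_step(model, hint_embed, images, labels, step=100, total_steps=10_000)
loss.backward()
```

One consequence of this kind of design: because the hint scale reaches zero by the end of training and the hint path is skipped whenever `aux_hints` is `None`, inference reduces to a plain visual forward pass, consistent with the abstract's claim of no inference overhead.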
Similar Papers
Text-Guided Semantic Image Encoder
CV and Pattern Recognition
Helps computers understand pictures better based on questions.
TeRA: Rethinking Text-guided Realistic 3D Avatar Generation
CV and Pattern Recognition
Creates realistic 3D people from text descriptions.
HTR-ConvText: Leveraging Convolution and Textual Information for Handwritten Text Recognition
CV and Pattern Recognition
Helps computers read messy handwriting better.