LMSpell: Neural Spell Checking for Low-Resource Languages
By: Akesh Gunathilake, Nadil Karunarathne, Tharusha Bandaranayake, and more
Potential Business Impact:
Fixes spelling errors in languages with few digital resources.
Spell correction remains a challenging problem for low-resource languages (LRLs). While pretrained language models (PLMs) have been employed for spell correction, their use is still limited to a handful of languages, and there has been no systematic comparison across PLMs. We present the first empirical study of the effectiveness of PLMs for spell correction that includes LRLs. We find that Large Language Models (LLMs) outperform their encoder-based and encoder-decoder counterparts when the fine-tuning dataset is large. This observation holds even for languages on which the LLM was not pre-trained. We release LMSpell, an easy-to-use spell correction toolkit that works across PLMs. It includes an evaluation function that compensates for LLM hallucination. Further, we present a case study on Sinhala to shed light on the plight of spell correction for LRLs.
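The abstract mentions an evaluation function that compensates for LLM hallucination, i.e., for models that rewrite or invent words beyond fixing spelling. As an illustration only, here is a minimal sketch of how such hallucination-aware scoring could work; the function name `hallucination_aware_score`, the token-level alignment, and the penalty scheme are assumptions for this sketch, not LMSpell's actual API.

```python
from difflib import SequenceMatcher

def hallucination_aware_score(prediction: str, reference: str) -> float:
    """Token-level correction score that penalizes hallucinated edits.

    A prediction token counts as correct only when it aligns with a
    reference token; tokens the reference does not license (insertions,
    rewrites) are treated as hallucinations and subtracted from the score.
    """
    pred, ref = prediction.split(), reference.split()
    # Align the prediction to the reference; matching blocks are the
    # tokens the model got right.
    matcher = SequenceMatcher(a=ref, b=pred, autojunk=False)
    correct = sum(block.size for block in matcher.get_matching_blocks())
    # Any prediction token outside a matching block is a hallucination.
    hallucinated = len(pred) - correct
    # Reward recovered reference tokens, penalize hallucinations,
    # and clamp the result to [0, 1].
    return max(0.0, (correct - hallucinated) / max(len(ref), 1))

if __name__ == "__main__":
    ref = "the quick brown fox jumps"
    print(hallucination_aware_score("the quick brown fox jumps", ref))   # 1.0
    print(hallucination_aware_score("a fast brown fox leaps high", ref)) # 0.0
```

Under this scheme a model that faithfully corrects spelling scores near 1.0, while a model that fluently rewrites the sentence is penalized even if its output is grammatical, which is the failure mode the paper's evaluation function is said to address.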
Similar Papers
Self-Correction Makes LLMs Better Parsers
Computation and Language
Teaches computers to understand sentences better.
SLMFix: Leveraging Small Language Models for Error Fixing with Reinforcement Learning
Software Engineering
Fixes computer code errors automatically for better programs.
Speech LLMs in Low-Resource Scenarios: Data Volume Requirements and the Impact of Pretraining on High-Resource Languages
Audio and Speech Processing
Helps computers understand speech in rare languages.