Adapting Small Language Models to Low-Resource Domains: A Case Study in Hindi Tourism QA

Published: October 29, 2025 | arXiv ID: 2510.25273v1

By: Sandipan Majhi, Paheli Bhattacharya

Potential Business Impact:

Lets lightweight AI models answer tourism questions in Hindi, making domain-specific question answering affordable for low-resource languages.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Domain-specific question answering in low-resource languages faces two key challenges: the scarcity of annotated datasets and the limited domain knowledge of general-purpose language models. In this work, we present a multi-stage fine-tuning strategy for adapting lightweight language models to the Hindi tourism domain by leveraging both original and synthetic training data. Synthetic question-answer pairs are generated using large language models (LLaMA-70B, Phi-14B) and used to augment the limited original dataset. We explore several training methodologies and analyse their impact on domain generalisation. Our results demonstrate that large models can efficiently generate synthetic data, while small models can effectively adapt to it, offering a scalable pathway for low-resource, domain-specific QA.
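
The abstract describes a two-stage recipe: a large model writes synthetic QA pairs from domain text, and a small model is then fine-tuned on the original data augmented with those pairs. Below is a minimal sketch of how such a pipeline might look with Hugging Face transformers; the checkpoints, prompt, passages, and hyperparameters are illustrative assumptions, not the authors' exact setup.

```python
import torch
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
    pipeline,
)

# --- Stage 1: synthetic QA generation with a large model ---
# "meta-llama/Llama-3.3-70B-Instruct" is a stand-in for the paper's "LLaMA-70B".
generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.3-70B-Instruct",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

def make_synthetic_pairs(passages):
    """Ask the large model to write one Hindi QA pair per tourism passage."""
    pairs = []
    for passage in passages:
        prompt = (
            "Write one question and its answer in Hindi based on this "
            f"tourism passage:\n{passage}\n"
        )
        out = generator(prompt, max_new_tokens=128, do_sample=True)
        # Keep only the newly generated QA pair, not the echoed prompt.
        pairs.append(out[0]["generated_text"][len(prompt):].strip())
    return pairs

# --- Stage 2: fine-tune a small model on original + synthetic pairs ---
small_name = "Qwen/Qwen2.5-1.5B-Instruct"  # illustrative lightweight model
tok = AutoTokenizer.from_pretrained(small_name)
if tok.pad_token is None:
    tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(small_name)

def tokenize(batch):
    enc = tok(batch["text"], truncation=True, max_length=512, padding="max_length")
    enc["labels"] = enc["input_ids"].copy()  # causal LM objective
    return enc

original = ["प्रश्न: ताजमहल कहाँ स्थित है? उत्तर: आगरा में।"]  # placeholder gold pair
synthetic = make_synthetic_pairs(
    ["ताजमहल आगरा, उत्तर प्रदेश में यमुना नदी के किनारे स्थित है।"]
)
train = Dataset.from_dict({"text": original + synthetic}).map(
    tokenize, batched=True, remove_columns=["text"]
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="hindi-tourism-qa",
        num_train_epochs=3,
        per_device_train_batch_size=2,
    ),
    train_dataset=train,
)
trainer.train()
```

In practice one would generate many more pairs and filter them for quality before mixing them with the original data; the paper's multi-stage fine-tuning and training-methodology comparisons go beyond this single-pass sketch.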

Country of Origin
🇮🇳 India

Page Count
7 pages

Category
Computer Science: Computation and Language