A Patient-Doctor-NLP-System to contest inequality for less privileged
By: Subrit Dikshit, Ritu Tiwari, Priyank Jain
Potential Business Impact:
Helps doctors care for Hindi-speaking patients.
Transfer learning (TL) has accelerated the development and availability of large language models (LLMs) for mainstream natural language processing (NLP) use cases. However, training and deploying such gigantic LLMs in resource-constrained, real-world healthcare settings remains challenging. This study addresses the limited support available to visually impaired users and to speakers of low-resource languages such as Hindi who require medical assistance in rural environments. We propose PDFTEMRA (Performant Distilled Frequency Transformer Ensemble Model with Random Activations), a compact transformer-based architecture that combines model distillation, frequency-domain modulation, ensemble learning, and randomized activation patterns to reduce computational cost while preserving language-understanding performance. The model is trained and evaluated on medical question-answering and consultation datasets tailored to Hindi and accessibility scenarios, and its performance is compared against state-of-the-art NLP baselines. Results show that PDFTEMRA achieves comparable performance at substantially lower computational cost, indicating its suitability for accessible, inclusive, low-resource medical NLP applications.
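The abstract names four ingredients of PDFTEMRA: distillation, frequency-domain modulation, ensemble learning, and randomized activations. The paper's actual architecture is not reproduced here, but the interplay of the last three ingredients can be illustrated with a minimal NumPy sketch: each ensemble member filters its input in the frequency domain with a random spectral mask, applies a randomly assigned activation to a small linear layer, and the ensemble averages the members' outputs. All class and variable names (`TinyMember`, `Ensemble`, the mask threshold, layer sizes) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pool of activations from which each member draws at random.
ACTIVATIONS = {
    "relu": lambda x: np.maximum(x, 0.0),
    "tanh": np.tanh,
    "gelu": lambda x: 0.5 * x * (1.0 + np.tanh(
        np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3))),
}

class TinyMember:
    """One ensemble member: a frequency-domain filter on the input,
    a small linear layer, and a randomly assigned activation."""

    def __init__(self, d_in, d_out, rng):
        self.W = rng.normal(0.0, 0.1, (d_in, d_out))
        self.act = ACTIVATIONS[rng.choice(list(ACTIVATIONS))]
        # Random boolean mask over the input's real spectrum:
        # a crude stand-in for learned frequency-domain modulation.
        self.mask = rng.random(d_in // 2 + 1) > 0.3

    def forward(self, x):
        spec = np.fft.rfft(x)                    # to frequency domain
        spec = spec * self.mask                  # zero out some frequencies
        x_filt = np.fft.irfft(spec, n=x.shape[-1])
        return self.act(x_filt @ self.W)

class Ensemble:
    """Averages the outputs of several randomized members."""

    def __init__(self, n_members, d_in, d_out, rng):
        self.members = [TinyMember(d_in, d_out, rng) for _ in range(n_members)]

    def predict(self, x):
        return np.mean([m.forward(x) for m in self.members], axis=0)

ens = Ensemble(n_members=4, d_in=16, d_out=3, rng=rng)
x = rng.normal(size=16)
out = ens.predict(x)
print(out.shape)  # (3,)
```

Averaging over members with different activations and spectral masks is one plausible way to trade a single large model for several cheap, diverse ones; the distillation step (training each small member against a larger teacher) is omitted here for brevity.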
Similar Papers
An Ensemble Classification Approach in A Multi-Layered Large Language Model Framework for Disease Prediction
Computation and Language
Helps doctors find diseases from online patient messages.
A Study of Large Language Models for Patient Information Extraction: Model Architecture, Fine-Tuning Strategy, and Multi-task Instruction Tuning
Computation and Language
Helps computers understand patient stories for better care.
Balancing Natural Language Processing Accuracy and Normalisation in Extracting Medical Insights
Artificial Intelligence
Helps doctors find patient info faster.