Large Language Models for Education and Research: An Empirical and User Survey-based Analysis
By: Md Mostafizer Rahman, Ariful Islam Shiplu, Md Faizul Ibne Amin, and more
Pretrained Large Language Models (LLMs) have achieved remarkable success across diverse domains, with education and research emerging as particularly impactful areas. Among current state-of-the-art LLMs, ChatGPT and DeepSeek exhibit strong capabilities in mathematics, science, medicine, literature, and programming. In this study, we present a comprehensive evaluation of these two LLMs through background technology analysis, empirical experiments, and a real-world user survey. The evaluation explores trade-offs among model accuracy, computational efficiency, and user experience in educational and research settings. We benchmarked the performance of these LLMs in text generation, programming, and specialized problem-solving. Experimental results show that ChatGPT excels in general language understanding and text generation, while DeepSeek demonstrates superior performance in programming tasks due to its efficiency-focused design. Moreover, both models deliver medically accurate diagnostic outputs and effectively solve complex mathematical problems. Complementing these quantitative findings, a survey of students, educators, and researchers highlights the practical benefits and limitations of these models, offering deeper insights into their role in advancing education and research.
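The abstract does not specify how the per-task benchmarking was implemented, but a minimal evaluation harness of the kind it describes could be sketched as follows. The `Task` dataclass, `benchmark` helper, and the stub model standing in for a ChatGPT or DeepSeek API call are all illustrative assumptions, not taken from the paper:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    """One benchmark item: a prompt plus a pass/fail check on the answer."""
    prompt: str
    check: Callable[[str], bool]

def benchmark(model: Callable[[str], str], tasks: list[Task]) -> float:
    """Return the fraction of tasks whose model answer passes its check."""
    passed = sum(1 for t in tasks if t.check(model(t.prompt)))
    return passed / len(tasks)

# Stub "model" in place of a real ChatGPT/DeepSeek API call (hypothetical).
def stub_model(prompt: str) -> str:
    return "4" if "2 + 2" in prompt else "unknown"

tasks = [
    Task("What is 2 + 2?", lambda a: a.strip() == "4"),
    Task("What is the capital of France?", lambda a: "paris" in a.lower()),
]
print(benchmark(stub_model, tasks))  # 0.5
```

In practice, `stub_model` would be replaced by an API client for each model under test, and the same task list would be run against both to make the accuracy comparison direct.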