Challenges and Applications of Large Language Models: A Comparison of the GPT and DeepSeek Families of Models
By: Shubham Sharma, Sneha Tuli, Narendra Badam
Potential Business Impact:
Helps researchers, developers, and decision-makers choose between closed-source and open-source AI models for their use cases.
Large Language Models (LLMs) are transforming AI across industries, but their development and deployment remain complex. This survey reviews 16 key challenges in building and using LLMs and examines how these challenges are addressed by two state-of-the-art models with distinct approaches: OpenAI's closed-source GPT-4o (May 2024 update) and DeepSeek-V3-0324 (March 2025), a large open-source Mixture-of-Experts model. Through this comparison, we showcase the trade-offs between closed-source models (robust safety, fine-tuned reliability) and open-source models (efficiency, adaptability). We also explore LLM applications across different domains (from chatbots and coding tools to healthcare and education), highlighting which model attributes are best suited for each use case. This survey aims to guide AI researchers, developers, and decision-makers in understanding current LLM capabilities, limitations, and best practices.
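The abstract contrasts a dense closed-source model with DeepSeek-V3's Mixture-of-Experts (MoE) design. As a rough illustration of the core MoE idea only (not DeepSeek-V3's actual implementation), the sketch below shows top-k gated routing, where each token is processed by only a few expert feed-forward blocks; the layer sizes, names, and the simple softmax gate are illustrative assumptions.

```python
# Minimal sketch of top-k Mixture-of-Experts routing (illustrative only;
# not DeepSeek-V3's actual architecture -- sizes, names, and the softmax
# gate are simplifying assumptions).
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def moe_layer(tokens, gate_w, expert_ws, top_k=2):
    """Route each token to its top_k experts and mix their outputs.

    tokens:    (n_tokens, d_model) input activations
    gate_w:    (d_model, n_experts) router weights
    expert_ws: list of (d_model, d_model) per-expert weight matrices
    """
    scores = softmax(tokens @ gate_w)                 # (n_tokens, n_experts)
    top = np.argsort(-scores, axis=1)[:, :top_k]      # indices of chosen experts
    out = np.zeros_like(tokens)
    for t in range(tokens.shape[0]):
        weights = scores[t, top[t]]
        weights = weights / weights.sum()             # renormalize over chosen experts
        for w, e in zip(weights, top[t]):
            out[t] += w * (tokens[t] @ expert_ws[e])  # only top_k experts run per token
    return out

# Toy usage: 4 tokens, 8-dim model, 4 experts, 2 active per token.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
gate_w = rng.normal(size=(8, 4))
experts = [rng.normal(size=(8, 8)) for _ in range(4)]
print(moe_layer(tokens, gate_w, experts).shape)       # (4, 8)
```

Because only top_k experts run per token, an MoE model can hold far more total parameters than it activates at inference time, which is the efficiency trade-off the abstract alludes to for open-source MoE models.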
Similar Papers
Large Language Models for Education and Research: An Empirical and User Survey-based Analysis
Artificial Intelligence
Helps students and researchers learn and solve problems.
Comparison of Large Language Models for Deployment Requirements
Computation and Language
Helps pick the best AI for your needs.
A Comparison of DeepSeek and Other LLMs
Computation and Language
Tests AI writing and finds DeepSeek performs better than most models.