Analyzing Prominent LLMs: An Empirical Study of Performance and Complexity in Solving LeetCode Problems
By: Everton Guimaraes, Nathalia Nascimento, Chandan Shivalingaiah, and others
Potential Business Impact:
Helps coders pick the best AI for writing code.
Large Language Models (LLMs) like ChatGPT, Copilot, Gemini, and DeepSeek are transforming software engineering by automating key tasks, including code generation, testing, and debugging. As these models become integral to development workflows, a systematic comparison of their performance is essential for optimizing their use in real-world applications. This study benchmarks these four prominent LLMs on 150 LeetCode problems across easy, medium, and hard difficulties, generating solutions in Java and Python. We evaluate each model on execution time, memory usage, and algorithmic complexity, revealing significant performance differences. ChatGPT demonstrates consistent efficiency in execution time and memory usage, while Copilot and DeepSeek show variability as task complexity increases. Gemini, although effective on simpler tasks, requires more attempts as problem difficulty rises. Our findings offer actionable guidance for developers selecting LLMs for specific coding tasks, along with insights into the performance and complexity of GPT-like generated solutions.
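The study's two main metrics, execution time and memory usage, can be sketched in Python. This is an illustrative harness, not the authors' actual benchmarking code; `benchmark` and the sample `two_sum` solution (a classic LeetCode problem an LLM might generate) are hypothetical names chosen for the example.

```python
import time
import tracemalloc

def two_sum(nums, target):
    # Sample LLM-style solution to LeetCode "Two Sum":
    # hash-map lookup, O(n) time, O(n) space.
    seen = {}
    for i, x in enumerate(nums):
        if target - x in seen:
            return [seen[target - x], i]
        seen[x] = i
    return []

def benchmark(fn, *args):
    """Measure wall-clock time and peak traced memory of one call."""
    tracemalloc.start()
    start = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak

result, elapsed, peak = benchmark(two_sum, list(range(10_000)), 19_997)
print(result, f"{elapsed:.4f}s", f"{peak} bytes peak")
```

A real comparison would repeat each measurement many times per problem and model to average out runtime noise, which is presumably why the study reports attempts and difficulty tiers rather than single runs.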
Similar Papers
Large Language Models for Education and Research: An Empirical and User Survey-based Analysis
Artificial Intelligence
Helps students and researchers learn and solve problems.
Evaluation of LLMs for mathematical problem solving
Artificial Intelligence
Computers solve harder math problems better.
Evaluating Code Generation of LLMs in Advanced Computer Science Problems
Artificial Intelligence
Helps computers write harder code, but not perfectly.