Human-Level Reasoning: A Comparative Study of Large Language Models on Logical and Abstract Reasoning

Published: October 28, 2025 | arXiv ID: 2510.24435v1

By: Benjamin Grando Moreira

Potential Business Impact:

Tests if AI can think like a person.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Evaluating reasoning ability in Large Language Models (LLMs) is important for advancing artificial intelligence, as it goes beyond mere linguistic task performance: it asks whether these models truly understand information, perform inferences, and draw conclusions in a logically valid way. This study compares the logical and abstract reasoning skills of several LLMs - including GPT, Claude, DeepSeek, Gemini, Grok, Llama, Mistral, Perplexity, and Sabiá - using a set of eight custom-designed reasoning questions. The LLM results are benchmarked against human performance on the same tasks, revealing significant differences and indicating areas where LLMs struggle with deduction.

Country of Origin
🇧🇷 Brazil

Page Count
12 pages

Category
Computer Science:
Artificial Intelligence