Can Agentic AI Match the Performance of Human Data Scientists?
By: An Luo, Jin Du, Fangqiao Tian, and more
Data science plays a critical role in transforming complex data into actionable insights across numerous domains. Recent developments in large language models (LLMs) have significantly automated data science workflows, but a fundamental question persists: Can these agentic AI systems truly match the performance of human data scientists who routinely leverage domain-specific knowledge? We explore this question by designing a prediction task where a crucial latent variable is hidden in relevant image data instead of tabular features. As a result, agentic AI that generates generic code for modeling tabular data cannot perform well, while human experts can identify the important hidden variable using domain knowledge. We demonstrate this idea with a synthetic dataset for property insurance. Our experiments show that agentic AI relying on a generic analytics workflow falls short of methods that use domain-specific insights. This highlights a key limitation of current agentic AI for data science and underscores the need for future research to develop agentic AI systems that can better recognize and incorporate domain knowledge.
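The gap the abstract describes can be illustrated with a toy simulation. The sketch below is our own illustration, not the authors' dataset or code: all names (roof_condition, claim_amount, the feature counts) are assumptions. It builds a synthetic property-insurance target that depends mainly on a latent variable standing in for something only visible in images (e.g., roof condition scored from aerial photos) and compares a model trained on tabular features alone, as a generic agentic pipeline would use, against one that also receives the image-derived feature, as a domain expert might engineer.

```python
# Minimal sketch (assumed setup, not the paper's code): a synthetic property-insurance
# task where the target depends on a latent variable recoverable only from image data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 5_000

# Generic tabular features that an off-the-shelf workflow would see.
tabular = rng.normal(size=(n, 5))

# Latent variable hidden in images (e.g., roof condition from aerial photos).
# An agentic pipeline that only models the tabular file never observes it;
# a domain expert would know to extract it from the imagery.
roof_condition = rng.uniform(0, 1, size=n)

# Target: driven mainly by the hidden variable, only weakly by tabular features.
claim_amount = (10 * roof_condition
                + tabular @ np.array([0.5, 0.3, 0.2, 0.1, 0.05])
                + rng.normal(scale=0.5, size=n))

X_tab = tabular
X_full = np.column_stack([tabular, roof_condition])

for name, X in [("tabular only (generic agent)", X_tab),
                ("tabular + image-derived feature (domain expert)", X_full)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, claim_amount, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print(f"{name}: R^2 = {r2_score(y_te, model.predict(X_te)):.2f}")
```

Under this assumed data-generating process, the tabular-only model explains little of the variance while the model given the image-derived feature performs well, mirroring the qualitative gap the paper reports between generic agentic workflows and domain-informed analysis.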