DSBC: Data Science task Benchmarking with Context engineering

Published: July 31, 2025 | arXiv ID: 2507.23336v1

By: Ram Mohan Rao Kadiyala, Siddhant Gupta, Jebish Purbey, and more

Potential Business Impact:

Benchmarks LLM-based agents that automate data science tasks.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Recent advances in large language models (LLMs) have significantly impacted data science workflows, giving rise to specialized data science agents designed to automate analytical tasks. Despite rapid adoption, systematic benchmarks evaluating the efficacy and limitations of these agents remain scarce. In this paper, we introduce a comprehensive benchmark crafted to reflect real-world user interactions with data science agents, informed by observed usage of our commercial applications. We evaluate three LLMs: Claude-4.0-Sonnet, Gemini-2.5-Flash, and OpenAI-o4-Mini, across three approaches: zero-shot with context engineering, multi-step with context engineering, and with SmolAgent. Our benchmark assesses performance across a diverse set of eight data science task categories, and additionally explores the sensitivity of models to common prompting issues such as data leakage and slightly ambiguous instructions. We further investigate the influence of temperature parameters on overall and task-specific outcomes for each model and approach. Our findings reveal distinct performance disparities among the evaluated models and methodologies, highlighting critical factors that affect practical deployment. The benchmark dataset and evaluation framework introduced herein aim to provide a foundation for future research on more robust and effective data science agents.
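
The abstract describes an evaluation grid of models, approaches, and temperature settings over eight task categories. A minimal sketch of such a harness is shown below; the model identifiers, temperature values, task format, and the `run_agent` and `score` placeholders are illustrative assumptions, not the paper's actual framework or code.

```python
from itertools import product

# Illustrative grid inferred from the abstract; the paper's exact settings may differ.
MODELS = ["claude-4.0-sonnet", "gemini-2.5-flash", "openai-o4-mini"]
APPROACHES = ["zero_shot_context", "multi_step_context", "smolagent"]
TEMPERATURES = [0.0, 0.5, 1.0]  # assumed sweep values

def run_agent(model: str, approach: str, temperature: float, task: dict) -> str:
    """Placeholder: run one model/approach/temperature combination on one task."""
    raise NotImplementedError("wire up the actual agent backend here")

def score(answer: str, task: dict) -> float:
    """Placeholder: grade the agent's answer against the task's reference solution."""
    raise NotImplementedError("task-specific grading goes here")

def evaluate(tasks: list[dict]) -> dict:
    """Run every (model, approach, temperature) combination over all tasks and
    aggregate mean accuracy per configuration, broken down by task category."""
    results: dict = {}
    for model, approach, temp in product(MODELS, APPROACHES, TEMPERATURES):
        per_category: dict[str, list[float]] = {}
        for task in tasks:
            answer = run_agent(model, approach, temp, task)
            per_category.setdefault(task["category"], []).append(score(answer, task))
        results[(model, approach, temp)] = {
            cat: sum(scores) / len(scores) for cat, scores in per_category.items()
        }
    return results
```

Each task here is assumed to carry a "category" key naming one of the eight task categories; the returned dictionary then exposes per-category accuracy for every configuration, which is the kind of breakdown the temperature and prompting-issue analyses in the abstract would require.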

Page Count
32 pages

Category
Computer Science:
Artificial Intelligence