DSBC: Data Science Task Benchmarking with Context Engineering
By: Ram Mohan Rao Kadiyala, Siddhant Gupta, Jebish Purbey, and more
Potential Business Impact:
Tests AI assistants that automate data analysis jobs.
Recent advances in large language models (LLMs) have significantly impacted data science workflows, giving rise to specialized data science agents designed to automate analytical tasks. Despite rapid adoption, systematic benchmarks evaluating the efficacy and limitations of these agents remain scarce. In this paper, we introduce a comprehensive benchmark crafted to reflect real-world user interactions with data science agents, informed by observed usage of our commercial applications. We evaluate three LLMs (Claude-4.0-Sonnet, Gemini-2.5-Flash, and OpenAI-o4-Mini) across three approaches: zero-shot with context engineering, multi-step with context engineering, and an agentic setup using SmolAgent. Our benchmark assesses performance across eight data science task categories and additionally explores the sensitivity of models to common prompting issues, such as data leakage and slightly ambiguous instructions. We further investigate the influence of the temperature parameter on overall and task-specific outcomes for each model and approach. Our findings reveal distinct performance disparities among the evaluated models and methodologies, highlighting critical factors that affect practical deployment. The benchmark dataset and evaluation framework introduced herein aim to provide a foundation for future research on more robust and effective data science agents.
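As a rough illustration of the evaluation grid the abstract describes (models × approaches × temperatures, scored per task category), here is a minimal Python sketch. The function names (run_agent, score_output), the temperature values, the demo category labels, and the exact-match scoring are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the benchmark's evaluation grid described above.
# All specifics below (run_agent, score_output, temperatures, demo categories)
# are assumptions for illustration, not the authors' code.
from itertools import product
from statistics import mean

MODELS = ["claude-4.0-sonnet", "gemini-2.5-flash", "openai-o4-mini"]
APPROACHES = ["zero_shot_context", "multi_step_context", "smolagent"]
TEMPERATURES = [0.0, 0.3, 0.7, 1.0]  # assumed sweep; the actual values are not given here


def run_agent(model: str, approach: str, temperature: float, task: dict) -> str:
    """Stub: dispatch one task to the chosen model/approach and return its answer."""
    return "dummy answer"  # replace with real model / agent calls


def score_output(output: str, task: dict) -> float:
    """Stub: exact-match scoring against the task's reference answer."""
    return 1.0 if output.strip() == task["reference"].strip() else 0.0


def evaluate(tasks: list[dict]) -> dict:
    """Average score per task category for every (model, approach, temperature) cell."""
    results = {}
    for model, approach, temp in product(MODELS, APPROACHES, TEMPERATURES):
        per_category: dict[str, list[float]] = {}
        for task in tasks:  # each task carries one of the eight category labels
            answer = run_agent(model, approach, temp, task)
            per_category.setdefault(task["category"], []).append(score_output(answer, task))
        results[(model, approach, temp)] = {
            cat: mean(scores) for cat, scores in per_category.items()
        }
    return results


if __name__ == "__main__":
    demo_tasks = [  # hypothetical category names for demonstration only
        {"category": "data_cleaning", "prompt": "...", "reference": "dummy answer"},
        {"category": "visualization", "prompt": "...", "reference": "other answer"},
    ]
    print(evaluate(demo_tasks))
```

Sweeping the full grid this way makes it straightforward to compare approaches per model and to plot how temperature shifts accuracy within each task category, which is the kind of analysis the abstract reports.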
Similar Papers
DataSciBench: An LLM Agent Benchmark for Data Science
Computation and Language
Tests how well AI understands data science tasks.
IDA-Bench: Evaluating LLMs on Interactive Guided Data Analysis
Computation and Language
Tests computers on tricky, step-by-step data problems.