Benchmarking LLM-based Agents for Single-cell Omics Analysis
By: Yang Liu, Lu Zhou, Ruikun He, and others
Potential Business Impact:
AI agents could automate single-cell data analysis, helping scientists understand cells faster.
The surge in multimodal single-cell omics data exposes limitations in traditional, manually defined analysis workflows. AI agents offer a paradigm shift, enabling adaptive planning, executable code generation, traceable decisions, and real-time knowledge fusion. However, the lack of a comprehensive benchmark critically hinders progress. We introduce a novel benchmarking evaluation system to rigorously assess agent capabilities in single-cell omics analysis. This system comprises: a unified platform compatible with diverse agent frameworks and LLMs; multidimensional metrics assessing cognitive program synthesis, collaboration, execution efficiency, bioinformatics knowledge integration, and task completion quality; and 50 diverse real-world single-cell omics analysis tasks spanning multi-omics, species, and sequencing technologies. Our evaluation reveals that Grok-3-beta achieves state-of-the-art performance among tested agent frameworks. Multi-agent frameworks significantly enhance collaboration and execution efficiency over single-agent approaches through specialized role division. Attribution analyses of agent capabilities identify that high-quality code generation is crucial for task success, and self-reflection has the most significant overall impact, followed by retrieval-augmented generation (RAG) and planning. This work highlights persistent challenges in code generation, long-context handling, and context-aware knowledge retrieval, providing a critical empirical foundation and best practices for developing robust AI agents in computational biology.
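The abstract describes scoring agents along five dimensions (program synthesis, collaboration, execution efficiency, knowledge integration, and task completion quality) across 50 tasks. A minimal sketch of how such multidimensional scores might be aggregated is below; the metric names, 0-1 scaling, and weights are illustrative assumptions, not the paper's actual scoring formula.

```python
from dataclasses import dataclass

@dataclass
class AgentRunMetrics:
    """Hypothetical per-task scores on a 0-1 scale, one field per
    evaluation dimension named in the abstract."""
    program_synthesis: float      # quality of generated analysis code
    collaboration: float          # multi-agent coordination quality
    execution_efficiency: float   # runtime / resource efficiency
    knowledge_integration: float  # bioinformatics knowledge grounding
    completion_quality: float     # correctness of the final analysis

# Illustrative weights (sum to 1.0); a real benchmark would calibrate these.
WEIGHTS = {
    "program_synthesis": 0.25,
    "collaboration": 0.15,
    "execution_efficiency": 0.15,
    "knowledge_integration": 0.20,
    "completion_quality": 0.25,
}

def aggregate_score(m: AgentRunMetrics) -> float:
    """Weighted mean across the five dimensions for one task."""
    return sum(getattr(m, name) * w for name, w in WEIGHTS.items())

def benchmark_score(runs: list[AgentRunMetrics]) -> float:
    """Mean aggregate score over all benchmark tasks."""
    return sum(aggregate_score(m) for m in runs) / len(runs)
```

The weighted-mean design makes the per-dimension attribution analysis mentioned in the abstract straightforward: each dimension's contribution to the final score is directly inspectable.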
Similar Papers
Continuous Benchmark Generation for Evaluating Enterprise-scale LLM Agents
Software Engineering
Creates better tests for smart computer helpers.
InnovatorBench: Evaluating Agents' Ability to Conduct Innovative LLM Research
Artificial Intelligence
Tests AI's ability to do real science research.