StackPilot: Autonomous Function Agents for Scalable and Environment-Free Code Execution
By: Xinkui Zhao, Yifan Zhang, Zhengyi Zhou, and more
Potential Business Impact:
Checks if computer code written by AI works.
Recent advances in large language models (LLMs) have substantially enhanced automated code generation across a wide range of programming languages. Nonetheless, verifying the correctness and executability of LLM-generated code remains a significant challenge, as traditional methods rely on language-specific compilers and environment-dependent runtimes. To overcome these limitations, we introduce StackPilot, an LLM-native, multi-agent framework designed for language-agnostic code verification and execution, which operates independently of conventional toolchains. StackPilot offers three principal innovations: (1) a Function-as-Agents paradigm, in which each function is modeled as an autonomous agent capable of fine-grained reasoning and collaborative verification; (2) an LLM-as-Executor strategy, which enables scalable verification via stack-based scheduling; and (3) a novel snapshot mechanism that preserves complete execution contexts, facilitating deterministic and lossless context switching during verification. Empirical evaluations demonstrate that StackPilot achieves framework reliability rates between 89% and 97%, substantially outperforming baseline approaches. These results indicate that StackPilot can reliably verify and execute a significantly larger proportion of LLM-generated code across diverse programming tasks compared to existing methods.
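The abstract's three mechanisms fit together naturally: function-agents carry their own contexts, a stack schedules nested calls, and snapshots make suspending and resuming a caller lossless. Since the paper publishes no code, the sketch below is purely illustrative; all names (`FunctionAgent`, `Snapshot`, `StackExecutor`) and the step-function interface are assumptions, with a plain Python driver standing in for the LLM executor.

```python
from dataclasses import dataclass, field

# Illustrative sketch only, not StackPilot's actual implementation.
# Each function is an agent whose steps an executor drives; a stack
# schedules nested calls, and snapshots preserve the caller's full
# context across each switch.

@dataclass
class Snapshot:
    """Frozen copy of an agent's execution context."""
    locals_: dict
    step: int

@dataclass
class FunctionAgent:
    """One function modeled as an autonomous agent with its own context."""
    name: str
    body: list                    # steps: fn(locals) -> FunctionAgent | None
    locals_: dict = field(default_factory=dict)
    step: int = 0

    def snapshot(self) -> Snapshot:
        return Snapshot(dict(self.locals_), self.step)

    def restore(self, snap: Snapshot) -> None:
        self.locals_, self.step = dict(snap.locals_), snap.step

class StackExecutor:
    """Stack-based scheduler: a call pushes the callee and snapshots the caller."""
    def run(self, root: FunctionAgent) -> dict:
        stack, saved = [root], {}
        while stack:
            agent = stack[-1]
            if agent.step >= len(agent.body):      # agent finished
                stack.pop()
                if stack:                          # lossless switch back to caller
                    caller = stack[-1]
                    caller.restore(saved.pop(id(caller)))
                    caller.locals_["__ret__"] = agent.locals_.get("result")
                continue
            action = agent.body[agent.step]
            agent.step += 1
            callee = action(agent.locals_)
            if isinstance(callee, FunctionAgent):  # nested call: suspend caller
                saved[id(agent)] = agent.snapshot()
                stack.append(callee)
        return root.locals_

# Usage: a caller agent delegates squaring to a callee agent, then adds one.
square = lambda x: FunctionAgent(
    "square",
    [lambda env: env.__setitem__("result", env["x"] * env["x"])],
    {"x": x})
caller = FunctionAgent(
    "caller",
    [lambda env: square(env["n"]),                              # call square(n)
     lambda env: env.__setitem__("result", env["__ret__"] + 1)],
    {"n": 4})
out = StackExecutor().run(caller)   # out["result"] == 17 for n = 4
```

The snapshot/restore pair is what the abstract calls deterministic, lossless context switching: when the callee returns, the caller resumes from an exact copy of its suspended state rather than from whatever the switch left behind.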
Similar Papers
TypePilot: Leveraging the Scala Type System for Secure LLM-generated Code
Computation and Language
Fixes computer code to stop security problems.
Agentic Auto-Scheduling: An Experimental Study of LLM-Guided Loop Optimization
Programming Languages
Makes computer programs run much faster.
A Survey on Code Generation with LLM-based Agents
Software Engineering
Computers write and fix computer programs themselves.