EVM-QuestBench: An Execution-Grounded Benchmark for Natural-Language Transaction Code Generation
By: Pei Yang, Wanyi Chen, Ke Wang, and more
Potential Business Impact:
Tests AI for safe money transfers.
Large language models are increasingly applied across software-development scenarios. In on-chain transaction settings, however, even a minor error can cause irreversible losses for users, and existing evaluations often overlook execution accuracy and safety. We introduce EVM-QuestBench, an execution-grounded benchmark for natural-language transaction-script generation on EVM-compatible chains. The benchmark employs dynamic evaluation: instructions are sampled from template pools, numeric parameters are drawn from predefined intervals, and validators verify on-chain outcomes against these instantiated values. EVM-QuestBench contains 107 tasks (62 atomic, 45 composite), and its modular architecture enables rapid task development. The runner executes generated scripts on a forked EVM chain with snapshot isolation, and composite tasks are scored with step-efficiency decay. We evaluate 20 models and find large performance gaps, with split scores revealing a persistent asymmetry between single-action precision and multi-step workflow completion. Code: https://anonymous.4open.science/r/bsc_quest_bench-A9CF/.
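The dynamic-evaluation loop can be pictured with a short sketch. The abstract only states that instructions come from template pools, that numeric parameters are drawn from predefined intervals, and that validators check outcomes against the instantiated values; the template strings, token symbol, interval bounds, and tolerance below are illustrative assumptions, not the benchmark's actual task definitions.

import random
from dataclasses import dataclass

# Hypothetical instruction templates for a token-transfer task.
# Placeholders are filled with freshly sampled values at evaluation
# time, so a model cannot memorize fixed expected outputs.
TEMPLATES = [
    "Transfer {amount} {token} to {recipient}.",
    "Send {recipient} exactly {amount} {token}.",
]

@dataclass
class TaskInstance:
    instruction: str   # natural-language prompt given to the model
    amount: float      # instantiated numeric parameter
    recipient: str     # instantiated address parameter

def instantiate(recipient: str, lo: float = 0.01, hi: float = 1.0) -> TaskInstance:
    """Sample a template and draw the amount from a predefined interval."""
    amount = round(random.uniform(lo, hi), 4)
    template = random.choice(TEMPLATES)
    return TaskInstance(
        template.format(amount=amount, token="BNB", recipient=recipient),
        amount,
        recipient,
    )

def validate(instance: TaskInstance, balance_delta: float, tol: float = 1e-9) -> bool:
    """Check the observed on-chain outcome against the instantiated
    value rather than against a hard-coded reference answer."""
    return abs(balance_delta - instance.amount) <= tol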
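Snapshot isolation on a forked EVM chain typically relies on the evm_snapshot and evm_revert RPC methods exposed by development nodes such as Anvil or Hardhat. A minimal sketch, assuming a local fork at the default port and a callable task script; the benchmark's actual runner interface is not shown in the abstract.

from web3 import Web3

# Connect to a local fork of the target chain (e.g. an Anvil or
# Hardhat node forking a BSC RPC endpoint). URL is an assumption.
w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))

def run_isolated(script, *args):
    """Execute one generated script, then roll the fork back so the
    next task starts from an identical chain state."""
    snap = w3.provider.make_request("evm_snapshot", [])["result"]
    try:
        return script(w3, *args)
    finally:
        w3.provider.make_request("evm_revert", [snap])

Reverting in a finally block keeps tasks isolated even when a generated script raises, which matters when consecutive tasks mutate the same accounts and contracts.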
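Step-efficiency decay for composite tasks is named but not specified in the abstract; one plausible form is a multiplicative penalty per step beyond a reference count, as in this hypothetical scoring function.

def composite_score(steps_used: int, reference_steps: int,
                    passed: bool, decay: float = 0.9) -> float:
    """Illustrative step-efficiency decay: full credit at or under the
    reference step count, multiplicative penalty per extra step. The
    decay schedule EVM-QuestBench actually uses is not given here."""
    if not passed:
        return 0.0
    extra = max(0, steps_used - reference_steps)
    return decay ** extra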
Similar Papers
QuanBench: Benchmarking Quantum Code Generation with Large Language Models
Software Engineering
Tests how well computers write quantum computer code.
ArenaBencher: Automatic Benchmark Evolution via Multi-Model Competitive Evaluation
Computation and Language
Makes AI tests harder to cheat on.
VeriEquivBench: An Equivalence Score for Ground-Truth-Free Evaluation of Formally Verifiable Code
Programming Languages
Checks computer code for mistakes automatically.