OSVBench: Benchmarking LLMs on Specification Generation Tasks for Operating System Verification
By: Shangyu Li, Juyong Jiang, Tiancheng Zhao, and more
Potential Business Impact:
Tests whether AI can write the specifications needed to verify operating system code.
We introduce OSVBench, a new benchmark for evaluating Large Language Models (LLMs) on generating complete specification code for operating system kernel verification tasks. The benchmark casts specification generation as a program synthesis problem within a confined scope of syntax and semantics by providing LLMs with the programming model. The LLMs must understand the provided verification assumptions and the syntax and semantics space they may search, then generate a complete specification for a potentially buggy operating system implementation, guided by a high-level functional description of the operating system. The benchmark is built on a real-world operating system kernel, Hyperkernel, and comprises 245 complex specification generation tasks, each a long-context task of roughly 20k-30k tokens. Our comprehensive evaluation of 12 LLMs shows that current LLMs perform poorly on specification generation tasks for operating system verification, and significant disparities in their performance on the benchmark highlight differences in their ability to handle long-context code generation. The evaluation toolkit and benchmark are available at https://github.com/lishangyu-hkust/OSVBench.
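To make the task setup concrete, the sketch below shows one way such a benchmark could be driven: each task bundles a programming model, a functional description, and a possibly buggy kernel implementation; the LLM is prompted to emit a specification, which is then checked by a verifier. All field names, file layouts, and the `query_llm`/`run_verifier` helpers are illustrative assumptions, not OSVBench's actual API.

```python
# Minimal, hypothetical sketch of an OSVBench-style evaluation loop.
# Task fields, file names, and the helper functions are assumptions
# for illustration only, not the benchmark's real interface.
import json
from pathlib import Path


def query_llm(prompt: str) -> str:
    """Placeholder: call an LLM of your choice and return its raw completion."""
    raise NotImplementedError


def run_verifier(spec_code: str, impl_path: Path) -> bool:
    """Placeholder: run the kernel verifier on the generated specification."""
    raise NotImplementedError


def evaluate(task_file: Path) -> bool:
    task = json.loads(task_file.read_text())
    # Each task bundles the programming model (which confines the syntax and
    # semantics the synthesis may use), a high-level functional description,
    # and a potentially buggy kernel implementation the spec must constrain.
    prompt = "\n\n".join([
        task["programming_model"],
        task["functional_description"],
        task["kernel_implementation"],
        "Write the complete specification code for this system call.",
    ])
    spec = query_llm(prompt)
    return run_verifier(spec, Path(task["impl_path"]))


if __name__ == "__main__":
    # Assumed layout: one JSON file per task under a local "tasks/" directory.
    results = [evaluate(p) for p in sorted(Path("tasks").glob("*.json"))]
    print(f"passed {sum(results)} / {len(results)} tasks")
```

Because each prompt packs the full programming model and implementation, a single task runs to roughly 20k-30k tokens, which is what makes the benchmark a stress test of long-context code generation.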
Similar Papers
LoCoBench: A Benchmark for Long-Context Large Language Models in Complex Software Engineering
Software Engineering
Tests if AI can understand huge computer programs.
OSS-Bench: Benchmark Generator for Coding LLMs
Software Engineering
Tests AI code for bugs and safety.
QuanBench: Benchmarking Quantum Code Generation with Large Language Models
Software Engineering
Tests how well computers write quantum computer code.