Explicating Tacit Regulatory Knowledge from LLMs to Auto-Formalize Requirements for Compliance Test Case Generation
By: Zhiyi Xue, Xiaohong Chen, Min Zhang
Compliance testing in highly regulated domains is crucial but largely manual, requiring domain experts to translate complex regulations into executable test cases. While large language models (LLMs) show promise for automation, their susceptibility to hallucinations limits reliable application. Existing hybrid approaches mitigate this issue by constraining LLMs with formal models, but they still rely on costly manual modeling. To address this problem, this paper proposes RAFT, a framework for requirements auto-formalization and compliance test generation that explicates tacit regulatory knowledge from multiple LLMs. RAFT employs an Adaptive Purification-Aggregation strategy to explicate this knowledge and integrate it into three artifacts: a domain meta-model, a formal requirements representation, and testability constraints. These artifacts are then dynamically injected into prompts to guide high-precision requirement formalization and automated test generation. Experiments across the financial, automotive, and power domains show that RAFT achieves expert-level performance and substantially outperforms state-of-the-art (SOTA) methods while reducing overall generation and review time.
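The abstract describes a pipeline: elicit tacit regulatory knowledge from several LLMs, consolidate it into three artifacts, and inject those artifacts into prompts that drive test generation. Below is a minimal Python sketch of that flow under stated assumptions; the function names (`query_models`, `purify_aggregate`, `build_test_prompt`, `raft_pipeline`), the artifact fields, and the simple consensus filter are hypothetical illustrations, not the authors' implementation of RAFT or its Adaptive Purification-Aggregation strategy.

```python
# Hypothetical sketch of a RAFT-style pipeline; all names and logic are
# illustrative assumptions based only on the abstract.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class RegulatoryArtifacts:
    """The three knowledge artifacts mentioned in the abstract."""
    domain_meta_model: str = ""                                      # domain concepts and relations
    formal_requirements: List[str] = field(default_factory=list)     # formalized regulatory rules
    testability_constraints: List[str] = field(default_factory=list) # conditions that make a rule testable


def query_models(models: Dict[str, Callable[[str], str]], prompt: str) -> Dict[str, str]:
    """Ask each LLM the same knowledge-elicitation prompt."""
    return {name: ask(prompt) for name, ask in models.items()}


def purify_aggregate(answers: Dict[str, str]) -> List[str]:
    """Stand-in for purification/aggregation: keep statements proposed by more than one model."""
    counts: Dict[str, int] = {}
    for answer in answers.values():
        for line in filter(None, (ln.strip() for ln in answer.splitlines())):
            counts[line] = counts.get(line, 0) + 1
    return [stmt for stmt, n in counts.items() if n > 1]


def build_test_prompt(regulation: str, artifacts: RegulatoryArtifacts) -> str:
    """Dynamically inject the artifacts into the test-generation prompt."""
    return (
        "Domain meta-model:\n" + artifacts.domain_meta_model + "\n\n"
        "Formal requirements:\n" + "\n".join(artifacts.formal_requirements) + "\n\n"
        "Testability constraints:\n" + "\n".join(artifacts.testability_constraints) + "\n\n"
        "Generate executable compliance test cases for:\n" + regulation
    )


def raft_pipeline(regulation: str,
                  models: Dict[str, Callable[[str], str]],
                  generator: Callable[[str], str]) -> str:
    """End-to-end sketch: elicit knowledge, consolidate it, generate tests."""
    answers = query_models(models, "List the tacit compliance rules behind:\n" + regulation)
    artifacts = RegulatoryArtifacts(
        domain_meta_model="(elicited domain concepts would go here)",
        formal_requirements=purify_aggregate(answers),
        testability_constraints=["each rule must reference observable system behavior"],
    )
    return generator(build_test_prompt(regulation, artifacts))
```

The cross-model consensus filter here merely stands in for the paper's Adaptive Purification-Aggregation strategy, whose actual criteria are not described in the abstract.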
Similar Papers
Evaluating Implicit Regulatory Compliance in LLM Tool Invocation via Logic-Guided Synthesis
Computation and Language
Evaluates whether LLM tool invocations implicitly comply with regulations, using logic-guided synthesis.
Towards Autoformalization of LLM-generated Outputs for Requirement Verification
Computation and Language
Autoformalizes LLM-generated outputs so they can be checked against stated requirements.
Validating Formal Specifications with LLM-generated Test Cases
Software Engineering
Uses LLM-generated test cases to validate formal specifications.