CovertComBench: The First Domain-Specific Testbed for LLMs in Wireless Covert Communication

Published: January 26, 2026 | arXiv ID: 2601.18315v1

By: Zhaozhi Liu, Jiaxin Chen, Yuanai Xie, and more

Potential Business Impact:

Measures how well AI models can design wireless systems that hide transmissions from detection.

Business Areas:
Darknet Internet Services

The integration of Large Language Models (LLMs) into wireless networks presents significant potential for automating system design. However, unlike conventional throughput maximization, Covert Communication (CC) requires optimizing transmission utility under strict detection-theoretic constraints, such as Kullback-Leibler divergence limits. Existing benchmarks primarily focus on general reasoning or standard communication tasks and do not adequately evaluate the ability of LLMs to satisfy these rigorous security constraints. To address this limitation, we introduce CovertComBench, a unified benchmark designed to assess LLM capabilities across the CC pipeline, encompassing conceptual understanding (MCQs), optimization derivation (ODQs), and code generation (CGQs). Furthermore, we analyze the reliability of automated scoring within a detection-theoretic "LLM-as-Judge" framework. Extensive evaluations across state-of-the-art models reveal a significant performance discrepancy. While LLMs achieve high accuracy in conceptual identification (81%) and code implementation (83%), their performance on the higher-order mathematical derivations necessary for security guarantees ranges between 18% and 55%. This limitation indicates that current LLMs serve better as implementation assistants than as autonomous solvers for security-constrained optimization. These findings suggest that future research should focus on external tool augmentation to build trustworthy wireless AI systems.
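To make the detection-theoretic constraint concrete, here is a minimal sketch (not from the paper) of the classic covertness budget over an AWGN channel: the KL divergence between the warden's observations with and without transmission, accumulated over n channel uses, must stay below 2ε². The closed-form per-use divergence for zero-mean Gaussians and the bisection search below are standard textbook material, not CovertComBench code; all function names are illustrative.

```python
import math


def kl_awgn_per_use(p: float, sigma2: float) -> float:
    """Per-channel-use KL divergence D(P1 || P0) between the warden's
    observation with covert transmission, N(0, sigma2 + p), and
    without, N(0, sigma2). Closed form: 0.5*(p/sigma2 - ln(1 + p/sigma2))."""
    snr = p / sigma2
    return 0.5 * (snr - math.log1p(snr))


def max_covert_power(n: int, sigma2: float, epsilon: float,
                     hi: float = 10.0) -> float:
    """Bisect for the largest per-symbol power p such that the total
    divergence over n channel uses respects the covertness budget
        n * D(P1 || P0) <= 2 * epsilon**2,
    a common constraint that bounds the warden's detection advantage
    via Pinsker's inequality."""
    budget = 2.0 * epsilon ** 2
    lo = 0.0
    for _ in range(100):  # KL is monotone in p, so bisection converges
        mid = 0.5 * (lo + hi)
        if n * kl_awgn_per_use(mid, sigma2) <= budget:
            lo = mid
        else:
            hi = mid
    return lo


if __name__ == "__main__":
    p_max = max_covert_power(n=1000, sigma2=1.0, epsilon=0.1)
    print(f"max covert power per symbol: {p_max:.6f}")
```

For small powers the per-use divergence behaves like snr²/4, so the admissible power shrinks roughly as 1/√n — the square-root law that makes these derivations harder than throughput maximization, and the kind of constraint the benchmark's ODQs ask models to manipulate.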

Country of Origin
🇸🇬 🇭🇰 🇨🇳 Singapore, Hong Kong, China

Repos / Data Links

Page Count
6 pages

Category
Computer Science:
Networking and Internet Architecture