Large Language Models for Detecting Cyberattacks on Smart Grid Protective Relays
By: Ahmad Mohammad Saber, Saeed Jafari, Zhengmao Ouyang, and more
Potential Business Impact:
Detects fake relay signals to prevent false tripping of critical power transformers.
This paper presents a large language model (LLM)-based framework for detecting cyberattacks on transformer current differential relays (TCDRs), which, if undetected, may trigger false tripping of critical transformers. The proposed approach adapts and fine-tunes compact LLMs such as DistilBERT to distinguish cyberattacks from actual faults using textualized multidimensional TCDR current measurements recorded before and after tripping. Our results demonstrate that DistilBERT detects 97.6% of cyberattacks without compromising TCDR dependability and achieves inference latency below 6 ms on a commercial workstation. Additional evaluations confirm the framework's robustness under combined time-synchronization and false-data-injection attacks, resilience to measurement noise, and stability across prompt formulation variants. Furthermore, GPT-2 and DistilBERT+LoRA achieve comparable performance, highlighting the potential of LLMs for enhancing smart grid cybersecurity. We provide the full dataset used in this study for reproducibility.
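As a rough illustration of the pipeline described in the abstract, the sketch below (not the authors' code) shows how pre- and post-trip TCDR current measurements might be textualized and scored with a compact DistilBERT classifier using the Hugging Face transformers library. The field names, prompt wording, example values, and label mapping are assumptions; in practice the model would first be fine-tuned on the released dataset.

```python
# Minimal sketch, assuming a simple "currents rendered as text" prompt format.
# All record fields and values below are hypothetical, not from the paper's dataset.
import torch
from transformers import DistilBertTokenizerFast, DistilBertForSequenceClassification

def textualize(sample):
    # Assumed textualization: list pre-trip and post-trip differential currents as a sentence.
    pre = ", ".join(f"{x:.3f}" for x in sample["pre_trip_currents"])
    post = ", ".join(f"{x:.3f}" for x in sample["post_trip_currents"])
    return f"Pre-trip currents: {pre}. Post-trip currents: {post}."

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
model = DistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=2,  # assumed labels: 0 = actual fault, 1 = cyberattack
)
model.eval()

# Hypothetical example record; a real one would come from the study's dataset.
sample = {
    "pre_trip_currents": [0.12, 0.11, 0.13],
    "post_trip_currents": [4.80, 5.10, 4.90],
}

inputs = tokenizer(textualize(sample), return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
prediction = logits.argmax(dim=-1).item()
print("cyberattack" if prediction == 1 else "fault")
```

With an untuned checkpoint the prediction is meaningless; the point is only to show the textualize-then-classify flow. Fine-tuning (full or LoRA-adapted, as the paper compares) would be done on labeled fault/attack records before such inference.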
Similar Papers
Large Language Model-Based Framework for Explainable Cyberattack Detection in Automatic Generation Control Systems
Cryptography and Security
Explains cyberattacks so grid operators understand.
Risk Assessment and Security Analysis of Large Language Models
Cryptography and Security
Identifies and reduces security risks in large language models.