Towards Secure and Private Language Models for Nuclear Power Plants
By: Muhammad Anwar, Mishca de Costa, Issam Hammad, and more
Potential Business Impact:
Teaches computers nuclear power words for safety.
This paper introduces a domain-specific Large Language Model for nuclear applications, built from the publicly accessible Essential CANDU textbook. Drawing on a compact Transformer-based architecture, the model is trained on a single GPU to protect the sensitive data inherent in nuclear operations. Despite relying on a relatively small dataset, it shows encouraging signs of capturing specialized nuclear vocabulary, though the generated text sometimes lacks syntactic coherence. By focusing exclusively on nuclear content, this approach demonstrates the feasibility of in-house LLM solutions that align with rigorous cybersecurity and data confidentiality standards. Early successes in text generation underscore the model's utility for specialized tasks, while also revealing the need for richer corpora, more sophisticated preprocessing, and instruction fine-tuning to enhance domain accuracy. Future directions include extending the dataset to cover diverse nuclear subtopics, refining tokenization to reduce noise, and systematically evaluating the model's readiness for real-world applications in the nuclear domain.
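To illustrate the kind of domain-specific tokenization the abstract mentions refining, here is a minimal, hypothetical sketch in pure Python: a word-level vocabulary built from a tiny sample of CANDU-style text, with a frequency floor that maps rare tokens to `<unk>` to reduce noise. The sample corpus, function names, and the `min_freq` parameter are illustrative assumptions, not the paper's actual pipeline.

```python
from collections import Counter

def build_vocab(corpus, min_freq=1):
    # Count whitespace-split, lowercased tokens across the corpus.
    # Tokens below min_freq are excluded and will fall back to <unk>,
    # one simple way to filter extraction noise from a small dataset.
    counts = Counter(tok for line in corpus for tok in line.lower().split())
    vocab = {"<unk>": 0}
    for tok, freq in sorted(counts.items()):
        if freq >= min_freq:
            vocab[tok] = len(vocab)
    return vocab

def encode(text, vocab):
    # Map each token to its vocabulary id; unseen tokens become <unk> (id 0).
    return [vocab.get(tok, vocab["<unk>"]) for tok in text.lower().split()]

# Hypothetical two-sentence "corpus" standing in for textbook content.
corpus = [
    "the CANDU reactor uses heavy water as moderator",
    "heavy water moderates neutrons in the reactor core",
]
vocab = build_vocab(corpus)
ids = encode("the reactor uses heavy water", vocab)
```

In a real pipeline this word-level scheme would typically be replaced by a trained subword tokenizer, but the same principle applies: the vocabulary is induced only from the in-house corpus, so no external data needs to leave the secure environment.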
Similar Papers
Unlocking the Potential of Large Language Models in the Nuclear Industry with Synthetic Data
Computation and Language
Makes nuclear information usable for smart computer programs.
Mechanistic Interpretability of LoRA-Adapted Language Models for Nuclear Reactor Safety Applications
Machine Learning (CS)
Shows how smart computers learn nuclear power secrets.
A Survey on Data Security in Large Language Models
Cryptography and Security
Protects smart computer programs from bad data.