Digital Sovereignty Control Framework for Military AI-based Cyber Security
By: Clara Maathuis, Kasper Cools
Potential Business Impact:
Keeps military computer systems and data safe from hackers.
In today's evolving threat landscape, ensuring digital sovereignty has become mandatory for military organizations, especially given their increased development of and investment in AI-driven cyber security solutions. To this end, this article proposes a multi-angled framework for defining and assessing digital sovereign control of data and AI-based models for military cyber security. The framework focuses on aspects such as context, autonomy, stakeholder involvement, and risk mitigation in this domain. Grounded in the concepts of digital sovereignty and data sovereignty, it aims to protect sensitive defence assets against threats such as unauthorized access, ransomware, and supply-chain attacks. This approach reflects the multifaceted nature of digital sovereignty by preserving operational autonomy, ensuring security and safety, protecting privacy, and fostering ethical compliance of both military systems and decision-makers. At the same time, the framework addresses interoperability challenges among allied forces, strategic and legal considerations, and the integration of emerging technologies through a multidisciplinary approach that enhances resilience and preserves control over (critical) digital assets. This is achieved by adopting a design-oriented research approach in which a systematic literature review is combined with critical thinking and the analysis of field incidents to ensure the effectiveness and realism of the proposed framework.
Similar Papers
A Framework for the Assurance of AI-Enabled Systems
Artificial Intelligence
Makes military AI safe and trustworthy for use.
Sovereign AI: Rethinking Autonomy in the Age of Global Interdependence
Computers and Society
Helps countries control AI without cutting it off.
Cisco Integrated AI Security and Safety Framework Report
Cryptography and Security
Protects AI from being tricked or broken.