Resilient Biosecurity in the Era of AI-Enabled Bioweapons
By: Jonathan Feldman, Tal Feldman
Potential Business Impact:
AI can't reliably spot dangerous new proteins.
Recent advances in generative biology have enabled the design of novel proteins, creating significant opportunities for drug discovery while also introducing new risks, including the potential development of synthetic bioweapons. Existing biosafety measures rely primarily on inference-time filters such as sequence alignment and protein-protein interaction (PPI) prediction to detect dangerous outputs. In this study, we evaluate the performance of three leading PPI prediction tools: AlphaFold 3, AF3Complex, and SpatialPPIv2. These models were tested on well-characterized viral-host interactions, such as those involving Hepatitis B and SARS-CoV-2. Despite being trained on many of the same viruses, the models fail to detect a substantial number of known interactions. Strikingly, none of the tools successfully identifies any of the four experimentally validated SARS-CoV-2 mutants with confirmed binding. These findings suggest that current predictive filters are inadequate for reliably flagging even known biological threats and are even less likely to detect novel ones. We argue for a shift toward response-oriented infrastructure, including rapid experimental validation, adaptable biomanufacturing, and regulatory frameworks capable of operating at the speed of AI-driven developments.
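The abstract describes evaluating confidence-based PPI filters against experimentally validated interactions. As a rough illustration of that kind of evaluation, the minimal sketch below measures a filter's recall on known virus-host pairs. The file layout (known_pairs.tsv, *_summary.json), the JSON fields (iptm, viral_protein, host_protein), and the 0.6 confidence cutoff are illustrative assumptions, not details taken from the paper or from any specific tool's API.

```python
"""
Minimal sketch (not the authors' code): estimating how many known
virus-host interactions a confidence-based PPI filter recovers.

Assumptions (hypothetical, not from the paper):
  - each predicted complex has a JSON summary containing an "iptm" score;
  - a pair is "flagged" as interacting when iptm >= THRESHOLD;
  - known_pairs.tsv lists validated pairs, one
    "viral_protein<TAB>host_protein" per line.
"""
import json
from pathlib import Path

THRESHOLD = 0.6  # assumed confidence cutoff; would be tuned per model


def load_known_pairs(tsv_path: str) -> set[tuple[str, str]]:
    """Read experimentally validated interaction pairs from a TSV file."""
    pairs = set()
    for line in Path(tsv_path).read_text().splitlines():
        if line.strip():
            viral, host = line.split("\t")
            pairs.add((viral.strip(), host.strip()))
    return pairs


def flagged_pairs(pred_dir: str) -> set[tuple[str, str]]:
    """Collect pairs whose predicted complex clears the confidence cutoff."""
    flagged = set()
    for summary in Path(pred_dir).glob("*_summary.json"):
        data = json.loads(summary.read_text())
        if data.get("iptm", 0.0) >= THRESHOLD:
            flagged.add((data["viral_protein"], data["host_protein"]))
    return flagged


def recall(known: set, flagged: set) -> float:
    """Fraction of known interactions the filter actually detects."""
    return len(known & flagged) / len(known) if known else 0.0


if __name__ == "__main__":
    known = load_known_pairs("known_pairs.tsv")
    detected = flagged_pairs("predictions/")
    print(f"Detected {len(known & detected)}/{len(known)} known interactions "
          f"(recall = {recall(known, detected):.2f})")
```

Recall on already-validated binders is the relevant yardstick here: a filter that misses confirmed interactions cannot be expected to flag novel, AI-designed ones.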
Similar Papers
Contemporary AI foundation models increase biological weapons risk (Computers and Society). AI can help make dangerous germs.
Generative AI for Biosciences: Emerging Threats and Roadmap to Biosecurity (Cryptography and Security). Protects biology AI from being used for harm.
AI-based Methods for Simulating, Sampling, and Predicting Protein Ensembles (Biomolecules). AI predicts how proteins move and change shape.