Mar 28, 2025
Artificial intelligence (AI) has permeated the international system in a way that encourages research, collaboration, and learning across all levels of inquiry. The use of AI, built upon the power of Large Language Models, has the capability of lowering the knowledge threshold necessary for scientific discovery and research in biological areas of study. However, this dual-use technology can do as much harm as good in the hands of hostile actors. Soon, technological advances in AI will have the capability of accelerating the conceptualization, planning, and implementation of biological threats by lowering the knowledge threshold for the development and production of biological weapons by malign actors, requiring more robust regulation and oversight to mitigate the rising threat.
Today, the Global Terrorism Database records that over the past 50 years, only 36 attacks (out of 209,706) employed a biological weapon. Despite significant technological advances, biological weapons remain difficult to produce due to the complex steps of virus procurement, mass production of agents without loss of pathogenicity, and efficient delivery. Yet there is growing concern that the power of AI can bypass these difficulties. LLMs have proven useful in providing assistance and troubleshooting at various stages of production for traditional agents, such as anthrax, botulinum toxin, and plague. While current LLMs are unlikely to be of much use to subject-matter experts, advances in these technologies could prove critical to individuals and organizations with enough resources to develop and produce biological weapons. Once deliverable, these weapons could cause mass injuries and casualties, making them appealing to terrorist organizations and non-state actors.
At present, subject-matter expertise is needed across all stages of designing, developing, and producing a biological weapon. However, reports from the RAND Corporation and the Center for a New American Security find that LLMs can compensate for these educational and knowledge barriers by troubleshooting where previous testing has gone awry and accelerating the design-build-test feedback loop. While AI cannot solve every problem of building a biological weapon, such as procuring the agent and physically constructing the delivery system, it can assist with steps that have historically been among the most challenging parts of the cycle. As LLMs continue to develop, AI will only become more competent at troubleshooting problems and improving the feedback loop.
While it may not be easy for rogue actors to develop biological weapons, should they succeed in producing a viable weapon, the impact on society would be enormous: a low-probability, high-impact event otherwise known as a black swan. This is in part because there is no reliable way to detect biological weapons before an attack, leaving first responders, law enforcement, and intelligence agencies to be reactive rather than proactive. While COVID-19 was not a biological weapon, the pandemic highlighted the devastating effects a global biological event can have on the economy and security of a country. The weaponization of viruses and illnesses dates back to the Middle Ages; in the modern era, the United States must build resilience against biological weapons, whether they take the form of weaponized pandemics or targeted biological attacks.
As President Trump establishes priorities for his administration and begins to strengthen connections with America’s tech elite, it is important to consider the dangerous implications that AI can have for biological weapons. While current LLMs do not lend themselves as viable tools for streamlining the development and production of biological weapons, models are increasing in capability and scope every year. Because biological weapons are not something militaries can counter with conventional weapons, mitigating the threat requires nonconventional approaches. These could include a whole-of-society approach focused on community-led early warning systems, local adaptations, and resiliency. At the federal level, this could also include stronger regulations on the sale and transfer of biological agents and toxins. Furthermore, academic scientific laboratories and cloud labs should be closely monitored for suspicious activity. By regulating biological agents and toxins, and by denying malign actors safe labs in which to experiment, the United States would be better positioned to act proactively against the threat of biological weapons.
Advancements in artificial intelligence (AI) have the potential to become a key tool for malign actors seriously pursuing the development and production of biological weapons. Careful regulation and oversight of this technology, as well as of biological agents and equipment, are necessary to deny malign actors the capability of creating biological black swan events that cause mass injuries and casualties. Underscoring these fears is the fact that detecting a biological weapon is nearly impossible until after an attack has commenced, leaving responders reactive rather than proactive. As AI continues to develop and new models are deployed across various platforms, policymakers need to be mindful of these implications and identify ways to be proactive where possible.
Meredith Hutchens is currently pursuing her master’s in International Security at the Schar School of Policy and Government at George Mason University. She is also a network member at the Initiative for the Study of a Stable Peace. Her research interests include counterterrorism, intelligence, international relations, and conflict resolution with the goal of addressing today’s pressing security issues informed by economic analysis. Hutchens earned her B.S. in Economics and Mathematics with a concentration in Statistics from the College of Charleston where she was a Market Process Scholar at the Center for Public Choice and Market Process.