policy report
Published by Convergence Analysis, this series is designed to be a primer for policymakers, researchers, and individuals seeking to develop a high-level overview of the current state of AI regulation.
AI and Chemical, Biological, Radiological, & Nuclear Hazards
What are CBRN hazards? How could they be affected by AI models?
Humanity has developed technologies capable of mass destruction, and we must be especially cautious about how AI interacts with them. These technologies and their associated risks commonly fall into four main categories, collectively known as CBRN: chemical, biological, radiological, and nuclear.
In this section, we’ll briefly contextualize current and upcoming examples of each of these types of hazards in the context of AI technologies.
What are potential chemical hazards arising from the increase in AI capabilities?
A prominent concern among experts is the potential for AI to lower the barrier to entry for non-experts to generate CBRN harms. That is, AI could make it easier for malicious or naive actors to build dangerous weapons, such as chemical agents with deadly properties.
For example, pharmaceutical researchers use machine learning models to identify new therapeutic drugs. In one such study, a deep learning model was trained on ~2,500 molecules labeled with their antibiotic activity; when shown chemicals outside that training set, the model could predict whether they would function as antibiotics.
However, training a model to generate novel safe and harmless medications is very close to, if not equivalent to, training a model to generate chemical weapons. This is an example of the Waluigi Effect: the underlying model is simply learning to predict toxicity, and that capability can be used either to rule out harmful chemicals or to generate a list of them, ranked by deadliness. This was demonstrated in a study involving the Swiss Federal Institute for Nuclear, Biological, and Chemical Protection (see here for a non-paywalled summary). When the researchers instructed the same kind of model to generate harmful molecules rather than avoid them, it produced a list of 40,000 such molecules in under 6 hours. These included deadly nerve agents such as VX, as well as previously undiscovered molecules that it ranked as more deadly than VX. To quote the researchers:
As AI models become more deeply integrated into the development of chemicals for industrial and medical purposes, it will become increasingly easy for malicious parties to repurpose these models for dangerous ends.
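The dual-use inversion described above can be illustrated with a toy sketch. Everything here is hypothetical: the molecule IDs, scores, and functions are stand-ins for a trained toxicity model, not the actual system from the study. The point is simply that one learned score serves opposite purposes depending on how it is sorted:

```python
# Toy illustration of dual-use model inversion (hypothetical data).
# A single learned toxicity score can screen OUT dangerous molecules
# for drug development, or surface the deadliest candidates first.

# Stand-in for a trained toxicity model: molecule ID -> predicted
# toxicity score in [0, 1] (higher = more toxic).
PREDICTED_TOXICITY = {
    "mol_A": 0.02,
    "mol_B": 0.91,
    "mol_C": 0.15,
    "mol_D": 0.88,
}

def screen_safe(candidates, threshold=0.5):
    """Benign use: filter out predicted-toxic molecules."""
    return [m for m in candidates if PREDICTED_TOXICITY[m] < threshold]

def rank_most_toxic(candidates):
    """Misuse: the SAME scores, sorted to put the deadliest first."""
    return sorted(candidates, key=PREDICTED_TOXICITY.get, reverse=True)

mols = list(PREDICTED_TOXICITY)
print(screen_safe(mols))      # shortlist for safe drug development
print(rank_most_toxic(mols))  # weapons-relevant ranking, same model
```

No retraining is needed to move between the two uses; only the direction of the sort changes, which is why dual-use concerns attach to the model itself rather than to any particular deployment.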
What are biological hazards arising from the increase in AI capabilities?
Recent papers have shown that large language models (LLMs) may lower barriers to biological misuse by enabling the weaponization of biological agents. In particular, this may occur through the increasing use of LLMs as biological design tools (BDTs), such as multimodal lab assistants and autonomous science tools. BDTs make laboratory work easier and faster, supporting the work of non-experts and expanding the capabilities of sophisticated actors. Such abilities may produce “pandemic pathogens substantially more devastating than anything seen to date and could enable forms of more predictable and targeted biological weapons”. Further, the risks posed by LLMs and by custom AI trained for biological research can exacerbate each other: together, they increase the amount of harm an individual can do while making those tools accessible to a larger pool of individuals.
It’s important to note that these risks remain unlikely with today’s cutting-edge LLMs, though this may not hold true for much longer. Two recent studies, from RAND and OpenAI, found that current LLMs provide no meaningful uplift over standard internet searches for developing biological and chemical weapons.
Another leading biological hazard of concern is synthetic biology – the genetic modification of individual cells or organisms, as well as the manufacture of synthetic DNA or RNA strands called synthetic nucleic acids.
This field poses a particularly urgent risk because existing infrastructure could, in theory, be used by malicious actors to produce an extremely deadly pathogen. Researchers can order custom DNA or RNA to be synthesized and mailed to them, a crucial step toward turning a theoretical pandemic-level design into an infectious reality. Mandatory screening of ordered material, to ensure it cannot be used to cause harm, is urgently needed.
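The kind of order screening described above can be sketched in miniature. The hazard list and sequences below are entirely hypothetical, and real screening pipelines use far more sophisticated sequence comparison than exact substring matching; this only illustrates the basic shape of the check a synthesis provider could run on incoming orders:

```python
# Toy sketch of DNA-order screening (hypothetical hazard list).
# Real pipelines use sequence alignment and curated databases of
# sequences of concern; this simply checks for exact substrings.

HAZARD_FRAGMENTS = [
    "ATGCGTACGTTAGC",   # hypothetical fragment from a pathogen of concern
    "GGCCTTAAGGCCTA",   # another hypothetical flagged fragment
]

def flag_order(sequence: str) -> bool:
    """Return True if the ordered sequence contains a flagged fragment."""
    seq = sequence.upper()
    return any(frag in seq for frag in HAZARD_FRAGMENTS)

print(flag_order("ttttATGCGTACGTTAGCtttt"))  # contains a flagged fragment
print(flag_order("ATATATATATATAT"))          # no match, order passes
```

Even this trivial check only works if every provider runs it on every order, which is why the policy debate centers on making screening mandatory rather than on its technical difficulty.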
Some researchers are developing tools specifically to measure and reduce the capacity of AI models to lower barriers to entry for CBRN weapons and hazards, with a particular focus on biohazards. For example, OpenAI is developing “an early warning system for LLM-aided biological threat creation”, and a recent collaboration between several leading research organizations produced a practical policy proposal titled Towards Responsible Governance of Biological Design Tools. The Centre for AI Safety has also released the “Weapons of Mass Destruction Proxy”, which measures the extent to which particular LLMs could lower the barrier to entry for CBRN hazards more broadly. Tools and proposals such as these, developed with expert knowledge of CBRN hazards and AI engineering, are likely to be a crucial complement to legislative efforts.
For more context on these potential pandemic-level biological hazards, you can read:
What are radiological and nuclear hazards arising from the increase in AI capabilities?
A prominent, potentially existential concern among many AI safety researchers is the risk of integrating AI technologies into the chain of command of nuclear weapons or nuclear power plants. As one example, it has been proposed that AI could be used to monitor and maintain the operation of nuclear power plants.
Elsewhere, The Atlantic cites the Soviet Union’s Dead Hand as evidence that militaries could be tempted to use AI in the nuclear chain-of-command. Dead Hand is a system developed in 1985 that, if activated, would automatically launch a nuclear strike against the US if a command-and-control center stopped receiving communications from the Kremlin and detected radiation in Moscow’s atmosphere (a system which may still be operational).
Because the reasoning of AI models is still poorly understood and their decision-making can be unpredictable, such integration may well lead to unexpected and dangerous failure modes; for nuclear technologies, the worst-case outcomes are catastrophic. As a result, many researchers argue that the risk of loss of control means we shouldn’t permit the use of AI anywhere near nuclear technologies, such as in decisions regarding nuclear launch or in the storage and maintenance of nuclear weapons.
In proposed legislation, some policymakers have pushed to ban AI from nuclear arms development, such as a proposed pact from a UK MP and Senator Mitt Romney’s recent letter to the Senate AI working group. Romney’s letter proposes a framework to mitigate extreme risks by requiring powerful AIs to be licensed if they’re intended for chemical or biological engineering or for nuclear development. However, no binding legislation has been passed. There have also been reports that the US and China are discussing limits on the use of AI in areas including nuclear weapons.
Current regulatory landscape
The US
The Executive Order on AI has several sections on CBRN hazards: several department secretaries are required to produce plans, reports, and proposals analyzing CBRN risks and how to mitigate them, and Section 4.4 focuses specifically on analyzing and reducing the risks of biological weapons in the short term. The following points are all part of Section 4.4, which is devoted to Reducing Risks at the Intersection of AI and CBRN Threats, with a particular focus on biological weapons:
China
China’s three most important AI regulations do not contain any specific provisions for CBRN hazards.
The EU
The EU AI Act does not contain any specific provisions for CBRN hazards, though recital (60m), on the category of “general purpose AI that could pose systemic risks”, includes the following mention of CBRN: “international approaches have so far identified the need to devote attention to risks from [...] chemical, biological, radiological, and nuclear risks, such as the ways in which barriers to entry can be lowered, including for weapons development, design acquisition, or use”.