Could terrorists or other bad actors use artificial intelligence to create a deadly pandemic? Last year, scientists at Harvard and the Massachusetts Institute of Technology conducted an experiment to find out.

Researchers asked a group of students, none of whom had specialized training in the life sciences, to use AI tools such as OpenAI’s GPT-4 to develop a plan for how to start a pandemic. In just an hour, participants learned how to procure and synthesize deadly pathogens like smallpox in ways that could evade existing biosecurity systems.

AI cannot yet manufacture a national security crisis. As Jason Matheny of the RAND Corporation notes, while biological know-how is becoming more widely accessible through AI, it’s not yet at a level that would substitute for formal training in biological research. But as biotechnology becomes more advanced and more accessible -- think of Google DeepMind’s AlphaFold, which uses AI to predict the structures of proteins and how molecules interact -- policymakers are understandably worried that it will become increasingly easy to create a bioweapon. So they’re starting to take action to regulate the emerging AI industry.

Their efforts are well-intentioned. But it’s critical that policymakers avoid focusing so narrowly on catastrophic risk that they inadvertently hamstring the development of the beneficial AI tools we’ll need to tackle future crises. We should aim to strike a balance.

AI tools have enormous positive potential. For instance, AI technologies like AlphaFold and RFdiffusion have already made large strides in designing novel proteins that could be used for medical purposes. The same sorts of technologies can also be used for evil, of course.

In a study published last year in the journal Nature Machine Intelligence, researchers demonstrated how MegaSyn, an AI drug-discovery tool, could generate 40,000 potential chemical warfare agents in just six hours. Researchers asked the AI to identify molecules similar to VX, a highly lethal nerve agent. In some cases, MegaSyn devised compounds predicted to be even more toxic.

It’s possible that bad actors could one day use such tools to engineer new pathogens far more contagious and deadly than any occurring in nature. Once a potential bioweapon is identified -- maybe with the help of AI -- a malicious actor could order a custom strand of DNA from a commercial provider, which would manufacture the synthetic DNA in a lab and return it by mail. As experts at Georgetown University’s Center for Security and Emerging Technology have posited, perhaps that strand of genetic material “codes for a toxin or a gene that makes a pathogen more dangerous.”

It’s even possible that a terrorist could evade detection by ordering small pieces of a dangerous genetic sequence and then assembling a bioweapon from the component parts. Scientists frequently order synthesized DNA for legitimate projects like cancer and infectious disease research. But not all synthetic DNA providers screen orders or verify their customers.
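
To make that loophole concrete, here is a minimal sketch in Python -- with an invented watchlist entry and a naive exact-match check, not real screening software -- of how order screening can work, and how splitting an order into short fragments slips past it.

```python
# Minimal sketch of synthesis-order screening. The watchlist entry below is an
# invented placeholder, not a real pathogen sequence, and real providers use
# curated databases and alignment tools rather than exact k-mer matching.

WATCHLIST = {
    "hypothetical_toxin_gene": "ATGGCTAAGGCTTTGCGGATCCAGTACGACGGTACCAAGCTTGAG",
}

WINDOW = 20  # flag any 20-base exact overlap with a watchlist entry


def screen_order(order_seq: str) -> list[str]:
    """Return the names of watchlist entries that the ordered sequence overlaps."""
    order_seq = order_seq.upper()
    hits = []
    for name, concern_seq in WATCHLIST.items():
        for i in range(len(concern_seq) - WINDOW + 1):
            if concern_seq[i:i + WINDOW] in order_seq:
                hits.append(name)
                break
    return hits


if __name__ == "__main__":
    full_order = WATCHLIST["hypothetical_toxin_gene"]
    print(screen_order(full_order))  # ['hypothetical_toxin_gene'] -- flagged

    # The loophole: split the same sequence into fragments shorter than the
    # matching window and order each piece separately. Every piece passes.
    pieces = [full_order[i:i + 15] for i in range(0, len(full_order), 15)]
    print([screen_order(p) for p in pieces])  # [[], [], []] -- nothing flagged
```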

Closing such loopholes will help, but we can’t regulate away all of the risk. It’d be wiser to beef up our defenses by investing in AI-enabled early-detection systems.

Today, the Centers for Disease Control and Prevention’s Traveler-based Genomic Surveillance program partners with airports nationwide to gather and analyze wastewater and nasal swab samples, catching pathogens as they cross our borders. Other systems are in place for tracking particular pathogens within cities and communities. But existing detection systems are likely not equipped to detect novel agents designed with AI’s help.

The U.S. intelligence community is already investing in AI-powered capabilities to defend against next-generation threats. IARPA’s FELIX program, in partnership with private biotech firms, yielded first-of-its-kind AI tools that can distinguish genetically engineered threats from naturally occurring ones and identify what has been changed and how. Similar technology could be used for DNA synthesis screening: with AI, we could employ algorithms that predict how novel combinations of genetic sequences might function.
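
The details of FELIX and commercial screening tools are not public, and the sketch below is not how they work. It is only a toy illustration, on randomly generated data with an invented "engineering signature," of the general idea: train a classifier to flag sequences carrying an inserted foreign motif.

```python
# Toy illustration only: NOT how FELIX or any deployed screening system works.
# It trains a classifier on simple k-mer count features to separate background
# sequences from ones carrying an inserted foreign motif. All data is random;
# the "foreign motif" is an invented stand-in for an engineering signature.
import random
from collections import Counter
from itertools import product

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

random.seed(0)
BASES = "ACGT"
KMERS = ["".join(p) for p in product(BASES, repeat=3)]  # 64 possible 3-mers
FOREIGN_MOTIF = "GGATCC" * 4                            # hypothetical inserted cassette


def random_seq(n=200):
    return "".join(random.choice(BASES) for _ in range(n))


def featurize(seq):
    """Count every overlapping 3-mer in the sequence."""
    counts = Counter(seq[i:i + 3] for i in range(len(seq) - 2))
    return [counts[k] for k in KMERS]


# Build a toy dataset: label 1 = a random sequence with the motif spliced in.
natural = [random_seq() for _ in range(300)]
engineered = []
for _ in range(300):
    s = random_seq()
    pos = random.randrange(len(s) - len(FOREIGN_MOTIF))
    engineered.append(s[:pos] + FOREIGN_MOTIF + s[pos + len(FOREIGN_MOTIF):])

X = [featurize(s) for s in natural + engineered]
y = [0] * len(natural) + [1] * len(engineered)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy on toy data: {model.score(X_test, y_test):.2f}")
```

A real system would work from curated genomic databases and far richer models, but the division of labor is the same: learn what natural sequence context looks like, then flag departures from it.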

We have barely begun to tap the potential of AI to detect and protect against biological threats. In the case of a novel infectious disease, these systems have the power to determine how and when a pathogen has mutated. That can enable the speedy development of vaccines and treatments specifically tailored to new variants. AI can also help predict how a pathogen is likely to spread.

For these technologies to play their vital role, leaders in Washington and around the world must take steps to build up our AI defenses. The best way to counter “bad AI” isn’t “no AI” -- it’s “good AI.”

Using AI to its full potential to protect against deadly pandemics and biological warfare demands an aggressive policy effort. It’s time for policymakers to adapt. With adequate foresight and resources, we can get ahead of this new class of threats.

Andrew Makridis is the former Chief Operating Officer of the CIA, the number-three position at the agency. Prior to his retirement from the CIA in 2022, he spent nearly four decades working in national security.
