
Weekend Reads: Discovering New Drugs With AI


by Kevin Schofield

This weekend's read is an essay on an often-overlooked aspect of the tools that pharmaceutical researchers use to discover potential new drugs.

Artificial intelligence, or AI, is being incorporated into more and more business processes. You can roughly think of AI, at least in its current form, as software that is able to take an existing body of data on a particular topic, learn from it, and then extrapolate to new scenarios. Chess-playing computers are trained using past games. Computer-vision systems are fed photos that are tagged with the items in the photos, and the systems learn to recognize other examples of those items.

In the drug-discovery world, AI systems train on examples of molecules found to have therapeutic value, as well as on ones known to be toxic. The tools then extrapolate new candidate molecules predicted to share the properties of the therapeutic examples while avoiding those of the toxic ones. Those candidate drugs can then be further examined, synthesized, and tested.
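To make the idea concrete, here is a minimal sketch in Python of that kind of scoring objective. Everything in it is illustrative: the stand-in predictor functions and toy molecule strings are assumptions made for this example, not the researchers' actual software, and real tools use machine-learned models to score each candidate.

```python
# Minimal sketch of the objective behind generative drug-discovery tools.
# The two "predicted_*" functions are toy stand-ins (assumptions for
# illustration only); real systems use models trained on therapeutic
# and toxicity data to score each candidate molecule.

def predicted_therapeutic_value(molecule: str) -> float:
    # Stand-in for a model trained on molecules with known therapeutic value.
    return (hash(("therapeutic", molecule)) % 1000) / 1000

def predicted_toxicity(molecule: str) -> float:
    # Stand-in for a model trained on molecules known to be toxic.
    return (hash(("toxicity", molecule)) % 1000) / 1000

def score(molecule: str, toxicity_weight: float = 1.0) -> float:
    """Reward predicted therapeutic value; penalize predicted toxicity."""
    return (predicted_therapeutic_value(molecule)
            - toxicity_weight * predicted_toxicity(molecule))

# A generator proposes candidate molecules (toy SMILES strings here);
# the pipeline keeps the best-scoring ones for further examination.
candidates = ["CCO", "CC(=O)O", "c1ccccc1"]
ranked = sorted(candidates, key=score, reverse=True)
print(ranked)
```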

A group of researchers at Collaborations Pharmaceuticals in North Carolina was asked by the Swiss Federal Institute for Nuclear, Biological, and Chemical Protection to present at the institute's biennial conference on "how AI technologies for drug discovery could potentially be misused." Misuse was not a topic they had spent much time considering, but in preparing the presentation they took their own "molecule generator" AI software and reversed its parameters: They asked it to optimize for toxicity instead of therapeutic value.
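Conceptually, that reversal can be as small as flipping the sign of one term in the scoring function. Continuing the illustrative sketch above (the researchers have not published the exact change they made, so this is an assumption about what such a reversal could look like):

```python
# Continuing the sketch above: the same machinery, with the toxicity
# penalty flipped into a reward. (Illustrative only; not the actual
# parameter change the researchers made.)

def reversed_score(molecule: str) -> float:
    # A negative weight makes predicted toxicity raise the score
    # instead of lowering it, so the most toxic candidates rank first.
    return score(molecule, toxicity_weight=-1.0)

most_toxic_first = sorted(candidates, key=reversed_score, reverse=True)
print(most_toxic_first)
```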

In less than six hours, the software had designed 40,000 potentially toxic molecules. Among those candidates were the nerve agent VX, one of the most toxic chemical warfare agents of the 20th century, and several other known chemical warfare agents. It generated them out of thin air; none of those agents were in the dataset used to train the AI.

"Without being overly alarmist, this should serve as a wake-up call for our colleagues in the 'AI in drug discovery' community," the researchers wrote. Their molecule generator was based on readily available open-source tools in wide use in the pharma community; while these researchers claim they destroyed the results of their thought experiment, it would be easy for bad actors to use the same tools and public toxicity datasets to repeat their experiment and design their own novel, toxic chemicals.

As the researchers point out, much of our public debate on the potential misuse of AI relates to privacy, discrimination, and safety, but not to national and international security issues such as developing new chemical and biological weapons. But the hard part here is that the tools are the same, regardless of whether they are being used to generate new life-saving drugs or new chemical warfare agents. How do you police the use of these tools so that you have one without the other?

In many ways, though, this is a constantly revisited theme for technology advances. Microsoft Office and Google Docs can be used to write both hate-filled screeds and hate-crime legislation. Accounting software powers the scrappy start-up down the street as well as crime syndicates. AI-powered computer vision software can search video for wanted criminals and can surveil innocent citizens. 3D printers can make useful tools, and also unregistered, untraceable, often undetectable weapons.

The most common defense we hear from the tech industry is that technology is "amoral": the technology itself is neither good nor evil, and what we need to police are its uses and the people who put it to those uses. That is easier said than done, however; in most cases where the technology is freely available, we don't discover misuses until after the damage is done.

The researchers suggest a few ideas for how to curtail some of the worst potential outcomes of the dual-use nature of AI-based tools. One is to step up the development of ethical guidelines for these emerging areas of concern, followed by enhanced training for professionals on where the ethical boundaries lie. Another is to put the AI software and its accompanying data models "in the cloud," where access can be regulated, rather than allowing them to be downloaded to private machines where unknown actors can use them for malevolent purposes.

But in truth, as the researchers put it, "the genie is out of the medicine bottle." The tools that are already out there are enough to help someone with evil intent cause great harm. Add this to the list of challenges in a technologically advancing society: how to empower people to do good while preventing them from doing evil.

Kevin Schofield is a freelance writer and publishes Seattle Paper Trail. Previously he worked for Microsoft, published Seattle City Council Insight, co-hosted the "Seattle News, Views and Brews" podcast, and raised two daughters as a single dad. He serves on the Board of Directors of Woodland Park Zoo, where he also volunteers.

Featured Image: PopTika/Shutterstock.com
