The cutting-edge number-crunching capabilities of artificial intelligence mean that AI systems are able to spot diseases early, manage chemical reactions, and explain some of the mysteries of the Universe.

But there's a downside to this incredible and virtually limitless artificial brainpower.

New research highlights how easily AI models can be trained for malicious purposes as well as good ones, in this case to design hypothetical bioweapon agents. A trial run with an existing AI system generated 40,000 such compounds in the space of only six hours.

In other words, the same power that lets AI spot chemical combinations and drug compounds to improve our health, far faster than any human could, can also be turned to dreaming up potentially dangerous and deadly substances.

"We have spent decades using computers and AI to improve human health – not to degrade it," the researchers write in a new commentary.

"We were naive in thinking about the potential misuse of our trade, as our aim had always been to avoid molecular features that could interfere with the many different classes of proteins essential to human life."

The team ran the trial at an international security conference, putting an AI system called MegaSyn to work in reverse: instead of flagging toxicity in candidate molecules so they can be avoided, which is its normal mode of operation, the model was asked to seek toxicity out.

In the experiment, toxic molecules were kept rather than weeded out. What's more, the model was also trained to combine these molecules into new candidates, which is how so many hypothetical bioweapons were generated in so short a time.

In particular, the researchers trained the AI on databases of drug-like molecules and directed it to generate compounds similar to the potent nerve agent VX.
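The authors haven't released the inverted model or its code, but as they describe it, the change amounts to flipping the sign of a toxicity term in the scoring function that guides the generative model while rewarding similarity to a target agent. Here is a minimal, hypothetical sketch of that idea in Python, with placeholder functions standing in for the real trained predictors (none of these names come from the paper):

```python
# Minimal sketch of the scoring inversion described above.
# All functions here are hypothetical placeholders, not MegaSyn's actual code.
import random

def predicted_toxicity(molecule: str) -> float:
    """Placeholder for a trained toxicity predictor. Returns a score in 0..1."""
    random.seed(molecule)  # deterministic dummy value per molecule
    return random.random()

def similarity_to_target(molecule: str, target: str = "VX") -> float:
    """Placeholder for a chemical-similarity score against a target agent. Returns 0..1."""
    random.seed(molecule + target)
    return random.random()

def score(molecule: str, invert: bool = False) -> float:
    """Drug-discovery mode penalizes toxicity; the 'inverted' mode rewards it instead."""
    sign = 1.0 if invert else -1.0
    return similarity_to_target(molecule) + sign * predicted_toxicity(molecule)

# Rank the same candidate set under both objectives.
candidates = [f"candidate_{i}" for i in range(5)]
for mode in (False, True):
    ranked = sorted(candidates, key=lambda m: score(m, invert=mode), reverse=True)
    print("inverted" if mode else "normal  ", ranked)
```

In a real pipeline the placeholders would be trained models whose scores steer a generative network toward high-scoring molecules; the sketch is only meant to show how small the conceptual change is.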

As it turned out, many of the generated compounds were predicted to be even more toxic than VX. As a result, the authors of the new study are keeping some details of their research secret, and seriously debated whether to make the results public at all.

"By inverting the use of our machine learning models, we had transformed our innocuous generative model from a helpful tool of medicine to a generator of likely deadly molecules," the researchers explain.

In an interview with The Verge, Fabio Urbina – lead author of the new paper and senior scientist at Collaborations Pharmaceuticals, where the research took place – explained that it doesn't take much to "flip the switch" from good AI to bad AI.

While none of the listed bioweapons were actually synthesized or tested in a lab, the researchers say their experiment serves as a warning of the dangers of artificial intelligence, and it's a warning humanity would do well to heed.

While some expertise is required to do what the team did here, a lot of the process is relatively straightforward and uses publicly available tools.

The researchers are now calling for a "fresh look" at how artificial intelligence systems could be put to malevolent use. They say greater awareness, stronger guidelines, and tighter regulation within the research community could help us avoid the dangers these capabilities might otherwise lead to.

"Our proof of concept thus highlights how a non-human autonomous creator of a deadly chemical weapon is entirely feasible," the researchers explain.

"Without being overly alarmist, this should serve as a wake-up call for our colleagues in the 'AI in drug discovery' community."

The research has been published in Nature Machine Intelligence.