In the search for promising new drug treatments, the pathway from laboratory to pharmacy is typically expensive, time-consuming, and uncertain.
It’s estimated that bringing each approved drug to market can take up to 15 years and cost more than $2 billion.
However, artificial intelligence (AI), with its unparalleled ability to analyse vast datasets, holds out the promise of a speedier, less arduous drug development process.
Research published in Nature Medicine highlights the multiple benefits that flow from bringing AI to different tasks in drug development.
These tasks include identifying disease biomarkers and potential drug targets, simulating drug–target interactions, predicting the safety and efficacy of drug candidates, and managing clinical trials.
But amid the optimism, there are growing calls for caution.
Current uses
In Australia, biotech giant CSL is using AI to accelerate drug development, aiming to produce more personalised and effective treatments for serious diseases.
Meanwhile, CSIRO’s new Virga supercomputer is also aiming to expedite early drug discovery.
At Moderna, AI is integrated across the drug discovery and development pipeline, leveraging a strong digital foundation built on cloud infrastructure, data integration, automation, and advanced analytics, says Brice Challamel, Moderna’s Head of AI and Product Innovation.
At the earliest stages, such as target identification and mRNA design, AI-driven machine learning models can help optimise mRNA sequence constructs for efficiency, stability, and protein expression.
“This is crucial because there are billions of possible mRNA designs for any given protein, and AI helps navigate this complexity beyond traditional science alone,” Challamel explains via email.
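To make that combinatorial scale concrete, here is a toy sketch in Python. It counts the synonymous mRNA encodings of a short protein and scores random candidates with a crude GC-content heuristic as a stand-in for a learned model; the codon-table subset, the scoring function, and the search strategy are all illustrative assumptions, not Moderna’s actual methods.

```python
# Illustrative sketch only: why mRNA design is a search problem.
# GC content is a crude stand-in for a learned stability/expression score.
import random

# Synonymous codons for a few amino acids (standard genetic code, subset).
CODONS = {
    "M": ["ATG"],
    "F": ["TTT", "TTC"],
    "L": ["TTA", "TTG", "CTT", "CTC", "CTA", "CTG"],
    "S": ["TCT", "TCC", "TCA", "TCG", "AGT", "AGC"],
    "K": ["AAA", "AAG"],
}

def design_space_size(protein: str) -> int:
    """Number of distinct mRNA sequences encoding the protein."""
    size = 1
    for aa in protein:
        size *= len(CODONS[aa])
    return size

def gc_content(seq: str) -> float:
    """Toy proxy score; a real pipeline would use a trained model."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def random_design(protein: str) -> str:
    return "".join(random.choice(CODONS[aa]) for aa in protein)

if __name__ == "__main__":
    protein = "MFLSK" * 6  # 30 residues; real proteins are far longer
    print("possible designs:", design_space_size(protein))  # ~9 trillion
    # A model-guided search would replace this random sampling.
    best = max((random_design(protein) for _ in range(1000)), key=gc_content)
    print("best GC content found:", round(gc_content(best), 3))
```

Even this 30-residue toy protein has trillions of valid encodings, which is why exhaustive search is infeasible and model-guided scoring becomes attractive.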
In later development stages, AI assists with data analysis, helps brainstorm next steps, and improves operational efficiency.
It also supports manufacturing and supply chain processes by providing real-time insights and supporting automation design and documentation.
AI can be used effectively even in complex diseases such as cancer, Challamel says, where machine learning models help analyse vast biological data and generate novel hypotheses, improving both the efficiency and stability of precision medicines.
Challamel points to Moderna’s development program for individualized neoantigen therapies (INT) for cancers.
AI algorithms rapidly analyse sequencing data from each patient’s tumour, which is unique to the individual, like a ‘fingerprint’, along with blood samples, to identify mutations and predict neoantigens (mutated proteins that are likely to trigger an immune response).
“This step, which can be time-consuming and complex, is streamlined using an integrated, AI-driven process with expert human oversight,” he says.
“Based on this analysis, our scientists work with the AI to select up to 34 neoantigens and design an mRNA sequence that gives cells instructions to produce these cancer-specific proteins.
“The goal is to train the immune system to recognise these tumour ‘fingerprint’ proteins and mount a targeted immune response against the cancer.”
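A highly simplified sketch of the final selection step Challamel describes: rank predicted neoantigens by a model score and keep at most 34 for the mRNA design. The data structure, field names, and scores below are hypothetical; the real pipeline’s models and features are not public.

```python
# Hypothetical sketch of the selection step: rank candidate neoantigens by a
# predicted immunogenicity score and keep up to 34 for the mRNA design.
# Fields and scores are illustrative, not Moderna's actual pipeline.
from dataclasses import dataclass

MAX_NEOANTIGENS = 34  # per the INT program described above

@dataclass
class Candidate:
    mutation: str          # e.g. "KRAS G12D"
    peptide: str           # mutated peptide sequence
    immunogenicity: float  # model-predicted chance of an immune response

def select_neoantigens(candidates: list[Candidate]) -> list[Candidate]:
    """Pick the top-scoring candidates, up to the design limit."""
    ranked = sorted(candidates, key=lambda c: c.immunogenicity, reverse=True)
    return ranked[:MAX_NEOANTIGENS]

# Usage with invented toy data:
pool = [
    Candidate("KRAS G12D", "LVVVGADGV", 0.91),
    Candidate("TP53 R175H", "HMTEVVRHC", 0.42),
    Candidate("EGFR L858R", "KITDFGRAK", 0.77),
]
for c in select_neoantigens(pool):
    print(c.mutation, c.immunogenicity)
```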
Caution advised
Yet Challamel argues that the most critical piece in using any form of AI for drug development is ensuring robust human oversight and transparency at every step.
“We operate in a highly regulated industry where decisions impact patient safety, so it’s essential that AI tools are never used in isolation,” he explains.
“Every output is reviewed through structured human ‘expert-in-the-loop’ processes, or employees who are qualified against that particular workflow.”
Meanwhile, cross-functional governance ensures decisions are traceable, explainable, and aligned with regulatory expectations.
“Transparency is non-negotiable as regulators need to understand how AI-derived insights are generated, including the data inputs, assumptions, and review steps behind them,” he adds.
“We document this thoroughly and ensure all decisions involving AI are supported by clear, auditable evidence.”
Others warn that AI’s potential to revolutionise medical discovery may only be fully realised if some important guardrails are put in place.
A study published earlier this year in Fundamental Research highlighted the importance of data quality, algorithm training, and ethical considerations, particularly in the handling of patient data during clinical trials.
Data quality demands
The use of AI carries many risks and challenges.
In high-stakes fields like drug discovery, diverse training data is essential to avoid errors and biases.
A 2023 review published in Pharmaceuticals found that the availability of suitable and sufficient data is essential for the accuracy and reliability of results.
“Scientifically, data quality and integration are foundational because AI is only as good as the data it’s trained on,” says Challamel.
“We’ve invested heavily in digital infrastructure to ensure clean, consistent, and accessible datasets across functions.”
High failure rates
Although AI is helping to fast-track drug development, it has thus far failed to move the needle on the 90 per cent failure rate of drug candidates in clinical trials.
Tony Kenna, President of the Australian Society for Medical Research (ASMR), says he is not yet aware of any real benefits from applying AI tools to clinical trials data.
“There are ongoing studies evaluating whether AI models can create digital twins—virtual patient models based on historical public data—to predict disease progression and treatment effects,” he says.
As highlighted in Communications Medicine, this may allow for smaller, more efficient trials with fewer patients in control groups, improving statistical power and reducing trial duration.
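A minimal sketch of the digital-twin idea, under strong simplifying assumptions: a prognostic model trained on historical control data predicts each enrolled patient’s expected outcome, and the trial analysis adjusts for that prediction, which can sharpen the treatment estimate with a smaller control arm. All data, features, and model choices below are invented for illustration.

```python
# Illustrative sketch of the "digital twin" idea: a prognostic model trained
# on historical control-arm data predicts each new patient's expected outcome,
# and that prediction is used as a covariate in the treatment-effect estimate.
# All data and model choices here are invented.
import numpy as np

rng = np.random.default_rng(0)

# Historical control data: baseline features -> observed disease progression.
X_hist = rng.normal(size=(500, 3))  # e.g. age, biomarker level, disease stage
y_hist = X_hist @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=1.0, size=500)

# Fit a simple prognostic model (least squares) on historical controls.
beta, *_ = np.linalg.lstsq(X_hist, y_hist, rcond=None)

# New trial: small control arm, larger treated arm, true effect = -1.5.
X_trial = rng.normal(size=(120, 3))
treated = rng.random(120) < 0.75  # only 25% randomised to control
y_trial = (X_trial @ np.array([2.0, -1.0, 0.5])
           - 1.5 * treated + rng.normal(scale=1.0, size=120))

twin = X_trial @ beta  # each patient's "digital twin" prediction
# Covariate-adjusted estimate: regress outcome on treatment + twin prediction.
Z = np.column_stack([np.ones(120), treated.astype(float), twin])
coef, *_ = np.linalg.lstsq(Z, y_trial, rcond=None)
print("estimated treatment effect:", round(coef[1], 2))  # close to -1.5
```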
Kenna also points to the work of QuantHealth, which is using AI trained on data from 350 million patients and 700,000 therapeutics to simulate trials.
“I’m not aware of any tangible outcomes from this yet though,” he adds.
Shane Huntington OAM, CEO of ASMR, says the drug discovery pipeline often results in large numbers of pharmaceuticals which have efficacy below a critical threshold.
“For many people, these drugs work beautifully,” he says.
“The problem is to determine who they work for prior to use.”
Genetic tests are currently available for a relatively small number of drugs – and though costly, they can provide crucial guidance on their use.
“Given the enormous amount of money already invested in drugs that are ‘sitting on the shelf’ – genetic assessment, perhaps supported by AI, should be a focus,” Huntington adds.
Blind spots
AI tools do tend to have blind spots, which often spring from limitations in data quality, data availability, and the complexity of biological systems, says Kenna.
“Not all research papers are created equal,” he explains.
“A trained scientist can assess critical elements of quality in a study such as sample quality, cohort selection, and appropriateness of statistical methods, to determine the robustness of the published findings.
“AI tools are poor at discriminating good from bad science so the models can include both robust and poor-quality data which will likely impact the strength of the application of the AI tools.”
Kenna adds that negative data from failed experiments, which is critical for training robust models, is underreported.
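One simple way practitioners try to compensate for uneven study quality is to weight evidence by an assessed quality score, so weaker studies contribute less. The sketch below illustrates the idea with invented numbers; the quality scores themselves would still require the kind of expert review Kenna describes.

```python
# Hypothetical illustration: weight reported values by an assessed study
# quality score so weaker evidence contributes less to a pooled estimate.
# Scores and data are invented; quality assessment needs expert review.
import numpy as np

# (value reported by a study, assessed quality in [0, 1])
studies = [(2.1, 0.9), (2.3, 0.8), (5.0, 0.1), (1.9, 0.95)]  # 5.0 is dubious

values = np.array([v for v, _ in studies])
weights = np.array([q for _, q in studies])

naive = values.mean()                           # treats all studies equally
weighted = np.average(values, weights=weights)  # discounts low-quality work
print(f"naive estimate: {naive:.2f}, quality-weighted: {weighted:.2f}")
```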
Misuse
Awareness is growing that AI tools used in drug design can become dangerous in the absence of ethical and legal frameworks.
In 2022, scientists Sean Ekins and Fabio Urbina wrote about their ‘Dr Evil’ project in Nature Machine Intelligence.
They demonstrated how an algorithm designed to identify therapeutic compounds could be turned on its head to design candidate chemical weapons.
According to Huntington, the misuse of AI in other, less regulated industries is already causing major issues.
He refers to the major lawsuit recently lodged by several big motion picture companies over copyright infringement.
“There will be similar issues around IP if AI systems are not carefully restricted in terms of what data they have access to – in some regards stripping them of their greatest advantage,” he says.
Overconfidence
Machine learning tools often fail to quantify uncertainty – leading to bold but misleading predictions.
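One generic mitigation, not attributed to any group quoted here, is an ensemble: train several models on resampled data and treat their disagreement as an uncertainty estimate, flagging predictions where members diverge. The toy regression below shows the pattern; the data and model are invented.

```python
# Sketch of a generic mitigation: an ensemble whose disagreement serves as an
# uncertainty estimate, so overconfident single-model predictions get flagged.
# Toy regression data; not tied to any specific drug-discovery model.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=200)

def fit_member(X, y, seed, degree=5):
    """Fit one ensemble member on a bootstrap resample of the data."""
    rs = np.random.default_rng(seed)
    idx = rs.integers(0, len(y), size=len(y))
    return np.polyfit(X[idx, 0], y[idx], degree)

ensemble = [fit_member(X, y, seed) for seed in range(20)]

def predict_with_uncertainty(x: float):
    preds = np.array([np.polyval(c, x) for c in ensemble])
    return preds.mean(), preds.std()  # spread = uncertainty proxy

for x in (0.0, 2.5, 6.0):  # 6.0 is far outside the training range
    mean, std = predict_with_uncertainty(x)
    print(f"x={x:4.1f}  prediction={mean:8.2f}  uncertainty={std:8.2f}")
```

Inside the training range the members agree; at x = 6.0 they diverge wildly, and that spread is exactly the warning signal a single overconfident model never gives.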
An editorial published in Nature in 2023 stated that systems based on generative AI, which use patterns learnt from training data to generate new data with similar characteristics, could be problematic.
It noted how the chatbot ChatGPT sometimes fabricated answers.
“In drug discovery, the equivalent problem leads it to suggest substances that are impossible to make,” it states.
Kenna says that risks from hallucinations and AI errors can be mitigated by keeping ‘humans in the loop’ and ensuring expert review before decision making.
“AI tools should be helping experts, not replacing them,” he says.
Security and privacy
Challamel also notes that security of data is key.
“We’ve complemented public AI tools with secure, internal enterprise solutions which keep sensitive data isolated and protected,” he says.
“In short, AI can be a powerful accelerator, but only when paired with rigorous human oversight, transparency, and compliance.”
