AI moves from possibility to practice in the drug discovery value chain

As artificial intelligence tools proliferate, the real challenge is not generating ideas but turning them into chemically feasible and manufacturable paths.
Feb. 17, 2026
9 min read

Long before the current artificial intelligence boom, AI-enabled technologies were already embedded in modern drug discovery and development. Computational modeling, automation, and in silico tools have long supported research and development workflows. Despite these advances, approximately 90% of clinical drug candidates still fail.

While AI is now deeply entrenched across drug discovery and development, its most meaningful impact is only beginning to emerge. The industry’s challenge is no longer whether these technologies can generate insights, but whether those insights can be translated into earlier, better decisions across discovery, formulation, and manufacturing. As pharmaceutical companies push AI deeper into the value chain, the focus is shifting from experimentation to feasibility, integration, and execution.

Major players across the life sciences are actively working to integrate AI in ways that improve not just individual steps, but the end-to-end discovery and development process.

Early discovery, chemical feasibility

When it comes to generating ideas, one of the biggest challenges is determining which ones are worth pursuing.

AI models can now propose vast numbers of candidate molecules, but translating those outputs into chemically feasible and manufacturable paths remains a persistent bottleneck. A growing focus in the space is on reshaping how molecular information is represented so computational insights align more closely with the realities of chemistry.

According to Michael Foley, co-founder and CEO of Excelsior Sciences, the challenge is not simply the volume of data being generated, but the difficulty of turning that information into actionable decisions. Excelsior’s work reflects a broader industry effort to make chemistry more machine-interpretable, rather than forcing AI systems to adapt to legacy workflows.

In the area of small molecule development, Foley notes that AI models have dramatically expanded the universe of potential starting points by proposing billions of candidate molecules for a single drug discovery program. However, increased possibility does not automatically translate into better outcomes. Even when AI generates promising structures, human judgment remains essential.

“Let’s say we’ve got 100 billion molecules — somebody’s got to filter through all of them,” Foley says, adding that the harder questions are whether those molecules can actually be made and whether they are even worth making.

That bottleneck, according to Foley, reflects a deeper mismatch between how modern AI tools operate and how chemistry is still practiced. While algorithms can rapidly generate ideas, downstream decision-making remains largely manual.
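
To make that triage concrete, the sketch below shows the kind of cheap, rule-based first pass that often precedes any human review. It assumes the generated candidates arrive as SMILES strings, that the open-source RDKit toolkit is available, and that illustrative Lipinski-style cutoffs stand in for whatever criteria a real program would use.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_basic_filters(smiles: str) -> bool:
    """Cheap, rule-based triage for an AI-generated candidate structure."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:                      # unparsable proposal: discard immediately
        return False
    return (
        Descriptors.MolWt(mol) <= 500    # illustrative Lipinski-style cutoffs
        and Descriptors.MolLogP(mol) <= 5
        and Lipinski.NumHDonors(mol) <= 5
        and Lipinski.NumHAcceptors(mol) <= 10
    )

# Hypothetical generator output; real candidate lists would be vastly larger.
candidates = [
    "CC(=O)Oc1ccccc1C(=O)O",             # aspirin-like structure, passes
    "not_a_molecule",                    # garbage string, rejected
    "CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC",  # long alkane, fails the logP cutoff
]
shortlist = [s for s in candidates if passes_basic_filters(s)]
print(shortlist)
```

A filter like this can discard unparsable or clearly drug-unlike structures in bulk, but the harder judgments Foley describes, namely whether a molecule can actually be synthesized and whether it is worth making, still fall largely to chemists.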

Many AI-driven discovery efforts will ultimately funnel candidates back into workflows that have existed for decades, Foley points out. Medicinal chemists still rely heavily on experience and intuition to determine which compounds should be synthesized. From there, processes often remain unchanged: compounds are sent to external vendors for synthesis, returned weeks or months later for testing, replated, analyzed, and reviewed.

Foley contends that AI tools also struggle to answer some of the most critical questions early enough, including whether a compound is likely to exhibit acceptable absorption, distribution, metabolism, and excretion (ADME) as well as toxicology properties. Consequently, significant time and resources are often spent advancing candidates that fail later in development. Compounding these issues is the inherently artisanal nature of chemical synthesis itself. The diversity of starting materials, reactions, and conditions makes automation difficult, limiting how efficiently AI-generated insights can be executed in the lab.

“I don’t think we can make the molecules and test them fast enough to really allow the algorithms to learn from,” says Foley. “They’re very data hungry and they’re not good at waiting.”

To address this challenge, Excelsior Sciences has developed an approach to chemistry designed to fit into automated processes. Central to this are what the company calls its smart bloccs, which are “automated synthesis-friendly chemical building blocks that enable iterative carbon–carbon bond formation,” according to the company. These smart bloccs function as a modular chemical “language,” allowing AI systems to guide discovery within closed-loop learning environments.

By tokenizing chemistry at the building-block level, Excelsior says it enables machine learning models to identify patterns tied directly to drug-relevant properties such as ADME behavior and toxicity, areas where traditional AI-driven discovery tools have struggled. Foley compares this approach to AlphaFold, the AI system developed by Google DeepMind that predicts a protein’s three-dimensional structure from its amino acid sequence, noting that its success came from representing biological complexity in a structured, machine-readable way.
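
As a rough illustration of that general idea, and not Excelsior’s actual method or data, the sketch below treats each molecule as a short sequence of hypothetical building-block tokens and fits a simple scikit-learn classifier to a binary ADME label, so that new assemblies of blocks can be scored before anyone attempts to make them.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical records: each molecule described as an ordered list of
# building-block identifiers, labeled with the outcome of a past ADME assay.
molecules = [
    "BB12 BB07 BB33",
    "BB12 BB19 BB33",
    "BB45 BB07 BB02",
    "BB45 BB19 BB02",
]
adme_acceptable = [1, 1, 0, 0]

# Bag-of-building-blocks features feeding a simple classifier.
model = make_pipeline(
    CountVectorizer(lowercase=False, token_pattern=r"BB\d+"),
    LogisticRegression(),
)
model.fit(molecules, adme_acceptable)

# Score a newly proposed assembly of blocks before anyone tries to synthesize it.
print(model.predict_proba(["BB12 BB07 BB02"])[0][1])
```

The point of the toy is the representation: once chemistry is expressed in a consistent, machine-readable vocabulary, standard pattern-recognition machinery can be pointed at properties such as ADME behavior and toxicity.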

AI as a scientific collaborator

As AI-driven innovation accelerates, developing the infrastructure and organizational frameworks to support it has become increasingly important throughout the life sciences.

At Eli Lilly, the challenge is not only managing the scale of chemical space but enabling scientists to explore it in ways that meaningfully inform discovery decisions. Thomas Fuchs, chief AI officer at Lilly, notes that the number of potential drug-like combinations in small molecule chemistry vastly exceeds what humans can reasonably evaluate on their own.

“We could never actually sift through that, even if all humans did that during their lifetime,” Fuchs says. “But with AI, we can do that at a much larger scale and explore spaces that you couldn’t explore before.”

To operationalize that vision, Lilly in October 2025 announced the deployment of a new AI Factory powered by an AI supercomputer, leveraging next-generation NVIDIA architectures. The system is designed to support large-scale model training and accelerate discovery efforts across genomics, molecular design, and personalized medicine. The platform features NVIDIA NeMo software, which allows Lilly to create AI agents capable of reasoning, planning, and acting across both digital and physical laboratory environments, supporting molecule generation, in silico evaluation, and in vitro testing.

“The heavy lifting is done by very dedicated, very large machine learning models that are based on decades of data, not only the successful ones, but also all the failed experiments,” Fuchs says. “That’s our secret sauce. Lilly is nearly a 150-year-old company, so we have millions of compounds where we know they do not work.”

Building computational power alone, however, is not enough. Lilly has also emphasized close integration between AI developers and domain experts.

The company recently expanded on its infrastructure investments by announcing the development of a co-innovation lab that will co-locate Lilly experts in biology, chemistry, and medicine with AI engineers from NVIDIA. Working together, the teams aim to generate large-scale experimental data and build models that continuously improve as new results are produced.

The collaboration is focused on creating a continuous learning system that tightly connects agentic wet labs with computational dry labs, enabling 24/7 AI-assisted experimentation. In this “scientist-in-the-loop” framework, experiments, data generation, and AI model development inform one another continuously rather than occurring in discrete stages — supporting Lilly’s view of AI as a scientific collaborator, rather than an autonomous discovery engine.
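
A heavily simplified sketch of such a loop appears below. Every function, data structure, and scoring rule in it is a hypothetical placeholder rather than anything drawn from Lilly’s or NVIDIA’s platforms; it is only meant to show how proposal, human review, automated testing, and model updates can feed one another in repeated cycles.

```python
import random

# Hypothetical stand-ins for a design-make-test-learn loop; none of these
# functions reflect Lilly's or NVIDIA's actual systems.
def propose_candidates(scores, n):
    """Dry lab: pick the n candidates the current model ranks highest."""
    return sorted(scores, key=scores.get, reverse=True)[:n]

def scientist_review(proposals):
    """Human gate: chemists drop anything flagged as impractical to make."""
    return [c for c in proposals if not c.startswith("impractical_")]

def run_assays(approved):
    """Wet lab: automated synthesis and testing, reduced here to random noise."""
    return {c: random.random() for c in approved}

# Start with a flat prior over a mix of ordinary and hard-to-make candidates.
scores = {f"cand_{i}": 0.5 for i in range(20)}
scores.update({f"impractical_{i}": 0.9 for i in range(5)})

for _ in range(3):                                  # repeated design-make-test-learn rounds
    batch = propose_candidates(scores, n=8)
    approved = scientist_review(batch)
    for rejected in set(batch) - set(approved):     # failed ideas also teach the model
        scores[rejected] = 0.0
    scores.update(run_assays(approved))             # wet-lab results flow back in

print(sorted(scores, key=scores.get, reverse=True)[:3])   # current top-ranked candidates
```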

“For example, we had quite some success with a small molecule that we designed from scratch internally with one of our foundation models. And that was a true collaboration between the AI model and the chemists. And that model came up with a new motif,” Fuchs notes. “Part of the small molecule was a new motif the chemists hadn’t considered; a new fragment. It’s bits of fragments we know are synthesizable, and that fragment was — actually — never used and it significantly increased the properties of that molecule. And now, what the chemists are doing is they’re taking that motif and using it for other molecules and targets to see if it helps in other areas.”

This approach reflects a broader shift across large pharmaceutical organizations. Rather than expecting AI to deliver finished answers, companies are increasingly using it to widen the search space, surface unconventional options, and help scientists make better-informed decisions earlier — while keeping human expertise firmly in control of what moves forward.

How AI can influence drug formulation

As drug candidates progress further along the value chain, the nature of the challenges changes. Questions of novelty and target engagement give way to considerations around manufacturability, stability, and formulation.

In formulation and process development, AI’s value is often less about prediction in isolation and more about narrowing options early, helping teams avoid development paths that are unlikely to scale or perform as intended.

“Formulation has typically required lengthy experiments because early information can be sparse, inconsistent, or siloed,” says Anil Kane, global head of technical and scientific affairs, Pharma Services at Thermo Fisher Scientific. “AI is changing the ability to bring those data points together so that scientists make better decisions, from the very first one.”

Predictive modeling is increasingly being used to anticipate excipient compatibility, degradation risk, and downstream performance, notes Kane. By forecasting how different materials may affect solubility, stability, and bioavailability, these tools can help accelerate development timelines and reduce the likelihood of costly reformulation later in the process.
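
As a generic illustration of that pattern, and not the Thermo Fisher platform discussed below, the following sketch trains a small scikit-learn model on hypothetical historical formulation records and uses it to score a proposed formulation before any lab work begins.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical historical records: formulation attributes and whether each
# batch passed a six-month stability study. Real datasets would be far richer.
history = pd.DataFrame({
    "api_logp":      [1.2, 3.4, 0.8, 2.9, 4.1, 1.5],
    "excipient":     ["lactose", "mcc", "lactose", "hpmc", "mcc", "hpmc"],
    "drug_load_pct": [10, 25, 5, 40, 30, 15],
    "stable_6mo":    [1, 1, 1, 0, 0, 1],
})

X = pd.get_dummies(history.drop(columns="stable_6mo"))
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, history["stable_6mo"])

# Score a proposed formulation before committing lab time to it.
proposal = pd.DataFrame({"api_logp": [3.0], "excipient": ["lactose"], "drug_load_pct": [35]})
proposal = pd.get_dummies(proposal).reindex(columns=X.columns, fill_value=0).astype(float)
print(model.predict_proba(proposal)[0][1])   # estimated probability of passing stability
```

The value in practice comes less from any single prediction than from ranking options early, so that laboratory effort concentrates on the formulations most likely to hold up downstream.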

With this perspective, Thermo Fisher has focused on applying predictive tools earlier in oral solid dose (OSD) formulation workflows. The company’s OSD Predict platform integrates historical formulation data with machine learning models to help scientists evaluate formulation strategies before committing to extensive laboratory testing.

This approach reflects a broader shift toward using AI to inform formulation decisions upstream, rather than relying solely on downstream trial-and-error experimentation.

“When models are trained with high-quality data and applied thoughtfully, they help teams compare options early and focus effort where it matters most,” Kane says. “That ultimately accelerates development timelines and helps ensure safe, effective therapies reach patients faster.”

About the Author

Andy Lundin

Senior Editor

Andy Lundin has more than 10 years of experience in business-to-business publishing, producing digital content for audiences in the medical and automotive industries, among others. He currently works as Senior Editor for Pharma Manufacturing and is responsible for feature writing and production of the podcast.

His prior publications include MEDQOR, a real-time healthcare business intelligence platform, and Bobit Business Media. Andy graduated from California State University-Fullerton in 2014 with a B.A. in journalism. He lives in Long Beach, California.
