Drug Stability Testing and Analytics

July 17, 2017
New challenges underscore the need for a more unified biopharmaceutical industry

With a standard set of taxonomies and ontologies for equipment, measurements and analysis, there is an opportunity to streamline the way companies manage test execution, samples and results so that they can predict stability issues early.

Consumers may take it for granted that a therapeutic drug product in the household medicine cabinet effectively does what its label claims. Likewise, medical professionals expect prescribed products to comply with efficacy, safety and stability requirements. The confidence of both parties is based on the expectation that biopharmaceutical companies thoroughly test therapeutic products long before they reach the market.

When it comes to stability, biopharmaceutical manufacturers need to know that as a product sits on a shelf, it won’t lose its effectiveness or degrade into something dangerous to consume. For finished-product stability testing, a product is stored under controlled conditions and then periodically removed from those conditions and examined to confirm that it still meets its specifications. Various characteristics are measured over time to create a degradation line over the shelf life of a product. In some cases, a study may span many years.
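
To make the idea of a degradation line concrete, the sketch below fits a straight line to assay results from periodic pulls and extrapolates to the point where the line crosses a lower specification limit. The time points, assay values and limit are hypothetical, chosen only to illustrate the calculation.

```python
# Minimal sketch: estimating shelf life from a fitted degradation line.
# Pull points, assay values and the specification limit are hypothetical.
import numpy as np

months = np.array([0, 3, 6, 9, 12, 18, 24])                    # stability pull points
assay = np.array([100.1, 99.4, 98.9, 98.2, 97.6, 96.5, 95.4])  # % of label claim
spec_limit = 95.0                                               # lower specification limit

# Fit a straight degradation line: assay = intercept + slope * time
slope, intercept = np.polyfit(months, assay, deg=1)

# Extrapolate to the time at which the fitted line crosses the specification limit
shelf_life = (spec_limit - intercept) / slope
print(f"Degradation rate: {slope:.3f} % of label claim per month")
print(f"Estimated shelf life: {shelf_life:.1f} months")
```

In regulatory practice (ICH Q1E, for example), shelf life is usually set where a one-sided confidence bound on the fitted line, rather than the line itself, crosses the acceptance criterion; the sketch above omits that refinement.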

For a substance in development and not yet approved for clinical use, companies cannot wait years, given competitive pressures, market demands and the clock ticking on patent expiration dates. To accelerate stability testing, scientists subject the substance to stress conditions, such as elevated humidity and temperature, that reveal stability issues sooner.
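
Where degradation is driven mainly by temperature, a common way to relate rates observed under stressed conditions to the intended storage condition is Arrhenius extrapolation. The sketch below is illustrative only; the observed rate, activation energy and temperatures are assumed values, and many biologic degradation pathways, aggregation in particular, do not follow simple Arrhenius behavior, so such extrapolations are an early screen rather than a substitute for real-time studies.

```python
# Minimal sketch: Arrhenius extrapolation of a degradation rate measured under
# accelerated (stressed) conditions to the intended storage temperature.
# The rate constant, activation energy and temperatures are assumed values.
import math

R = 8.314               # gas constant, J/(mol*K)
Ea = 83_000             # assumed activation energy, J/mol
k_accel = 0.020         # degradation rate observed at 40 degC, fraction per month
T_accel = 40 + 273.15   # accelerated storage temperature, K
T_store = 25 + 273.15   # intended storage temperature, K

# k(T_store) = k(T_accel) * exp(-(Ea/R) * (1/T_store - 1/T_accel))
k_store = k_accel * math.exp(-(Ea / R) * (1 / T_store - 1 / T_accel))
print(f"Extrapolated degradation rate at 25 degC: {k_store:.4f} per month")
```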

If they have standardized technology, practices and data to support an integrated approach to stability testing and analytics, they may also use virtual models. Virtual testing, also known as in silico testing, can often predict stability issues early in the research and discovery phases.

BIOLOGICS INTRODUCE NEW CHALLENGES
The laboratory processes involved in testing small molecules that comprise pills and tablets are well established. Biologics, however, bring unique challenges, driving a need to reexamine methods, protocols and systems for stability testing. For biologics, tablet-based delivery is not yet a commercially viable option. Instead, the most common method of delivery is via subcutaneous bolus injection. Measuring, predicting and managing stability is very different for an injectable than for a hard tablet or ingestible liquid.

To create an injectable dose that supports good patient compliance, particularly when the goal is a self-administered auto-injector, drug developers must make administration as painless as possible. They must put the biologic into a very small volume of liquid and be able to inject it through a very narrow-bore needle, all while ensuring that efficacy is retained. As a result, a very high concentration is required (typically ca. 150 mg/ml), which introduces unique stability issues related to aggregation, viscosity and thermal instability.

Protein aggregation is becoming increasingly well understood. For a variety of reasons, including environmental stress, a protein can irreversibly bind to itself in solution, producing what effectively looks like a snow globe in a vial or syringe — a phenomenon that chemists informally refer to as “crashing out.” In its aggregated form, the protein may no longer be efficacious, and it can be immunogenic.

Another issue related to storing and administering a biologic at high concentration is viscosity. A highly viscous biologic may be impossible to push through a desired narrow-bore syringe needle. To mitigate viscosity issues, it might become necessary to reduce the concentration of the drug, leading to a larger volume. This can impact the market delivery form of the drug. In a worst-case scenario, patients could find themselves staring down the needle of a large-volume syringe that one might expect to be used on a horse. Patient compliance could become a major issue.

Biologics can exhibit narrow thermal stability profiles, meaning they can easily become unstable and denature (lose their biological activity) unless they are maintained within a specific temperature range. In extreme cases, a biologic may have a short lifespan even when stored in a refrigerator, making it extremely difficult to get enough of the medicine to hospitals, clinics and the wider patient population. Biopharmaceutical companies want a highly stable biologic that can be stored for long periods under ambient conditions, so that viable quantities of the drug can be delivered and stored by medical professionals and patients.

Comprehensive testing is essential for uncovering these stability issues. Biologics present additional challenges in the physical testing phases because the material to be studied is not always readily available; it takes time to generate sufficient material for the required trials. Small-molecule drugs can be synthesized by chemists, and there is typically a cost-effective and efficient chemistry route to scale them up. Biologics, in comparison, co-opt a biological pathway: scientists must use cell lines to manufacture the biologic and culture them over time to generate enough material for testing and, ultimately, for production.

IMPLICATIONS OF OUTDATED METHODS
The biologics business is becoming the mainstay of the biopharmaceutical industry, by some estimates representing as many as half of all new drugs now coming to market. A single biologic can account for $5 to 15 billion per year in sales. Because a company can spend up to three months managing a stability issue in the formulation stages, and three months of delay corresponds to roughly a quarter of a year’s sales, mitigation efforts can represent $1 to 4 billion in lost sales in a drug’s first year on the market. That figure does not include drugs that never make it to market because their stability issues could not be resolved. A significant portion of a company’s product pipeline can be jeopardized by stability issues.

For many pharma companies today, the goal is to attain a better understanding of recurring stability problems in formulation stages to reduce their frequency. Drug development gets increasingly costly as a substance progresses downstream through various phases of production, so companies wisely strive to detect and mitigate problems upstream as early as possible. When a biologic comes into formulation, scientists should know what issues are likely to arise and should be prepared for them. Or even better, they can design the problem out of the biologic before it reaches the formulation stage.

As an industry, we are getting better at predicting the stability of substances in development. But impediments remain, often related to aging technology and outdated custom practices. Over years, companies typically add components to their technology stack from a variety of sources to manage all kinds of processes. This can lead to mismatched systems that function capably for specific tasks but can’t communicate with each other. Data becomes isolated and paper processes fill the gaps, further segregating information.

These problems escalate in the absence of standards. Unfortunately, non-standard custom practices may persist because of a type of corporate folklore: “We’ve always done it this way. We don’t want to change methods and risk falling out of compliance.”

There’s a lot happening today to challenge that attitude. As always, there is pressure on costs and on the top line. Externalization also drives the need for standardization. Beyond that, batch sizes are shrinking: it is becoming widely accepted that the path forward is personalized medicine, with smaller quantities going to more targeted populations. As batch sizes decrease, companies require a different economy of scale, one that encompasses a larger number of drugs produced in smaller batches. This is attainable only through standardization and integration.

GROWING NEED FOR STANDARDIZATION
In a large company, numerous stability studies with hundreds or thousands of samples may be under analysis at any one time. It is almost impossible to manage that effectively with manual paper-based methods, customized processes, bespoke terminologies and data spread across disparate information systems. An integrated and scientifically aware information platform can manage multiple stability studies automatically. It also creates a foundation for analytical models that help scientists make accurate predictions.

To effectively support predictive analytics, data must be complete, consistent, correct and contextualized by metadata. And it must be the original data as it was captured. The basis for reliable data is standardized processes supported by standardized infrastructure. When standardized processes are in place, scientists can make better assumptions in analytical models, leading to more successful outcomes.
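
As one minimal sketch of what “complete, consistent, correct and contextualized” can mean in practice, the record below keeps the original measured value together with the metadata needed to interpret it later. The field names and controlled-vocabulary values are hypothetical and are not drawn from any published standard.

```python
# Minimal sketch of a stability result that preserves the original measured
# value alongside contextual metadata. Field names and vocabulary are
# hypothetical, not taken from any published standard.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass(frozen=True)
class StabilityResult:
    study_id: str            # stability study this pull belongs to
    sample_id: str           # unique sample identifier
    timepoint_months: float  # scheduled pull point
    storage_condition: str   # controlled vocabulary, e.g. "25C/60%RH"
    attribute: str           # controlled vocabulary, e.g. "assay", "aggregates"
    value: float             # original measured value, never overwritten
    unit: str                # standardized unit, e.g. "%"
    instrument_id: str       # instrument that produced the measurement
    method_id: str           # analytical method / protocol version
    analyst: str             # who performed the test
    recorded_at: datetime = field(default_factory=datetime.utcnow)

result = StabilityResult(
    study_id="STB-2017-001", sample_id="S-0042", timepoint_months=6.0,
    storage_condition="25C/60%RH", attribute="assay", value=98.9, unit="%",
    instrument_id="HPLC-07", method_id="MTH-ASSAY-v3", analyst="jdoe",
)
```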

Standardization should encompass technology, methods, protocols, data formats and vocabulary. It should span all departments within a company and extend to partners such as contract research organizations (CROs). All of this must be integrated into a holistic information framework that supports digital continuity.

Digital continuity ensures that information is available up and down the discovery and development value chain so everyone who is authorized has access to data. Organizations must establish proper corporate standards around data management, and they must enforce the standards internally and with partners, so that whatever the source, all information ultimately becomes accessible within the corporate knowledge base.

Ideally, standardization should extend throughout the entire biopharmaceutical industry. There is a need for a common ontology, vocabulary and standards to address the shared challenges that all companies face.

BUILDING COLLECTIVE KNOWLEDGE
Standardized testing can be performed at a much lower cost than testing based on custom or proprietary methods. There is an opportunity to standardize the taxonomies and ontologies that define how we interface with equipment, how we collect measurements from it and the unit operations we use to build analytical test methods. Unit operations such as weighing and measuring are pre-competitive and should be part of universal protocols. If the biopharmaceutical industry can standardize how stability-testing methods are defined and executed and how equipment is connected, laboratories will become more efficient and everyone will benefit. Externalization will be simplified as testing is commoditized.
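
As a simple illustration of what shared taxonomies make possible, the sketch below maps site- and vendor-specific terms onto a common vocabulary so that results recorded in different systems can be pooled. Every term and mapping shown is invented for the example.

```python
# Minimal sketch: normalizing site- or vendor-specific terms to a shared
# vocabulary so results from different systems can be pooled.
# All terms and mappings are invented for illustration.
COMMON_VOCAB = {
    # local term                 -> shared term
    "temp_40C_75RH":              "condition:accelerated_40C/75%RH",
    "40C / 75% RH (chamber 3)":   "condition:accelerated_40C/75%RH",
    "SEC aggregates":             "attribute:aggregates_by_SEC",
    "HMW species (%)":            "attribute:aggregates_by_SEC",
}

def normalize(term: str) -> str:
    """Return the shared-vocabulary term, or flag the term for curation."""
    try:
        return COMMON_VOCAB[term]
    except KeyError:
        raise ValueError(f"Unmapped term {term!r}: add it to the shared vocabulary")

print(normalize("HMW species (%)"))  # -> "attribute:aggregates_by_SEC"
```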

Guidelines published by the U.S. Food and Drug Administration (FDA) and other organizations describe in detail how to manage stability studies and implement standards, and they are neither vendor-specific nor product-specific. The U.S. Pharmacopeia and the National Formulary (USP–NF) define public pharmacopeial standards for chemical and biological drug substances. The International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) publishes guidelines for stability testing of new drug substances and products, such as ICH Q1A(R2). These guidelines are straightforward to understand and follow, but they are noncompulsory and have not yet been fully adopted. Companies that eschew such guidance and cling to outmoded custom processes may be slowing the overall progress of the biopharma sector.

Stability testing is a common challenge for all biopharmaceutical manufacturers. Because of this, many pharma companies see the value of working together in a pre-competitive way. They realize they each have a piece of the puzzle, but they don’t have enough of it to solve the problems. Secure third-party platforms make it possible to collect and centralize data from participating organizations with a view to creating analytical models that can help predict drug stability issues early in development. Data can be protected and anonymized so companies only have access to data they provide while benefitting from the collective knowledge of all contributors.

By cooperating, companies contribute to a shared knowledge base and help to define the standards, common vocabulary and common data formats that improve processes. This helps the entire industry and the patients it serves. Companies can work together while each maintains its competitive standing through its expertise in specific disease areas. Initiatives such as the Pistoia Alliance and the Allotrope Foundation are successful precedents for this type of cooperation among life science organizations. In these projects, experts from life science companies and other groups come together to share pre-competitive strategies for establishing a common data format and ontology for the industry.

THE VALUE OF VIRTUAL + REAL EXPERIMENTATION
Organizations are starting to realize that if they depend exclusively on physical testing, they are likely taking longer than necessary. While stability testing must be executed in a wet lab, companies can do much to reduce the risk of downstream failures by performing early-stage virtual testing to drive development of better formulations that will pass wet lab stability tests.

The trend among leading companies is to leverage their wealth of experimental data to guide early-stage discovery using statistical models, predictive analytics and virtual testing. Predictive science enables researchers to accelerate the innovation cycle by learning from past trends and patterns in stability data, supporting an approach that mitigates potential stability issues at the design stage of discovery.
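
In its simplest form, learning from past stability data might look like the sketch below: a basic classifier trained on historical formulation descriptors flags new candidates that are likely to aggregate under stress. The features, data and model choice are purely illustrative; a real program would use far richer descriptors, much more data and validated models.

```python
# Minimal sketch: using historical stability outcomes to flag candidate
# formulations likely to aggregate under stress. Features, data and the
# choice of model are purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical history: [concentration mg/mL, pH, ionic strength mM]
X_history = np.array([
    [150, 5.5, 10], [120, 6.0, 50], [180, 5.0, 10], [100, 6.5, 100],
    [160, 5.2, 20], [ 90, 6.8, 150], [170, 5.1, 15], [110, 6.2, 80],
])
# 1 = aggregation observed in accelerated stability testing, 0 = passed
y_history = np.array([1, 0, 1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X_history, y_history)

# Score a new candidate formulation before committing wet-lab capacity
candidate = np.array([[155, 5.4, 25]])
risk = model.predict_proba(candidate)[0, 1]
print(f"Predicted aggregation risk: {risk:.0%}")
```

The value is in the workflow rather than the particular model: score candidates early, then commit wet-lab capacity to confirming the most promising ones.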

The ability to mine data for trends and patterns, and to derive knowledge from them, depends on data access and data integrity. Without good data management and integrated systems, it is difficult to perform these kinds of analyses. The goal should be to reduce the number of physical wet lab tests required to confirm drug stability. For this, scientists need to know which experiments to perform at any given time, and companies can leverage existing knowledge to create models that help predict which tests are likely to be productive.

The paradigm of combining virtual in silico studies with real-world lab experiments (virtual + real) is gaining wider acceptance in the laboratories of pharma companies. The approach is made possible by having systems, protocols and data standards in place that support both virtual and wet lab testing. Leaders at biopharma organizations should establish a company culture that values both approaches, creating holistic workflows that integrate both types of experimentation.

WORKING TOWARD A UNIFIED INDUSTRY
To optimize outcomes for the patient population, we need a unified biopharmaceutical industry. The foundation for that is unified organizations with unified laboratories. Companies need integrated information systems that support end-to-end knowledge through standardized data and processes. With a standard set of taxonomies and ontologies for equipment, measurements and analysis, there is an opportunity to streamline the way companies manage test execution, samples and results so that they can predict stability issues early.

Companies that implement a standards-based, holistic approach to enterprise information management can improve operational excellence and contribute to industry-wide collective knowledge that advances scientific understanding. In a unified industry, everyone should “speak the same language” across the discovery and development pipeline — including partners, research organizations and competitors. This will accelerate the process of developing therapeutics that are stable, safe and effective, speeding science to compliance and shortening time to market for patients.

About the Authors

Adrian Stevens and Gene Tetreault | Dassault Systèmes BIOVIA