When the U.S. FDA rewrote its current good manufacturing practices (cGMPs) for drug products back in 1976, it added the requirement that manufacturers review the quality standards for each drug product every year, and that they write up the results in an Annual Product Review (APR). After some manufacturers commented on the proposed regulation, objecting to FDA’s initial report requirements, the Agency revised the proposal to allow each manufacturer to establish its own procedures for evaluating product quality standards, basing the final report on records required by the cGMPs. The final requirement became law in 1979, as 21 CFR 211.180(e).
Conducted for each commercial product, the APR provides the basis for deciding what steps are needed to improve quality. The APR must include all batches of the product, whether accepted or rejected, as well as any stability testing performed during the 12-month period. The APR must cover a one-year period, but it does not have to coincide with the calendar year. The APR report addresses the assessment of the data, documents and electronic records reviewed.
The data generated from each batch or product are trended using appropriate statistical techniques such as time-series plots, control charts and process capability studies. Control limits are established through trending, and specifications for both starting materials and finished products are revisited. If any process is found to be out of control, or to have low capability indices, improvement plans and corrective and/or preventive actions must be implemented.
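As a minimal sketch of this kind of trending, the following Python snippet computes Shewhart-style 3-sigma control limits and the capability indices Cp and Cpk for a set of batch results. The assay values and specification limits are hypothetical, chosen only for illustration:

```python
import statistics

# Hypothetical assay results (%) for 12 consecutive batches
results = [99.1, 98.7, 99.4, 99.0, 98.8, 99.2,
           99.5, 98.9, 99.3, 99.0, 98.6, 99.2]
lsl, usl = 95.0, 105.0  # illustrative specification limits

mean = statistics.mean(results)
sd = statistics.stdev(results)  # sample standard deviation

# Control limits at +/- 3 standard deviations around the mean
ucl = mean + 3 * sd
lcl = mean - 3 * sd

# Process capability indices: Cp ignores centering, Cpk penalizes it
cp = (usl - lsl) / (6 * sd)
cpk = min(usl - mean, mean - lsl) / (3 * sd)

print(f"mean={mean:.2f}  UCL={ucl:.2f}  LCL={lcl:.2f}")
print(f"Cp={cp:.2f}  Cpk={cpk:.2f}")
```

In practice a Cpk below about 1.33 is commonly treated as a low capability index and would trigger the improvement plans described above; the data here are well within limits, so both indices come out high.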
Out-of-specification (OOS) regulatory issues are well understood and documented in the literature. However, the identification and investigation of out-of-trend (OOT) results in product stability, raw material (RM) and finished product (FP) data are less well understood, though rapidly gaining regulatory interest.
An OOT result in stability, RM or FP data is a result that may be within specification but does not follow the expected trend, either in comparison with historical data from other stability, RM or FP batches, respectively, or with respect to previous results collected during the same stability study.
The result is not necessarily OOS, but it does not look like a typical data point. Identifying OOT results is a complicated issue, and further research and discussion would be helpful.
Regulatory and Business Basis
A review of recent Establishment Inspection Reports (EIRs), FDA Form 483s, and FDA Warning Letters suggests that identifying OOT data is becoming a regulatory issue for marketed products. Several recent recipients of Form 483s have been asked to develop procedures documenting how OOT data will be identified and investigated.
It is important to distinguish between OOS and OOT results criteria. The FDA issued draft OOS guidance following the 1993 ruling in United States v. Barr Laboratories. Much has since been written and presented on the topic of OOS results.
Though FDA’s draft guidance indicates that much of the guidance presented for OOS can be used to examine OOT results, there is no clearly established legal or regulatory basis for requiring the consideration of data that is within specification but does not follow expected trends.
The ruling in United States v. Barr Laboratories stated that the history of the product must be considered when evaluating an analytical result and deciding on the disposition of the batch. It follows that trend analysis can help predict the likelihood of future OOS results.
Avoiding potential issues with marketed product, as well as potential regulatory issues, is a sufficient basis to apply OOT analysis as a best practice in the industry. Extrapolation from OOT findings should be limited and scientifically justified, just as the extrapolation of analytical data is limited in regulatory guidance (ICH, FDA). The identification of an OOT data point only notes that the observation is atypical.
This article discusses the possible statistical approaches and implementation challenges to the identification of OOT results. It is not a detailed proposal but is meant to start a dialogue on this topic, with the aim of achieving more clarity about how to address the identification of out-of-trend results.
This article focuses on OOT trends in finished products and raw materials only. A different approach is necessary to identify and control OOT results in stability data; it will be discussed in subsequent articles.
Differences Between OOS and OOT
Out-of-specification (OOS) analysis compares one result against a predetermined specification criterion, and an OOS investigation focuses on determining the truth about that one value. Out-of-trend (OOT) analysis compares many historical data values over time, and an OOT investigation focuses on understanding non-random changes. For example:
The specification limit for an impurity is not more than 0.10%:
Case 1: For a particular batch, the result obtained is 0.11%. This result is outside the specification limit and is called OOS. An investigation is required, including root cause analysis (RCA). Once a root cause is identified, corrective and preventive actions must be taken.
Case 2: The result obtained is 0.08%. Although the result is well within the specification, it should be compared with the trend of previous batches. If the trend average is 0.05%, then this batch result (0.08%) is out-of-trend; any result substantially greater than 0.05% would be considered atypical. A systematic root cause analysis is required, after which the fate of the batch can be decided. OOT results are handled on a case-by-case basis and require a thorough understanding and control of the process.
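The two cases above can be sketched as a simple pair of checks: the OOS check compares the new result against the fixed specification limit, while the OOT check compares it against the historical batch trend. The historical values below are hypothetical, and the 3-sigma rule is just one possible trend criterion:

```python
import statistics

# Hypothetical impurity results (%) from previous batches
history = [0.05, 0.04, 0.06, 0.05, 0.05, 0.04, 0.06, 0.05]
spec_limit = 0.10   # NMT 0.10% per the specification
new_result = 0.08   # Case 2 from the example above

mean = statistics.mean(history)   # trend average (0.05% here)
sd = statistics.stdev(history)

# OOS: one value against a predetermined specification criterion
oos = new_result > spec_limit

# OOT: the same value against the historical trend (3-sigma rule)
oot = abs(new_result - mean) > 3 * sd

print(f"OOS: {oos}, OOT: {oot}")  # prints: OOS: False, OOT: True
```

The 0.08% result passes the specification check but fails the trend check, which is exactly the within-spec-but-atypical situation an OOT procedure is meant to flag for investigation.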
We used the following tools to analyze data in this paper: