There exists a market need, dictated by the FDA, to maintain the accuracy of sensors in a validated pharmaceutical process. Today, this is achieved by: 1) installing certified instruments; and 2) maintaining a costly routine calibration protocol.
The FDA's process analytical technology (PAT) initiative has opened the door to a fresh look at applying technology for productivity improvements in the pharmaceutical industry. The application of online, real-time analytical instruments was the first trend of the PAT initiative. This paper addresses another aspect of cGMP data integrity. It takes a novel approach to maintaining data integrity through the use of redundancy and statistical analysis. The result is reduced calibration cost, increased data integrity and reduced off-spec uncertainty.
Today, pharmaceutical companies write elaborate calibration protocols that are consistent (and sometimes overly compliant) with FDA cGMP guidelines to maintain the integrity of reported process values. This can result in extremely high compliance cost with only minimal ROI in improved productivity or product quality. For example, one pharmaceutical site in New Jersey conducts about 2,900 calibrations per month. Of those, about 500 are demand maintenance, where the instrument has clearly failed as evidenced by a lack of signal or a digital diagnostic (catastrophic failures). The remaining 2,400 calibrations are scheduled per protocol. Of these, only about 400 find the instrument out of calibration; the majority, about 2,000 calibrations per month, find the instrument still working properly. Readers at other pharmaceutical manufacturing facilities can check work orders from the metrology department to obtain the exact ratio for their facility, and might be surprised to find similar numbers.
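The workload figures above imply that the overwhelming majority of scheduled calibrations find nothing wrong. A quick calculation (using only the numbers cited in the text) makes the ratio explicit:

```python
# Breakdown of the monthly calibration workload at the New Jersey site
# cited above (all counts are from the text).
total = 2900            # calibrations per month
demand = 500            # demand maintenance on clearly failed instruments
scheduled = total - demand           # 2,400 routine, protocol-driven calibrations
out_of_cal = 400        # scheduled calibrations that find a drifted instrument
in_cal = scheduled - out_of_cal      # 2,000 calibrations that confirm "still OK"

print(f"Scheduled calibrations per month: {scheduled}")
print(f"Share that find nothing wrong: {in_cal / scheduled:.0%}")
```

Roughly five of every six scheduled calibrations simply confirm the instrument was fine.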
This paper describes an alternate instrument scheme consisting of the use of redundant sensors and statistical analysis to avoid unnecessary calibrations and to detect sensors that are starting to drift before they go out of calibration.
The new approach is:
- To install two dissimilar instruments to sense the critical (cGMP) value
- To track their relative consistency via a statistical control chart
- Upon detecting that the two values are drifting apart, to determine which instrument is at fault by comparing the relative change in each instrument's standard deviation.
- To use the process alarm management system to alarm the operator that
a. the sensors are drifting apart
b. most likely, the faulty instrument is the one with the changing standard deviation
If there are no alarms:
- Both instruments are tracking
- The operator and control programs can assume there is high data integrity
- There is no need for routine calibration.
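The monitoring scheme above can be sketched in a few lines. Note that the `RedundantPairMonitor` class, the 50-sample window, and the 3-sigma alarm limit are illustrative assumptions for this sketch, not prescriptions from the paper:

```python
import statistics
from collections import deque

class RedundantPairMonitor:
    """Sketch of the redundant-sensor scheme: chart the difference between
    two dissimilar instruments and, on an alarm, suspect the instrument
    whose own standard deviation has changed more."""

    def __init__(self, window=50, sigma_limit=3.0):
        self.sigma_limit = sigma_limit
        self.window = window
        self.diffs = deque(maxlen=window)   # A - B differences
        self.a = deque(maxlen=window)       # recent readings, instrument A
        self.b = deque(maxlen=window)       # recent readings, instrument B

    def update(self, reading_a, reading_b):
        self.a.append(reading_a)
        self.b.append(reading_b)
        self.diffs.append(reading_a - reading_b)
        if len(self.diffs) < self.window:
            return None                     # still building a baseline
        mean = statistics.mean(self.diffs)
        sd = statistics.stdev(self.diffs)
        latest = self.diffs[-1]
        if sd > 0 and abs(latest - mean) > self.sigma_limit * sd:
            # The pair is drifting apart; attribute the drift to the
            # instrument with the larger change in variability.
            suspect = "A" if statistics.stdev(self.a) > statistics.stdev(self.b) else "B"
            return f"ALARM: sensors drifting apart; suspect instrument {suspect}"
        return None

# quick demo: a stable pair for 50 samples, then instrument A jumps
monitor = RedundantPairMonitor(window=50)
alarm = None
for i in range(50):
    alarm = monitor.update(100 + (0.05 if i % 2 == 0 else -0.05), 100.0)
alarm = monitor.update(101.0, 100.0)   # A drifts away from B
```

In practice the alarm message would be routed through the plant's alarm management system rather than returned as a string, and the window and limit would be tuned to the process noise.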
The economic justifications of this approach are:
- Hard savings: Cost of second instrument versus periodic calibrations
- Soft savings: Cost of auditing product quality for everything that was affected by the failed instrument since its last calibration
Figure 1: There is a need and hidden cost to evaluate all product and performance that may have been affected by the undetected failure of a cGMP instrument.
In light of the high frequency and high cost of performing calibrations in a validated environment and the downside risk and cost of quality issues, the potential savings can be huge. Therefore, the life cycle cost can warrant the increased initial investment in a second instrument and the real-time statistical analysis of the instrument pair.
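As a rough sketch of the hard-savings comparison, consider the arithmetic below. All dollar figures and frequencies are invented for illustration; actual numbers vary widely by instrument type and site:

```python
# Hypothetical life-cycle comparison of the two approaches.
# All figures below are illustrative assumptions, not data from the text.
cal_cost = 300            # assumed fully loaded cost per scheduled calibration, USD
cals_per_year = 12        # assumed monthly calibration protocol
second_instrument = 5000  # assumed installed cost of the redundant sensor, USD
years = 5                 # assumed evaluation horizon

routine_cost = cal_cost * cals_per_year * years   # periodic-calibration path
redundant_cost = second_instrument                # one-time hardware cost

print(f"Routine calibration over {years} years: ${routine_cost}")
print(f"Redundant-sensor alternative:          ${redundant_cost}")
```

Even before counting the soft savings of avoided quality audits, the one-time hardware cost can compare favorably with years of routine calibration labor.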
Let us begin by establishing a base level of understanding of instrumentation calibration.
Precise, dependable process values are vital to an optimum control scheme and, in some cases, they are mandated by compliance regulation. Precision starts with the selection and installation of the analog sensor while the integrity of the reported process value is maintained by routine calibration throughout the life of the instrument.
When you specify a general purpose instrument, it has a stated accuracy, for example +/- 1% of actual reading. In the fine print, this means the vendor states that the reading of the instrument will be within 1% of reality 95% of the time (certainty).
For example, if a speedometer indicates that you are traveling at 55 mph and the automobile manufacturer installed a +/- 1% speedometer, then you do not know exactly how fast you are going but there is a 95% probability that it is somewhere between 54.45 and 55.55 mph. See Figure 2.
Figure 2: Accuracy of a speedometer
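The +/- 1% at 95% certainty statement can be read as a confidence interval. The sketch below computes the interval from Figure 2 and, assuming the error is roughly normally distributed (an assumption for illustration, with 95% corresponding to about 1.96 standard deviations), the implied standard deviation of the error:

```python
# Confidence-interval reading of the speedometer example above.
reading = 55.0      # indicated speed, mph
accuracy = 0.01     # stated accuracy: +/- 1% of reading

low = reading * (1 - accuracy)      # lower bound of the 95% interval
high = reading * (1 + accuracy)     # upper bound of the 95% interval
# Assuming a normal error distribution, 95% certainty ~ 1.96 sigma.
sigma = reading * accuracy / 1.96   # implied standard deviation, mph

print(f"95% interval: {low:.2f} to {high:.2f} mph")
```

This reproduces the 54.45 to 55.55 mph band quoted in the example.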
There are two reasons why it is acceptable that, 5% of the time, the instrument is probably reporting a value that is off by more than 1%:
- Cost / value tradeoff: The inaccuracy will not affect production or quality
- The next reading has a 95% chance of being +/- 1% of reality, therefore placing it within specs
If you need to improve the accuracy of the values, you can specify an instrument with a tighter stated accuracy, at additional cost.
Once installed, periodically re-calibrating the instrument, based on the drift specification provided by the instrument vendor, the owner/operator's philosophy, or industry cGMP guidelines, will assure the integrity of the value. Although periodic calibration is the conventional solution, the 2,400 scheduled calibrations referenced above present two economic hardships:
- The 2,000 calibrations that simply verify that the instruments are still operating within specifications are pure non-ROI cost
- The 400 that are out of spec create an even more troublesome problem. If the instrument's process value is critical enough to require a validated instrument with periodic calibration, then what happens when it is discovered to be out of calibration? By protocol, must all products manufactured since the last known-good calibration be reviewed? Probably yes, because if the answer is no, it begs the question of why this instrument was considered a validated instrument in the first place. If the instrument is only slightly out of calibration but still within the product/process requirements, the review may be trivial. If it is seriously out of calibration, a comprehensive quality audit or product recall may be mandated by protocol.
Unavailability vs. Integrity