Reduce Calibration Costs and Improve Sensor Integrity through Redundancy and Statistical Analysis

Redundant sensors and statistical analysis can lead to reduced calibration cost, increased data integrity and reduced off-spec uncertainty.

By Dan Collins, Manager of DCS Solution Partner Program, Siemens Energy & Automation


There exists a market need, dictated by the FDA, to maintain the accuracy of sensors in a validated pharmaceutical process. Today, this is achieved by: 1) installing “certified” instruments; and 2) maintaining a costly routine calibration protocol.

FDA’s process analytical technology (PAT) initiative has opened the door to a fresh look at applying technology for productivity improvements in the pharmaceutical industry. The application of online, real-time analytical instruments was the first trend of the PAT initiative. This paper addresses another aspect of cGMP: data integrity. It takes a novel approach to maintaining data integrity through the use of redundancy and statistical analysis. The result is reduced calibration cost, increased data integrity and reduced off-spec uncertainty.

Today, pharmaceutical companies write elaborate calibration protocols that are consistent with (and sometimes go beyond) FDA cGMP guidelines to maintain the integrity of reported process values. This can result in extremely high compliance costs with minimal ROI in improved productivity or product quality. For example, one pharmaceutical site in New Jersey conducts about 2,900 calibrations per month. Of those, about 500 are demand maintenance, where the instrument has clearly failed as evidenced by a lack of signal or a digital diagnostic (catastrophic failures). The remaining 2,400 calibrations are scheduled per protocol. Of these, only about 400 find the instrument out of calibration; the majority, about 2,000 per month, find the instrument still working properly. Readers at other pharmaceutical manufacturing facilities can check work orders from their metrology department for the exact ratio at their facility, and might be surprised to find similar numbers.

This paper describes an alternate instrument scheme consisting of the use of redundant sensors and statistical analysis to avoid unnecessary calibrations and to detect sensors that are starting to drift before they go out of calibration.

The new approach is:

  1. To install two dissimilar instruments to sense the critical (cGMP) value
  2. To track their relative consistency via a statistical control chart
  3. Upon detection of the two values drifting apart, to determine which instrument is drifting based on the relative change in each instrument’s standard deviation (see the sketch after these lists)
  4. To use the process alarm management system to alarm the operator that
    a. the sensors are drifting apart
    b. most likely, the faulty instrument is the one with the changing standard deviation

If there are no alarms:

  1. Both instruments are tracking
  2. The operator and control programs can assume there is high data integrity
  3. There is no need for routine calibration.
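To make the approach concrete, the sketch below shows one minimal way the control-chart and standard-deviation logic could be implemented. It is written in Python for illustration only; the class name, the 100-sample window, the three-sigma limit and the drift-attribution rule are illustrative assumptions, not features of any particular DCS or vendor package.

```python
# Minimal sketch of the redundant-sensor scheme described above (illustrative
# names and thresholds; not a vendor API or a validated implementation).
from collections import deque
from statistics import mean, stdev


class RedundantSensorMonitor:
    """Watches two dissimilar sensors that measure the same cGMP value.

    The A - B difference is tracked against Shewhart-style control limits
    frozen from a known-good baseline period. When the pair drifts apart,
    the sensor whose own standard deviation has shifted the most from its
    baseline is flagged as the likely faulty instrument.
    """

    def __init__(self, window=100, sigma_limit=3.0):
        self.window = window
        self.sigma_limit = sigma_limit
        self.a = deque(maxlen=window)      # recent readings, sensor A
        self.b = deque(maxlen=window)      # recent readings, sensor B
        self.diff = deque(maxlen=window)   # recent A - B differences
        self.baseline = None               # (diff_mean, diff_sd, a_sd, b_sd)

    def establish_baseline(self):
        """Freeze control limits from a period when both sensors are known
        good, e.g. immediately after a documented calibration."""
        if len(self.diff) < self.window:
            raise ValueError("collect a full window of good data first")
        self.baseline = (mean(self.diff), stdev(self.diff),
                         stdev(self.a), stdev(self.b))

    def update(self, a_reading, b_reading):
        """Add one pair of readings; return an alarm dict or None."""
        self.a.append(a_reading)
        self.b.append(b_reading)
        self.diff.append(a_reading - b_reading)

        if self.baseline is None or len(self.diff) < self.window:
            return None

        diff_mean, diff_sd, a_sd0, b_sd0 = self.baseline
        current = a_reading - b_reading

        # Control-chart rule: is the difference outside mean +/- k*sigma?
        if abs(current - diff_mean) <= self.sigma_limit * diff_sd:
            return None   # sensors still tracking; data integrity is high

        # Attribute the drift to the sensor whose variability changed most.
        a_shift = abs(stdev(self.a) - a_sd0) / max(a_sd0, 1e-12)
        b_shift = abs(stdev(self.b) - b_sd0) / max(b_sd0, 1e-12)

        return {
            "message": "redundant sensors drifting apart",
            "suspect": "A" if a_shift > b_shift else "B",
            "difference": current,
        }
```

In use, update() would be called once per scan cycle with the two transmitters' values and establish_baseline() once after a documented calibration of both instruments; any returned alarm would then be routed through the plant's alarm management system as steps 4a and 4b describe.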

The economic justifications of this approach are:

  1. Hard savings: Cost of second instrument versus periodic calibrations (a back-of-the-envelope sketch follows this list)
  2. Soft savings: Cost of auditing product quality for everything that was affected by the failed instrument since its last calibration
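To make the hard-savings comparison concrete, a back-of-the-envelope calculation follows. Every cost figure in it is a placeholder assumption, not data from the article or the New Jersey site; substitute your own metrology and instrument costs.

```python
# Hypothetical hard-savings estimate for one measurement point.
# All dollar figures and frequencies are placeholders, not data from the article.
calibrations_avoided_per_year = 4       # scheduled calibrations eliminated
cost_per_calibration = 350.0            # assumed fully loaded cost, USD
second_instrument_installed = 2500.0    # assumed purchase + installation, USD
instrument_life_years = 10

annual_savings = calibrations_avoided_per_year * cost_per_calibration
annualized_instrument = second_instrument_installed / instrument_life_years

print(f"Annual calibration savings:   ${annual_savings:,.0f}")
print(f"Annualized second instrument: ${annualized_instrument:,.0f}")
print(f"Net hard savings per year:    ${annual_savings - annualized_instrument:,.0f}")
```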

Figure 1: There is a need and hidden cost to evaluate all product and performance that may have been affected by the undetected failure of a cGMP instrument.

In light of the high frequency and high cost of performing calibrations in a validated environment and the downside risk and cost of quality issues, the potential savings can be huge. Therefore, the life cycle cost can warrant the increased initial investment in a second instrument and the real-time statistical analysis of the instrument pair.

Calibration Basics

Let us begin by establishing a base level of understanding of instrumentation calibration.

Precise, dependable process values are vital to an optimum control scheme and, in some cases, they are mandated by compliance regulation. Precision starts with the selection and installation of the analog sensor, while the integrity of the reported process value is maintained by routine calibration throughout the life of the instrument.

When you specify a general-purpose instrument, it has a stated accuracy—for example, +/- 1% of actual reading. In the fine print, that means the vendor states the instrument's reading will be within 1% of reality 95% of the time (the certainty).

For example, if a speedometer indicates that you are traveling at 55 mph and the automobile manufacturer installed a +/- 1% speedometer, then you do not know exactly how fast you are going but there is a 95% probability that it is somewhere between 54.45 and 55.55 mph. See Figure 2.

Figure 2: Accuracy of a speedometer
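As a quick check of the speedometer numbers, the short snippet below reproduces the 54.45 to 55.55 mph band. The final step, converting the 95% certainty into a one-sigma error, assumes a normal error distribution, which the accuracy statement does not spell out.

```python
# Reproduce the speedometer example: +/- 1% of reading at 95% certainty.
reading = 55.0      # indicated speed, mph
accuracy = 0.01     # +/- 1% of actual reading

band = reading * accuracy
print(f"95% of the time the true speed is between "
      f"{reading - band:.2f} and {reading + band:.2f} mph")   # 54.45 .. 55.55

# Assuming a normal error distribution, 95% certainty ~ +/- 1.96 sigma,
# so the implied one-sigma error is about 0.51% of reading.
sigma = band / 1.96
print(f"Implied one-sigma error: ~{sigma:.2f} mph")
```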

There are two reasons why this situation is acceptable even though, roughly 5% of the time, the instrument is probably reporting a value that is more than 1% off:

  1. Cost/value tradeoff: The inaccuracy will not affect production or quality
  2. The next reading has a 95% chance of being within +/- 1% of reality, placing it within spec

If you need to improve the accuracy of the values, you can specify an instrument with a tighter accuracy rating, a higher stated certainty, or both.

Once installed, periodically re-calibrating the instrument, at intervals based on the drift specification provided by the instrument vendor, owner/operator philosophy or industry GMP guidelines, will assure the integrity of the value. Although periodic calibration is the conventional solution, the 2,400 scheduled calibrations referenced above present two economic hardships:

  1. The 2,000 calibrations that simply verify that the instruments are still operating within specifications are pure non-ROI cost
  2. The 400 that are out of “spec” create an even more troublesome problem. If the instrument’s process value is critical enough to be a validated instrument that requires periodic calibration, then what happens when it is discovered to be out of calibration? By protocol, must a review of all products that have been manufactured since the last known good calibration occur? Probably yes, because if the answer is “no” it begs the question as to why this instrument was considered a validated instrument. If the instrument is only slightly out of calibration but still within the product/process requirements, the review may be trivial. If it is seriously out of calibration, a comprehensive quality audit or product recall may be mandated by protocol.

Unavailability vs. Integrity
