Don’t Drift out of Control

Oct. 23, 2008
For sensors, data integrity is the measure of their performance.

There exists a regulatory need, dictated by the FDA, to maintain the accuracy of sensors in a validated pharmaceutical process. Today, this is achieved by: 1) installing “certified” instruments; and 2) maintaining a costly routine calibration protocol.

FDA’s process analytical technology (PAT) initiative has opened the door to a fresh look at applying technology for productivity improvements in the pharmaceutical industry. The application of online, real-time analytical instruments was the first trend of the PAT initiative.

This paper addresses another aspect of cGMP – data integrity. It takes a novel approach to maintaining data integrity through the use of redundancy and statistical analysis. The result is reduced calibration cost, increased data integrity and reduced off-spec uncertainty.

Today, pharmaceutical companies write elaborate calibration protocols that are consistent (and sometimes overly compliant) with FDA cGMP guidelines to maintain the reported process value integrity. This can result in extremely high cost for compliance with only a minimum ROI for improved productivity or product quality.

For example, one pharmaceutical site in New Jersey conducts about 2,900 calibrations per month. Of those, about 500 demand maintenance where the instrument has clearly failed as evidenced by a lack of signal or a digital diagnostic (catastrophic failures). The remaining 2,400 calibrations are scheduled per protocol. Of these, only about 400 calibrations find the instrument out of calibration. The majority, about 2,000 calibrations per month, find the instrument still working properly.

Professionals at other pharmaceutical manufacturing facilities can check orders from the metrology department and obtain the exact ratio for their facility, and might be surprised to find similar numbers. This paper describes an alternate instrument scheme consisting of the use of redundant sensors and statistical analysis to avoid unnecessary calibrations and to detect sensors that are starting to drift before they go out of calibration.

The new approach is:

  1. To install two dissimilar instruments to sense the critical (cGMP) value
  2. To track their relative consistency via a statistical control chart
  3. Upon detection of the two values drifting apart, to determine which instrument is drifting from the relative change in each instrument's standard deviation
  4. To use the process alarm management system to alert the operator that:
     a. the sensors are drifting apart
     b. most likely, the faulty instrument is the one with the changing standard deviation.
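Step 3 above can be sketched in a few lines of Python. This is a hypothetical illustration, not the paper's implementation: two simulated sensor streams are compared by the relative change in each one's standard deviation, and the instrument whose noise profile has shifted the most is flagged as the suspect.

```python
import random
import statistics

random.seed(42)  # arbitrary seed, for repeatability

# Hypothetical streams: Sensor A stays healthy; Sensor B's noise grows 50%
# over its newest 500 readings (standing in for a drifting instrument).
sensor_a = [100 + random.uniform(-0.5, 0.5) for _ in range(1000)]
sensor_b = ([100 + random.uniform(-0.5, 0.5) for _ in range(500)]
            + [100 + random.uniform(-0.75, 0.75) for _ in range(500)])

def sigma_ratio(values, window=250):
    """Newest-window standard deviation relative to the oldest window."""
    return statistics.stdev(values[-window:]) / statistics.stdev(values[:window])

ratio_a, ratio_b = sigma_ratio(sensor_a), sigma_ratio(sensor_b)

# The suspect is the instrument whose noise profile has shifted the most.
suspect = "A" if abs(ratio_a - 1) > abs(ratio_b - 1) else "B"
print(suspect)
```

Here Sensor B's sigma ratio lands near 1.5 while Sensor A's stays near 1.0, so B is flagged; a production system would apply a statistical test rather than a bare comparison.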

If there are no alarms:

  1. Both instruments are tracking
  2. The operator and control programs can assume there is high data integrity
  3. There is no need for routine calibration.

The economic justifications of this approach are:

  1. Hard savings: Cost of second instrument vs. periodic calibrations
  2. Soft savings: Cost of auditing product quality for everything that was affected by the failed instrument since its last calibration

In light of the high frequency and high cost of performing calibrations in a validated environment and the downside risk and cost of quality issues, the potential savings can be huge. Therefore, the life cycle cost can warrant the increased initial investment in a second instrument and the real-time statistical analysis of the instrument pair.

Calibration Basics

Let us begin by establishing a base level of understanding of instrumentation calibration. Precise, dependable process values are vital to an optimum control scheme and, in some cases, they are mandated by compliance regulation. Precision starts with the selection and installation of the analog sensor while the integrity of the reported process value is maintained by routine calibration throughout the life of the instrument.

When you specify a general purpose instrument, it has a stated accuracy—for example, +/- 1% of actual reading. In the fine print, the vendor states that the reading of the instrument will be within 1% of reality 95% of the time (95% certainty). For example, if a speedometer indicates that you are traveling at 55 mph and the automobile manufacturer installed a +/- 1% speedometer, then you do not know exactly how fast you are going, but there is a 95% probability that it is somewhere between 54.45 and 55.55 mph.

There are two reasons why this situation is acceptable even though, 5% of the time, the instrument is probably reporting a value that is more than 1% inaccurate:

  1. Cost / value tradeoff: the inaccuracy will not affect production or quality
  2. The next reading has a 95% chance of being +/- 1% of reality, therefore placing it within specs.

If you need to improve the accuracy of the values, you can specify:
  • A very accurate device—tighter tolerances, 1% to 0.1%, and/or
  • A higher quality device—higher manufacturing standards, 95% to 99% certainty, and/or
  • Wet calibration—to “certify” the calibration of the actual device meets manufacturing standards.

Once installed, periodic re-calibration of the instrument, at intervals based on the vendor's drift specification, the owner/operator's philosophy or GMP industry guidelines, assures the integrity of the value. Although periodic calibration is the conventional solution, the 2,400 scheduled calibrations referenced above present two economic hardships:

  1. The 2,000 calibrations that simply verify that the instruments are still operating within specifications are pure non-ROI cost
  2. The 400 that are out of “spec” create an even more troublesome problem.

If the instrument's process value is critical enough for it to be a validated instrument that requires periodic calibration, then what happens when it is discovered to be out of calibration? By protocol, must a review of all products that have been manufactured since the last known good calibration occur? Probably yes, because if the answer is "no," it raises the question of why this instrument was considered a validated instrument in the first place.

Unavailability vs. Integrity

We recognize the distinction between catastrophic failure resulting in unavailability and loss of integrity. A catastrophic failure is one in which the instrument does not perform at all, while a loss of integrity failure is one in which the instrument does not perform correctly. A catastrophic instrument failure is easy to detect because there is no signal coming from the instrument or, in the case of an intelligent device, there is an ominous diagnostic message. 

A catastrophic failure renders the instrument unavailable. A loss-of-integrity instrument failure is more esoteric because there is a signal but it is not accurate. Moreover, it is insidious because there is no inherent way to recognize that the reported value is inaccurate. Unlike most manmade devices, an instrument may not show symptoms of degradation before it fails catastrophically. If the value is so critical that process management cannot operate without it, then conventional wisdom suggests the installation of two instruments for increased availability.

However, the primary flaw in conventional wisdom is that a second instrument only increases availability and minimizes the probability of a catastrophic failure (“If one breaks, use the other” value). Unfortunately, the addition of a second instrument does not improve the data integrity. In fact, redundant instruments expose the issue of data integrity! What happens if the two instruments are not communicating the same value? How far apart can the values drift before remedial action is initiated? Do you trust one instrument and not the other while the remedial action is being performed? Which signal can be trusted? Availability does not address quality or performance. For sensors, data integrity is the measure of their performance.

So what should you do when the reported values from two instruments that are supposed to be sensing the same thing differ? If they are close enough, you can use either value, “A” or “B”, or both, “(A + B) / 2”. The question is what is “close enough”?

Some prefer a very liberal tolerance such as the sum of the stated accuracies of the two instruments—i.e., 1% + 1% = 2%. However, most prefer the root sum of squares (RSS) of the accuracies—that is, the square root of (1%² + 1%²) ≈ 1.4%.
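As a quick arithmetic check of the two tolerance conventions, using the 1% figures from the text:

```python
from math import sqrt

# Two instruments, each rated +/- 1% (the example from the text).
a, b = 1.0, 1.0

liberal = a + b              # sum of stated accuracies: 2.0%
rss = sqrt(a**2 + b**2)      # root sum of squares: ~1.414%

print(liberal, round(rss, 3))
```

The RSS limit is tighter because independent errors rarely line up in the same direction at full magnitude, so simply summing the accuracies overstates the plausible disagreement.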

If they are too far apart:

  • Use “A” or “B” or (A + B) / 2 anyway or,
  • Use the last value or,
  • Use the lower value or,
  • Use the higher value or,
  • Stop everything and call maintenance.

Breakthrough

Advancements in computer power and statistical analysis tools now make it possible to make a major breakthrough and solve the reliability problem. The solution is to:

  • Install two different types of instruments to measure the process variable
  • Analyze their signals to verify they are properly tracking each other
  • Alarm operations and maintenance when they begin to drift apart
  • Identify the suspected failing instrument.

The use of dissimilar instruments reduces the possibility of a common cause failure that could occur if identical instruments were damaged by a process anomaly and happened to fail in a similar fashion. The analysis provides a constant level of assurance that the process value has data integrity by continuously verifying that the two instruments are not drifting apart.

Since they are dissimilar instruments, it is highly unlikely that they would go out of calibration at the same time, in the same direction and at the same rate, so as to escape statistical recognition that they are drifting. If the two signals begin to drift apart, the analysis will alarm on the situation. Finally, the analysis engine will analyze the historical data and identify which signal is behaving differently. The assumption is that only one instrument will fail at a time.

Therefore, analysis of historical data will determine which instrument's performance is changing. If both are changing, then there has probably been a change in the process, and the entire process needs to be examined. This is a vital new insight that previous, single-sensor topologies were unable to provide.

It can dramatically:

Increase:

  • the overall integrity of the control system
  • the quality of production
  • the productivity of the metrology lab

Decrease:

  • the mean time to repair (MTTR) of failing sensors
  • the cost of remedial quality assurance reviews that are mandated when a critical (validated) sensor is discovered to have failed some time in the past.

The breakthrough is the fact that the statistical method will detect very slight shifts in the two instrument signals long before an instrument violates a hard alarm limit, such as the square root of the sum of the accuracies. This method of real-time calibrating and alarming on the slightest statistically significant deviation renders conventional periodic calibration practices obsolete.

We can eliminate the cost of unnecessary periodic calibrations, and the cost of remedial reviews triggered when a critical instrument is discovered out of calibration during one, by revising the calibration protocol to calibrate whenever (and only when) the statistical alarm indicates a deviation in values. This minimizes the impact on production and product quality.

Proof of Concept

The proof of concept is a series of alpha level tests. A 1,000-sample test data set was created in Microsoft Excel, using random numbers to simulate sensor noise and biased noise to simulate drift. The data was then analyzed in a statistical tool, Minitab. Sensor A values equal a random number (0 to 1.0) added to the base number 99.5. This simulates a field signal of 100 +/- 0.5, that is, a sensor reading of 100 +/- 0.5%. The random number simulates the noise or uncertainty of an instrument that is designed to provide a value that is +/- 0.5% accurate.

Sensor B is identically structured but uses a different set of random numbers. Results indicate that statistical control charts and F-test calculations identify drift and the suspect sensor much faster than conventional detection. Typically, in two-sensor configurations, the alarm limit to recognize two sensors are reading different values is set at the root mean square of the stated accuracies of the two instruments.
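The test-data construction is easy to reproduce. A short Python sketch, standing in for the paper's Excel worksheet (the seed is arbitrary, chosen only for repeatability):

```python
import random
import statistics

random.seed(0)  # arbitrary seed
N = 1000

# Each sensor reads a base of 99.5 plus a uniform random number in [0, 1),
# i.e. a signal of 100 +/- 0.5 (a reading of 100 +/- 0.5%).
sensor_a = [99.5 + random.random() for _ in range(N)]
sensor_b = [99.5 + random.random() for _ in range(N)]  # different randoms

# Sanity checks: every reading is in band, and both means sit near 100.
assert all(99.5 <= v < 100.5 for v in sensor_a + sensor_b)
print(round(statistics.mean(sensor_a), 1), round(statistics.mean(sensor_b), 1))
```

With no bias injected, the two streams differ only by independent noise, which is exactly the "in statistical control" condition the first test is meant to confirm.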

Preliminary research found that the statistical method is more sensitive. Drift can be detected in about one-third of the movement, thus giving operations and maintenance even more time to respond to a drifting instrument before it goes officially out of calibration and triggers the other remedial actions dictated by protocol to assure product quality. The first test evaluates the comparison of Sensor A and Sensor B. The first control chart (Figure 1) evaluates the absolute value of the differences between Sensor A and Sensor B, using 1,000 pairs of values in subgroups of size 25. This simulates capturing pairs of sensor readings at some convenient period—say, once a minute, an hour, a fortnight—and executing an Xbar control chart on the last 1,000 readings.

The control chart finds the sensors in statistical control. There are no alarms, as expected, because the simulated signal noise is the result of a random number generator and we have not yet injected any error/bias/instrument drift.

The second test evaluates the 1,000 pairs of data with bias added to Sensor B. Sensor B's oldest 500 values are identical to the first test; its most recent 500 values have been altered by adding 20% of the random number to the sensor's value. That is, Sensor B values for 1 to 500 are 99.5 + random B. Sensor B values from 501 to 1000 are 99.5 + random B + 0.2 * random B.

This simulates an instrument starting to become erratic. The second control chart (Figure 2) finds the absolute value of the differences between Sensor A and failing Sensor B out of statistical control. You will notice that there are alarms, as expected, because the simulated signal noise has been slightly amplified to simulate that Sensor B is becoming a little erratic and may be beginning to drift. The red numbers superimposed on the chart are the statistical alarms. The numbers represent these alarm criteria:

  1. One point more than 3 sigma from center line
  2. Nine points in a row on same side of center line
  3. Six points in a row, all increasing or all decreasing
  4. Fourteen points in a row, alternating up and down
  5. Two out of three points more than 2 sigma from center line (same side)
  6. Four out of five points more than 1 sigma from center line (same side)
  7. Fifteen points in a row within 1 sigma of center line (either side)
  8. Eight points in a row more than 1 sigma from center line (either side)

Notice also the sensitivity.
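A minimal Python sketch of this detection scheme, standing in for the Excel/Minitab workflow: only alarm criterion 1 (one point more than 3 sigma from the center line) is implemented, and the 50% noise-injection case discussed below (Figure 3) is used so the effect is unmistakable. The seed and limit-estimation details are assumptions for illustration.

```python
import random
import statistics

random.seed(1)  # arbitrary seed
N, SUBGROUP = 1000, 25

rand_b = [random.random() for _ in range(N)]
sensor_a = [99.5 + random.random() for _ in range(N)]
# Newest 500 Sensor B readings get 50% extra noise amplitude.
sensor_b = [99.5 + r if i < 500 else 99.5 + 1.5 * r
            for i, r in enumerate(rand_b)]

# Xbar points: mean absolute difference over each subgroup of 25 readings.
diffs = [abs(a - b) for a, b in zip(sensor_a, sensor_b)]
xbars = [statistics.mean(diffs[i:i + SUBGROUP]) for i in range(0, N, SUBGROUP)]

# Center line and sigma estimated from the oldest (in-control) subgroups.
baseline = xbars[:20]
center = statistics.mean(baseline)
sigma = statistics.stdev(baseline)

# Criterion 1: one point more than 3 sigma from the center line.
alarms = [i for i, x in enumerate(xbars) if abs(x - center) > 3 * sigma]
print(alarms)
```

Alarms appear among the later subgroups, after the extra noise is injected; a full implementation would also apply the run-based criteria (2 through 8), which catch subtler shifts than criterion 1 alone.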

With conventional dual-instrument configurations, the alarm limits are set at +/- the root sum of squares (RSS) of their stated accuracies [i.e., the square root of (0.5%² + 0.5%²) = 0.707%]. The mean difference is statistically about half that: in this data set, 0.338%. The Xbar/R chart detected drift when the mean difference between the Sensor A values and the Sensor B values drifted by a mere 0.023%. In other words, the control chart detected a change in the comparable performance of the two sensors when the noise of one instrument grew by only 20%.

Both instruments are still tracking one another, well within the conventional delta of 0.707%, but one is beginning to become erratic. It is now time to calibrate Sensor B. Meanwhile, the system has maintained its integrity.

Now we repeat the test, injecting error equal to 50% (i.e., we let the random number that simulates noise in Sensor B increase 50% in amplitude for the newest 500 instrument readings). Notice (Figure 3) that the mean difference between the two instrument readings is still only 0.4108, but clearly the control chart has correctly detected the increased erratic behavior.

The next control chart (Figure 4) simulates an instrument starting to lose sensitivity. This could happen if it has fouled, become sluggish or is about to go comatose. It charts the absolute value of the differences between Sensor A and failing Sensor B. Again, Sensor B's oldest 500 values are identical to the first test. Sensor B's most recent 500 values have been altered by subtracting 20% of the random number from the sensor's value. That is, Sensor B values for 1 to 500 are 99.5 + random B.

Sensor B values from 501 to 1000 are 99.5 + random B - 0.2 * random B. Notice that a drop in the responsiveness (randomness) of the signals by a mere 20% can be detected. You will notice that there is at least one alarm (as expected) because the simulated signal noise has diminished to simulate that Sensor B is losing sensitivity—becoming flatlined. Again, notice the sensitivity. The Xbar/R Chart detected drift when the mean difference between the Sensor A values and the Sensor B values closed by a mere 0.012%.

In other words, the control chart detected a change in the comparable performance of the two sensors when the noise of one instrument decreased by only 20%. Both instruments are still within calibration, but one is beginning to lose its integrity. If we eliminate 90% of the noise from sensor B, simulating a non-responsive, flatline, comatose instrument, we get what is seen in Figure 5. Again, the mean difference has not drifted too far apart but the statistical analysis has recognized the lack of responsiveness. It is now time to calibrate.

Sensor reliability and data integrity are assured as long as the pairs of sensors continue to pass the statistical analysis review. When they fail, an alarm generated by the real-time control chart can be triggered through the process control alarm management system. Thus, there is no need to perform periodic calibrations. However, when statistical analysis detects an emerging problem, operations must continue while the problem is remedied.

Further statistical analysis can also be used to identify the most likely cause of the alarm. With this information, operations can continue and the MTTR can be shortened.

Summary

In conclusion, the FDA has been encouraging the use of statistics and other technologies as part of the PAT initiative. This paper has demonstrated that we can dramatically improve the data integrity of the critical instrument signals through redundancy and statistical analysis. In fact, one could argue that if statistical calibration is done in real-time, then costly periodic manual calibration is no longer necessary. Significant improvements in data integrity will reduce the life cycle cost of the instrumentation and calibration functions and minimize the downside risk of poor quality or lost production caused by faulty instrumentation.


About the Author

Dan Collins is Manager of DCS Solution Partner Program for Siemens Energy & Automation, and has more than 30 years of industrial instrumentation and controls experience. Dan is a certified Six Sigma Black Belt, holds a B.B.A. in marketing from the University of Notre Dame, a B.A. in physics from Temple University, an M.E. in industrial engineering from Penn State University and an M.B.A. from Rutgers University.
