A Process Capability Roadmap

The unexamined process cannot be improved, and process capability analysis offers a way to examine it. Quality systems specialist Scott Tarpley, a consultant with Light Pharma, outlines the steps and prerequisites involved.

By Scott Tarpley, Consultant, Light Pharma, Inc.

Socrates famously declared that the unexamined life is not worth living. The same holds true for manufacturing processes: the unexamined process may still make product, but it cannot be improved.

Over the last 70 years, riding the “continuous quality improvement” wave set in motion by Walter Shewhart and W. Edwards Deming in the 1920s, many different industries have worked hard to better understand their processes in order to improve them.

A critical tool for improving process understanding is “process capability analysis,” which has proven its worth in automotive, aerospace and semiconductor manufacturing. Used to gauge process performance, the technique is now being used outside of manufacturing, to analyze transactional and design processes.

Process capability analysis is also making its way into the pharmaceutical industry, where drug manufacturers and regulatory agencies are using it to characterize processes. This article introduces a roadmap for conducting a pharmaceutical process capability analysis, and explains some of the basic metrics involved.

“Process capability” can be defined as the comparison of the “Voice of the Customer” (VOC) with the “Voice of the Process” (VOP). VOC, based upon customer requirements, is defined by the specification limits of the process, which are fixed, while VOP is defined by control limits, which are based on performance data and vary over time.

As a process is improved, its variability or "spread" tightens and its output centers more closely between the specification limits; as this occurs, process capability increases dramatically. A metric, Cpk, was developed several decades ago to compute this comparison between control and specification limits. However, simply computing the Cpk won't, by itself, provide any value: several basic diagnostics must first be performed and analyzed to determine data quality.

Defining upstream specifications

Any process capability road map must start with well-defined specifications. Defining upstream specifications, though, can be challenging. Specifications developed during the drug development process are based on science and experimentation; release parameter specifications, for example, are generally proven through physico-chemical studies in the laboratory.

Upstream in the process, however, specifications often aren’t based on science. For example, process engineers involved in drug manufacturing are likely to have less confidence in the specifications set for key raw materials or in-process parameters than they do in the final test specifications designed to meet the consumer’s needs.

If a process capability analysis is to be meaningful, the team carrying out the project must determine how much confidence they have in specifications for upstream parameters. Confidence is critical, because process capability metrics assume that specifications are targeted properly with associated tolerances. Examples would be the Specific Surface Area (SSA) or Particle Size specifications of excipients such as lactose. Are they targeted correctly? Are analysts comfortable with the tolerances set around that target? Further, have correlations been proven between these parameters and downstream quality attributes such as assay, dissolution, or content uniformity?

Measuring the measurement system

The next step in the road map is a “Measurement Systems Analysis” (MSA), designed to assure that the system is capable of measuring the process. The measurement system does not mean the analytical devices alone, but the entire measurement process.

For instance, in a dissolution test, an MSA would not be limited to the analytical equipment in the lab used to perform the actual test. It would also consider other variables or contributing factors, including:

  • the pH of the solution in the vessel where the tablet or capsule is being tested
  • the length of time that the capsules have been absorbing moisture prior to testing
  • the lab technician conducting the test, including his or her skill and other factors influencing performance.
In scoping an MSA for this test, a team must ask, “Could these or other factors cause dissolution test results to vary?” If the answer is yes, the data produced will be questionable, so any process capability metrics that result will be worthless. Keep in mind that there is a major difference between being able to validate an analytical method, and automatically being able to produce the data required for process understanding.

To conduct an MSA, first analyze the variation that exists in the analytical process. Measurement system variation generally occurs because of variation:

  • between gauges
  • within measurements made by a single gauge
  • between the gauge and the material being analyzed
In general, if the measurement system is contributing much of the overall data variation, the process capability analysis should be placed on hold. Using questionable data will not only waste time, but may introduce even more variation by changing settings in the production process.

Determining causes of variability

Several techniques can help you determine measurement system variability. One key statistical tool, Gauge R&R (for “Repeatability and Reproducibility”), was developed several decades ago to analyze the factors contributing to measurement system variation. It is similar to Design of Experiment (DoE) techniques, except that it is focused on the measurement process. Like DoE, the Gauge R&R utilizes a series of trials using multiple runs and parts.

Like DoE, Gauge R&R typically includes multiple operators and replicates measurements for each unit. It can be challenging, but is still possible, to apply Gauge R&R in projects that also involve destructive testing.
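The variance decomposition at the heart of a Gauge R&R study can be sketched in a few lines. The following is a simplified illustration, not a full AIAG-style study: the operator names, parts, and readings are all hypothetical, repeatability is estimated as the average within-cell variance (same operator, same part), and reproducibility as the variance between operator averages on the same part.

```python
# Simplified Gauge R&R sketch: two operators measure the same parts
# several times; total measurement-system variation is split into
# repeatability (within operator-part cells) and reproducibility
# (between operators). All data below are hypothetical.
from statistics import mean, pvariance

# measurements[operator][part] -> list of replicate readings (assumed data)
measurements = {
    "op1": {"part1": [10.1, 10.2, 10.0], "part2": [12.0, 11.9, 12.1]},
    "op2": {"part1": [10.4, 10.5, 10.3], "part2": [12.3, 12.2, 12.4]},
}

# Repeatability: average variance within each operator-part cell
cells = [reps for parts in measurements.values() for reps in parts.values()]
repeatability_var = mean(pvariance(reps) for reps in cells)

# Reproducibility: variance between operator averages for the same part
parts = list(next(iter(measurements.values())).keys())
reproducibility_var = mean(
    pvariance([mean(measurements[op][p]) for op in measurements]) for p in parts
)

grr_var = repeatability_var + reproducibility_var
print(f"repeatability={repeatability_var:.4f}, "
      f"reproducibility={reproducibility_var:.4f}, total R&R={grr_var:.4f}")
```

In this toy data set the two operators read the same parts about 0.3 units apart, so reproducibility dominates the measurement-system variation, which is exactly the kind of finding an MSA is meant to surface.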

But Gauge R&R will only indicate variability in the measurement process. For example, consider two “human gauges” — inspectors performing a visual inspection of a production process. They may agree completely, so there would be no “between-gauge” variability in their “measurements.” However, their measurements may not be accurate. Accuracy can only be determined through a calibration study, which serves to “center” the measurement system.

Other issues should also be considered when conducting an MSA, such as deciding what to sample, how often and how much.

Best-in-class manufacturing operations, across industries, view MSA as an ongoing requirement, and incorporate it in their strategic overall plans, conducting MSAs regularly to probe the measurement process and further control it.

Once it has been proven that the measurement system can produce data reliably and consistently, it is necessary to determine whether the process is in statistical control. Generally, two types of statistical variation exist:

  • common cause
  • special cause.
Control charts, developed at Bell Laboratories in the 1920s, are still the best tools for filtering special-cause variations ("signals") from common-cause variations ("noise").

Separating common- from special-cause variations is especially important for the drug industry, which tends to treat all deviations as if they were special-cause events.

Generally, common cause variations:

  • have been a problem for a long time;
  • have not been resolved, although many different tools have been applied; and
  • elicit theories on causes and solutions from everyone in the plant — theories that cannot be proven.
Distinguishing between common- and special-cause variations is especially critical for pharmaceutical companies because many of the analytical tools used in the industry were designed to detect sources of special-cause variability, but cannot be used for common-cause problems.

SPC and control chart basics

For the last 80 years, the system of control charting — also known as Statistical Process Control (SPC) — has helped manufacturing operations assure that they are using the proper improvement strategy.

The strategy and tools used to respond to common-cause variations are radically different from those used to address special-cause variations. Treating a common cause of variation as if it were special actually creates more variability in the process and will never address the problem. This practice forces the analyst to ask an unanswerable question: “What is special about this data point?” In fact, that point, like all the other data points in the process, is really the result of the same underlying process.

To review the basic concepts involved, the elements of a control chart include a centerline, an upper control limit (UCL), and a lower control limit (LCL). The centerline is normally computed as the mean of the process data, and the control limits are computed as the mean plus and minus three standard deviations. Data points are then plotted in time order across the chart.
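These elements can be computed directly from process data. The sketch below uses Python's standard library and an assumed set of process readings; real SPC software would also choose a chart type based on sub-grouping, as discussed next.

```python
# Minimal control-chart computation: centerline at the process mean,
# control limits three standard deviations either side of it.
# The data below are assumed for illustration.
from statistics import mean, stdev

data = [99.8, 100.2, 100.1, 99.9, 100.0, 100.3, 99.7, 100.1]  # process data

centerline = mean(data)
s = stdev(data)               # sample standard deviation
ucl = centerline + 3 * s      # upper control limit
lcl = centerline - 3 * s      # lower control limit
print(f"CL={centerline:.3f}, UCL={ucl:.3f}, LCL={lcl:.3f}")
```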

Figure 1 (Figures 1-5 are in an attached .pdf file, which may be accessed by clicking the "Download Now" button at the end of this article) shows a case in which six data points are outside specification limits. In the traditional approach, each point outside of the specification limits is treated as a special cause. However, when control limits are taken into account, as shown in Figure 2, it is seen that the process is free of special-cause variation, and contains nothing but common-cause variation, or “noise”.

Different charts are used for different situations – typically depending on whether the data are sub-grouped. There are also many tests for special-cause variation. The three most commonly used tests flag a signal when:

  • any data point is outside three standard deviations from the process mean
  • seven consecutive data points are either increasing or decreasing (also called “trend”)
  • seven consecutive data points are on the same side of the process mean (also known as “shift”).
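The three tests above are simple enough to sketch in code. The function below is an illustration only, using the sample mean and standard deviation of the series itself as the reference; production SPC software applies these and additional rules against established control limits.

```python
# Sketch of the three common special-cause tests: a point beyond three
# standard deviations, a seven-point trend, and a seven-point shift.
from statistics import mean, stdev

def special_cause_signals(points):
    """Return (index, rule) pairs where a special-cause test fires."""
    m, s = mean(points), stdev(points)
    signals = []
    # Test 1: any point more than three standard deviations from the mean
    for i, x in enumerate(points):
        if abs(x - m) > 3 * s:
            signals.append((i, "beyond 3 sigma"))
    # Test 2: seven consecutive points strictly increasing or decreasing ("trend")
    for i in range(len(points) - 6):
        window = points[i:i + 7]
        diffs = [b - a for a, b in zip(window, window[1:])]
        if all(d > 0 for d in diffs) or all(d < 0 for d in diffs):
            signals.append((i, "trend"))
    # Test 3: seven consecutive points on the same side of the mean ("shift")
    for i in range(len(points) - 6):
        window = points[i:i + 7]
        if all(x > m for x in window) or all(x < m for x in window):
            signals.append((i, "shift"))
    return signals
```

For example, `special_cause_signals([1, 2, 3, 4, 5, 6, 7, 4, 4, 4])` flags a trend starting at the first point.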
This article couldn’t possibly address the various charts or tests required, but it is important to understand that statistics alone drive the process of separating signal and noise via control charts.

The VOP is computed directly from the process – unlike the “target limits” or “alarm limits” prevalent in so many pharmaceutical manufacturing operations, which often lack scientific or statistical basis. One pharmaceutical plant, for example, tracked four different sets of limits, for alarm, target, control and specification, on a single chart. Not surprisingly, the plant’s manufacturing team never referred to the chart.

Stability doesn’t ensure capability

A “stable” process is free of special-cause variation, and is a prerequisite for computing Cpk or any other process capability metric. Unstable processes are also unpredictable, so any probability computations involving them will be invalid.

The roots of special-cause deviations (or signals) should be investigated and understood before any attempt is made to compute a legitimate process capability metric. If the special cause can be explained, if its likelihood of recurring is extremely remote, or procedures are in place to prevent its occurrence in the future, then and only then should data for that special cause be deleted from the process data set.

However, it is important to remember that just because a process is stable does not guarantee its performance. Figure 3 shows a process that is stable but incapable of meeting specifications.

Figure 4 shows a case where a process is unstable, yet capable.

Once process stability has been proven, the final diagnostic before computing Cpk is to test the data for normality, to determine whether data follow the normal, or Gaussian, distribution. This Normality Test is simple and can be run with most commercial statistical software packages. Such testing is vital, however, because the Cpk and most other preferred process capability metrics assume the data are “bell shaped” and their probabilistic assumptions are based on the Normal Distribution.

If the data are skewed, the mean will be pulled toward the tail, and the resulting metric will be misleading. If the data are not normally distributed, they should be transformed using techniques such as Box-Cox, which most statistical packages can perform easily.
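A normality check followed by a Box-Cox transform takes only a few lines with SciPy (assumed available here; commercial packages such as Minitab offer the same tests). The data below are deliberately generated as skewed, log-normal values for illustration.

```python
# Sketch of a normality test and Box-Cox transform. A small Shapiro-Wilk
# p-value (e.g. below 0.05) suggests the data are not normally distributed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
data = rng.lognormal(mean=0.0, sigma=0.5, size=200)  # deliberately skewed data

_, p_raw = stats.shapiro(data)           # test the raw data for normality

# Box-Cox requires strictly positive data; it returns the transformed
# values and the fitted lambda (near 0 implies a log transform)
transformed, lam = stats.boxcox(data)
_, p_transformed = stats.shapiro(transformed)

print(f"p before={p_raw:.4f}, p after={p_transformed:.4f}, lambda={lam:.3f}")
```

After the transform, the Shapiro-Wilk p-value rises well above that of the raw data, indicating the transformed values are consistent with a normal distribution and suitable for Cpk computation.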

Once data pass the normality test, Cpk can be computed.

The Cpk is computed as the minimum value of the Cpk-U and Cpk-L, where the process capability is analyzed versus the Upper (Cpk-U) and Lower (Cpk-L) specifications in parallel. By taking the minimum of these two values for the overall Cpk of the process, a more stringent statistic is imposed that provides a penalty for lack of centering. The formulas involved are as follows:

Cpk-U = (USL – Mean)/3s
Cpk-L = (Mean – LSL)/3s

where s = standard deviation of the process data.

If the VOP is within the VOC, that is, if the control limits are within the specification limits, process capability is strong. If either control limit (UCL or LCL) falls outside the corresponding specification limit (USL or LSL), process capability is weak. Therefore, a Cpk less than 1 indicates poor process capability.

If Cpk-U = 1, then:

1 = (USL – Mean)/3s
USL – Mean = 3s
USL = Mean + 3s

The right-hand side, Mean + 3s, is also known as the Upper Control Limit (UCL). Therefore, when Cpk-U = 1, the UCL is located exactly at the USL; in effect, the VOP data fall precisely at the borderline of what the customer will accept. If Cpk-U is less than 1, the UCL lies outside the USL. And the higher the Cpk, the stronger the process capability.

The same analysis could be performed for the Cpk-L.
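Putting the formulas together, a Cpk computation is brief. The process data and specification limits below are assumed for illustration; in practice the data must already have passed the stability and normality diagnostics described above.

```python
# Minimal Cpk computation following the formulas above: capability is
# assessed against each specification limit, and the minimum is taken
# to penalize lack of centering. Data and limits are assumed.
from statistics import mean, stdev

data = [99.8, 100.2, 100.1, 99.9, 100.0, 100.3, 99.7, 100.1]  # process data
usl, lsl = 101.0, 99.0                                        # specification limits

m, s = mean(data), stdev(data)
cpk_u = (usl - m) / (3 * s)
cpk_l = (m - lsl) / (3 * s)
cpk = min(cpk_u, cpk_l)      # overall Cpk penalizes off-center processes
print(f"Cpk-U={cpk_u:.2f}, Cpk-L={cpk_l:.2f}, Cpk={cpk:.2f}")
```

Here the process mean sits slightly above center, so Cpk-U is the smaller of the two values and becomes the reported Cpk.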

In summary, determining process capability provides far more insight into any pharmaceutical process performance than simply computing the percentage of batches that pass or fail each year. Remember that high process capability guarantees a high percentage of passing batches, but a high percentage of passing batches cannot guarantee high process capability.

Process capability analysis is not the only technique available for improving process understanding. However, given FDA’s new science-based regulatory framework and the promise of “safe haven” for manufacturers that demonstrate process knowledge, the practice promises to become a more important tool for pharmaceutical manufacturing professionals in the future.

About the Author
J. Scott Tarpley is a consultant with Light Pharma Inc. He has worked in quality engineering and manufacturing for 15 years across multiple industries, and has vast experience in Six Sigma, Total Quality Management, and other quality initiatives. He has a B.S. in Management Science and an M.S. in Statistics, both from the Georgia Institute of Technology. He can be reached via email at scott.tarpley@lightpharma.com

Editor's Note: All figures referred to in this story are contained in a .pdf file which may be obtained by clicking on the "Download Now" button below.