Recognizing the Role of Mistakes in Quality

Jan. 6, 2009
Our obsession with controlling variation has weakened our skills in mistake-proofing.

After my article on “Controlling Mistakes in Pharmaceutical Production” appeared last year, one reader wrote to politely disagree with the premise that mistakes are the major quality problem in pharma, pointing out that “significant variation is everywhere.” The reader’s statement is true, and it reflects the widely held view that variation is the enemy, but the claim deserves closer examination.

As the lead engineer on a major Department of Energy (DoE) project in the 1980s, I was responsible for a complex mechanical interface with the Department of Defense for which a single case of mismatch during assembly was unacceptable. Extensive analysis revealed that assuming a normal distribution of tolerances resulted in significant errors in defining control limits as traditionally used in statistical methods. This conclusion was substantiated by National Security Agency studies and by Variation Simulation Analysis, a company that provides software for Monte Carlo estimates of complex tolerance interactions. More importantly, research has not found a single case where variation estimates have been used to accurately predict system-level defect rates!
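
To make the distinction concrete, the sketch below (in Python, using entirely hypothetical part dimensions and tolerances, not data from the DoE project) compares a Monte Carlo estimate of assembly-level defects for a simple tolerance stack against the rate predicted by the traditional assumption that every part is normally distributed:

    import math
    import numpy as np

    rng = np.random.default_rng(0)
    N = 1_000_000  # simulated assemblies

    # Hypothetical four-part stack; the assembly gap must stay within +/- 0.30 mm.
    # Each part tolerance is +/- 0.10 mm, but the parts are not all normally
    # distributed -- real processes are often uniform, skewed, or truncated.
    tol = 0.10
    parts = [
        rng.uniform(-tol, tol, N),                # spread across the tolerance band
        rng.triangular(-tol, 0.8 * tol, tol, N),  # skewed toward the high side
        rng.normal(0.0, tol / 3.0, N),            # a well-behaved "3-sigma" process
        rng.uniform(-tol, tol, N),
    ]
    gap = sum(parts)

    limit = 0.30
    mc_ppm = np.mean(np.abs(gap) > limit) * 1e6

    # Traditional prediction: treat every part as normal with sigma = tol/3,
    # combine by root-sum-square, and read the tail area off the normal curve.
    sigma_rss = math.sqrt(4 * (tol / 3.0) ** 2)
    rss_ppm = math.erfc((limit / sigma_rss) / math.sqrt(2)) * 1e6

    print(f"Monte Carlo estimate:  {mc_ppm:.2f} ppm out of limits")
    print(f"Normal/RSS prediction: {rss_ppm:.2f} ppm out of limits")

The particular numbers are not the point; the point is that the two estimates can differ substantially once the normality assumption breaks down, and neither estimate says anything about mistakes.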

Working on my Ph.D. at Stanford, I received quality research data from Motorola, the Quantum Corporation, GE Aircraft Engines, Applied Materials, Ford Motor, Texas Instruments, and Sandia National Laboratories. Data was acquired for processes, products, production lines, and companies. Emphasis was placed on data taken over long, stable periods of production, so that startup and production instabilities would not influence conclusions. Remarkably, we were not able to demonstrate a single correlation between process capability (a measure of the control of variation) and process, product, company, or multi-company defect or non-conformance rates! The International Motor Vehicle Study conducted by MIT and Harvard also found large inconsistencies in quality control performance.
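
For readers less familiar with the terminology, process capability is usually summarized with indices such as Cp and Cpk, which compare a process's spread and centering to its specification limits; under a normality assumption they also imply a predicted non-conformance rate. The sketch below (hypothetical measurements and limits, purely illustrative and not taken from the study data) shows the kind of prediction that failed to correlate with the defect rates we observed:

    import math
    import numpy as np

    def capability_and_predicted_ppm(samples, lsl, usl):
        """Return Cp, Cpk, and the non-conformance rate (ppm) implied by a normal fit."""
        mu = np.mean(samples)
        sigma = np.std(samples, ddof=1)
        cp = (usl - lsl) / (6 * sigma)
        cpk = min(usl - mu, mu - lsl) / (3 * sigma)
        # Two-sided tail area of the fitted normal distribution, in parts per million.
        below = 0.5 * math.erfc((mu - lsl) / (sigma * math.sqrt(2)))
        above = 0.5 * math.erfc((usl - mu) / (sigma * math.sqrt(2)))
        return cp, cpk, (below + above) * 1e6

    # Hypothetical shaft diameters (mm) against specification limits of 9.95-10.05 mm.
    rng = np.random.default_rng(1)
    diameters = rng.normal(10.01, 0.012, 500)
    cp, cpk, ppm = capability_and_predicted_ppm(diameters, 9.95, 10.05)
    print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}, predicted non-conformance = {ppm:.0f} ppm")

A high capability index predicts a low non-conformance rate from variation alone; it is silent about missing operations, wrong materials, or mislabeled product.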

Whether we were examining production problems or reviewing each company's data, the majority of issues were consistently traceable to mistakes. This was true even of a study done by the DoE in the 1960s, before Six Sigma, when the control of variation was not as good as it is today. More recently, Sandia examined 23,000 production defects and traced 80% of them to mistakes.
 
GE Aircraft Engines has about 24,000 engines in the field, according to our estimates. After we worked with GE, they began to pay particular attention to the causes of non-conformances (out-of-tolerance parts or processes). It is difficult to imagine a product more sensitive to variation than an aircraft engine, with roughly 10,000 tightly toleranced parts in each assembly, each with many dimensions. GE Aircraft Engines concluded that 50% to 70% of the non-conformances leaving the factory were attributable to mistakes. More strikingly, in a multi-year study of every response to customer requests for assistance other than scheduled maintenance, GE traced every problem but one to a mistake!

Even though variation is everywhere and a substantial source of non-conforming product, the overwhelming evidence shows that mistakes are the dominant problem for customers. In pharmaceutical products, minor variations in dosage, chemical ratios, or processing temperatures are not the key problems, even though they cannot be ignored and must be controlled. Rather, failing to fill a capsule, getting the wrong material in a capsule, setting the wrong process temperature, mislabeling a product, or allowing foreign material into the product are the major quality issues and customer concerns. Note that each of these problems is a rare, random event that cannot be described in terms of a distribution or variation, and such events exceed in frequency, magnitude, and consequence the outcomes predicted by the best variation models. More importantly, variation tools are completely ineffective in either predicting or controlling mistakes, pointing to the essential role of mistake-proofing and visual tools in quality control.
 
To understand why variation is such a strong traditional focus, we need to examine the historical evolution of quality control. Perhaps the single most important contribution to quality was the deployment of calibrated gauges in early Ford production. These factory-calibrated gauges made it possible to observe and characterize part-to-part differences for the first time, eventually leading to the development of statistical process control. In those early observations, variation was the dominant quality problem, causing most defects and customer complaints.

At a time when defect-rate goals were roughly 2%, Juran characterized “Red X” events, or special causes, as approximately 7% of quality problems. This suggests that special causes alone have long accounted for defects in the range of 1,400 parts per million, which is remarkably consistent with established human error rates.
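
(The arithmetic is straightforward: 7% of a 2% defect rate is 0.07 × 0.02 = 0.0014, or about 1,400 defects per million parts.)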

While error rates have remained relatively constant, the control of variation has improved dramatically. Thus, what used to be a small fraction of the total quality problem has emerged as the dominant barrier to quality improvement today. Most importantly, improving control of variation does not contribute anything to controlling special causes. As a result, once basic levels of controlling variation have been reached, greater investment in controlling variation has very little impact on quality performance. Interestingly, companies like Toyota that consistently maintain defect rates below 30 parts per million have the goal of eliminating statistical process control from the factory floor. They achieve higher quality with dramatically less effort and investment.

Unfortunately, we have focused almost exclusively on the control of variation, and our skills in mistake-proofing are relatively weak. Novices at mistake-proofing often have difficulty developing good concepts because the subject is not emphasized in our training and there are very few examples in our work environments. There are great opportunities for improving quality, but the way we think about the problems must change if we are to take the next major step forward.

About the Author

C. Martin Hinckley, Ph.D.