PAT in Perspective: “One Size Fits All” Software May Not Fit PAT

April 27, 2006
Excel, with its rounding errors, is a prime example of software written without the input of “science-types.”

I recently returned from Pittcon 2006, where I was impressed by the variety of instrumentation and software being displayed. At one point, I stood in one of the larger vendor’s booths and listened to the cacophony of sounds. It occurred to me that that was what a LIMS, or laboratory information management system, “hears” as it receives data from all the newfangled monitors being employed for process analytical technology (PAT).

The software must ingest spectra, pH data and (possibly) acoustic energy readings, and it must make immediate sense of this tower of Babel. Eventually, we expect it to deliver instant feedback and feed-forward decisions to help us control disparate processes. Some forward-thinking process engineers dream that the process will be nearly self-sustaining: it will measure all the parameters, make decisions and adjust the blending, drying, compressing and coating of tablets.

While this may sound like a sci-fi dream come true, I hear echoes of “Open the pod bay doors, HAL.” And the melodious response: “I’m sorry, Dave. I’m afraid I can’t do that.” We may not be living in Stanley Kubrick’s 2001, but we are becoming more and more dependent on software. And more and more, programs are made in a “one size fits all” mode. They’re cheaper that way, but that’s where the problems start.

One of the most popular, and most powerful, programs on the market today is Microsoft’s Excel. We all use it, either directly or embedded in other programs. However, it has a glaring flaw: when a value ends in 5, it always rounds up.

Let’s return briefly to a kinder and gentler time: undergraduate analytical chemistry class, and the rules for rounding numbers ending in “5,” say, 0.2345. If we round it to three digits after the decimal point, does it become 0.235 or 0.234? The gut answer is to round up, but that is wrong. If every number ending in 5 is rounded up, the data acquire a systematic upward bias: in effect, we assume that every such value was really on the high side. That would be a “determinate” error, and it must be corrected.

In any measurement, the last digit is assumed to carry the greatest uncertainty (the most likely error). That means a number ending in 5 could just as easily have ended in 4 or 6, which implies a “random,” or Gaussian, error. To honor this principle, half of such numbers must be rounded up and half truncated. What is a logical way to do this? The general approach is to round down when the digit before the 5 is even and round up when it is odd. Thus, the value above becomes 0.234.
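This even/odd rule, often called “banker’s rounding” (round-half-even), is built into Python’s decimal module. A minimal sketch contrasting it with the round-everything-up behavior described above:

```python
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

value = Decimal("0.2345")

# Round-half-up: every trailing 5 is rounded up.
print(value.quantize(Decimal("0.001"), rounding=ROUND_HALF_UP))    # 0.235

# Round-half-even (banker's rounding): 4, the digit before the 5,
# is even, so the 5 is simply dropped.
print(value.quantize(Decimal("0.001"), rounding=ROUND_HALF_EVEN))  # 0.234
```

Note that the decimal string must be quoted; constructing Decimal from the float 0.2345 would carry binary floating-point error into the comparison.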

Why does it matter? I’ll use an example from my past. I was analyzing stability results for a marketed product, and the HPLC readings showed 0.05% of an unknown impurity. This was rounded up to 0.1% and the product was recalled. Had it been truncated properly (zero is an even digit), the product would have gone to its printed expiry date. I wonder how often this scenario has played out.
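The same two rounding modes reproduce both outcomes of that recall. A sketch, assuming the result was reported to one decimal place:

```python
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

impurity = Decimal("0.05")  # % unknown impurity from the HPLC readings

# Round-half-up: 0.05 -> 0.1, over the limit; product recalled.
print(impurity.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP))    # 0.1

# Round-half-even: the digit before the 5 is 0 (even), so the 5 is
# dropped: 0.05 -> 0.0, and the product goes to its expiry date.
print(impurity.quantize(Decimal("0.1"), rounding=ROUND_HALF_EVEN))  # 0.0
```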

Then, let’s consider significant figures. I have seen the following type of assay value list many times:

    Assay value        Significant figures
    101.5%             4
    99.6%              3
    100.9%             4
    102.3%             4
    97.6%              3
    98.5%              3
    Average: 100.07%   4+

What is wrong with this? I’ll give you a hint: the last column contains the number of significant figures in each value. You can figure out what should be done about it (hint: report the average to the lowest number of significant figures among the inputs).
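A quick sketch of the fix, using Python’s general (“g”) format to trim the average back to three significant figures, matching the least precise results in the list above:

```python
from statistics import mean

assays = [101.5, 99.6, 100.9, 102.3, 97.6, 98.5]  # % of label claim

avg = mean(assays)
print(f"{avg:.2f}%")  # 100.07% -- more figures than the data support

# Report to three significant figures, the lowest count among the inputs:
print(f"{avg:.3g}%")  # 100%
```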

What does all this have to do with software? The “one size fits all” type may not work for a number of applications. Most programs are written by software engineers, but I would submit that someone with a chemometrics or statistics background should be involved, as well as a science-type who recognizes what the data should look like and how they will be used. I have seen this happening with some third-party software vendors, but change will be slow unless we, as PAT people, demand that these “minor” errors not be present in our software.

Not to belabor the point, but when I was at Sandoz in the early 1980s, our first LIMS in QC was written by computer programmers. When I tried to integrate the area under a number of HPLC peaks, I got numbers on the order of 400-500% of theory.

When I expanded the chromatogram, I saw that the cursor wasn’t on the line. The programmers had made it free-floating rather than tracking along the chromatogram. They had never even seen an HPLC run, much less performed one themselves. When two women from our lab took over the computer system, the problems were solved. Did they know how to write software? Not at first. But they did understand the final application.

My point is that when doing a DQ (design qualification) for any instrument, remember that software is an instrument, too. It can take up to $1 million to validate, so it may as well work. Don’t depend on any software blindly. And, Mr. Gates, if you’re reading this, a scientific version of Excel would be a huge contribution to the world.

About the Author

Emil W. Ciurczak is Chief Technical Officer of the Cadrai Group. He has more than 35 years experience in pharmaceutical manufacturing, analytical R&D and regulatory compliance.
