In my last column, I spoke about measuring the correct parameters of raw materials, not just the traditional compendial tests. That observation carries over to all of the processing steps for solid dosage forms, too. We in the pharmaceutical industry are in a unique position, albeit not a flattering one.
Virtually all other industry process chemists/engineers/analysts are acutely aware of what is happening in the production stream of their respective companies. They intimately know the input variables, the chemical reactions taking place, the effect of catalysts, the side reactions, and how to affect product yield. Despite the complexity of any reaction, it is, after all, merely a chemical reaction. It follows rules we all learn, starting as sophomores in college.
Before my synthetic chemist friends go ballistic, allow me to explain. All chemical reactions follow well-known paths, even the synthesis of drugs, though we in pharma pretend that we are doing magic. Knowing what is in the pot at any time allows chemists to purchase analytical instrumentation that will provide control throughout the process, no matter how complex. That, in turn, leads to standardized instruments and a general lowering of prices for those toys.
After all, cracking of petroleum is the same for ExxonMobil, as for Hess, as for Getty, as for Shell, etc. When you work out the design for the reaction monitor, say for octane number, everybody buys the same one. Thus, organizations such as ASTM can give guidelines for the testing of materials such as petroleum.
Would that it were true for pharmaceutical solid dosage forms as well. The current testing (per my last column, "PAT in Perspective: Safe? Yes. Effective? Not So Much.") is chemical testing applied to a physical mixture of powders. We toss the powders into a blender and spin, shake, rattle, and roll them until we believe, or hope, that the result is a well-dispersed (as a physical chemist, I refuse to call a powder blend homogeneous) mass. We may wet and dry this mass to make magic nuggets suitable for tableting. We then press the blend into tablets or fill it into capsules and hope that we are within 10-15% of the label claim. Hmmm.
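To put that 10-15% window in concrete terms, here is a minimal sketch of a label-claim check. The assay values and the 90-110% acceptance window are invented for illustration; they are not compendial limits.

```python
# Minimal sketch: flag tablet assays falling outside a +/-10% label-claim window.
# All numbers here are illustrative, not compendial acceptance criteria.

LABEL_CLAIM_MG = 100.0          # declared API content per tablet (hypothetical)
LOWER, UPPER = 90.0, 110.0      # acceptance window, % of label claim

def percent_of_label(assay_mg: float, label_mg: float = LABEL_CLAIM_MG) -> float:
    """Express a measured assay as a percentage of the label claim."""
    return 100.0 * assay_mg / label_mg

def within_claim(assay_mg: float) -> bool:
    """True if the tablet's assay falls inside the acceptance window."""
    return LOWER <= percent_of_label(assay_mg) <= UPPER

assays = [98.2, 103.5, 88.7, 101.1]  # mg API found per tablet (made-up data)
for a in assays:
    print(f"{a:6.1f} mg -> {percent_of_label(a):5.1f}% of claim, pass={within_claim(a)}")
```

The point of the sketch is only how wide that window really is: a tablet can miss its declared content by a tenth and still "pass."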
Under the PAT initiative, also aptly named Quality by Design (QbD), we are encouraged to monitor critical parameters of a pharmaceutical process. The problem is, if we knew what the critical parameters were, we would already be monitoring them. Therein lies the dilemma: How can we find an appropriate monitor to measure what we don't know or aren't sure of? Let's quickly examine what we have in place at the moment, shall we?
To perform the analyses we currently run, we have a fairly thorough qualification procedure for lab instruments. We have all the needed documentation and procedures for installing the hardware and its associated software, cleverly named the Installation Qualification (IQ). This is largely based on the instrument manufacturer's specifications and is normally performed by its representative; it usually covers power requirements, environment, and the like. In many ways, this is similar to ISO 9000 requirements: Is the instrument we have compliant with the blueprints for that instrument? But does it run?
The Operational Qualification (OQ) is as it sounds: the instrument performs its functions as prescribed by the manufacturer. That is, it can scan, pump, blink, and chirp just as described in its manual. This testing is a cooperative effort between the sales engineer and the analyst. Virtually any instrument manufacturer worth its salt will have documents that carefully describe the instrument's functions and how to check each of them. These will include the calibration procedure for cGMP compliance. For spectrometers, wavelength and linearity standard materials are included (at a nominal fee, of course). Now, we have to prove that it works in our application.
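As one illustration of what an OQ-style photometric linearity check looks like in spirit (the reference and measured values below are invented, not any manufacturer's actual procedure), one can regress instrument readings against certified standard values and inspect the slope, intercept, and correlation:

```python
# Hedged sketch of a linearity check: fit measured absorbance against
# certified reference absorbance by ordinary least squares and report
# slope, intercept, and R^2. All values are made up for illustration.

def linear_fit(x, y):
    """OLS fit y = slope*x + intercept; returns (slope, intercept, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot
    return slope, intercept, r2

certified = [0.25, 0.50, 1.00, 1.50]       # reference-standard absorbances
measured  = [0.249, 0.502, 0.998, 1.503]   # instrument readings (invented)
slope, intercept, r2 = linear_fit(certified, measured)
print(f"slope={slope:.4f} intercept={intercept:.4f} R^2={r2:.5f}")
# A typical SOP would want the slope near 1, the intercept near 0, and R^2
# above some threshold before declaring the photometric response linear.
```

The exact acceptance limits would come from the written SOP, not from the code; the sketch only shows the mechanics of the check.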
The effectiveness for our analysis is called the Performance Qualification (PQ) and is heavily dependent on the user. It is, in short, a document demonstrating that the unit performs the given analysis per the written SOPs.
All these Qs were, of course, designed for laboratory instruments. When an analyst purchases an HPLC, for instance, he/she knows that the sample will be dissolved in a solution for injection. In this case, one size fits all is appropriate. The difficult part of the setup will be choosing the column and mobile phase, since the hardware is standard. This goes for spectroscopic instruments as well. Mid-range infrared samples are usually mulls, thin (liquid) films, or KBr disks. In the ultraviolet and visible ranges, solutions are made from the samples to be analyzed. Even near-infrared, in a lab setting, usually involves powder cups, dipping fiber optic probes, or tablet holders. All this is like shooting tame birds and requires little choice beyond which company gives good service in your area.
All this is fine as it stands, but one other Q seldom mentioned is the Design Qualification (DQ). The instruments to be used must be designed for some sample paradigm. For lab work, this is pretty straightforward and, for the vast majority of instruments manufactured, this portion falls squarely on the backs of the instrument companies. They attempt to listen to the market and design what will fill a void. Often, these instruments are for the same tests as have been performed in a lab for decades; the new ones are merely faster, more sensitive, easier to operate, more durable, and so on. The input of the consumer is usually just which cuvettes or detector (e.g., for HPLC) is required or, as analysts have newly discovered, fiber-optic probes.
This is all well and good for lab work, but now we are trying to do process analysis, and it's a whole new ballgame. For simply automating a lab assay for at- or on-line work, many reputable companies can easily supply a case-hardened instrument. If we are merely interested in the percent moisture or content uniformity of the API, dozens of instruments spring to mind, all of which will perform nicely.