Lazy or criminal?

Dec. 8, 2007
I had just submitted my piece on stability testing for Therapeutic Dose when a thought hit me: why do we routinely tolerate unknowns in our stability samples? According to ICH Q3B(R2), if a patient's daily dose of a drug is greater than one gram, 0.05% "unknown" material is allowed; in a dose of one gram or less, 0.10% is allowed.

My question is relatively simple: if we can detect, qualify, and quantify amounts in blood down to picogram levels during the clinical process, why do we have "unknowns" in the 1000 ppm (milligram) range? One would think that, in the seven to ten years it takes to get a product to market, someone would have tried GC/MS or LC/MS or some such technique to determine what these "unknowns" were.

The EPA sets limits by the sensitivity of available equipment (among other criteria); as sensors improve, companies are expected to identify trace materials. Why, then, is the pharmaceutical industry above (below?) that standard? Surely curiosity alone should lead someone to say, "Hey, why not see what that @!#$% peak is?"

Getting back to the question posed in the title: is it merely lazy, or is it criminal, not to require companies to find out what their products degrade into? After a company sells a product for years and performs thousands of stability-indicating assays, couldn't just one of them also be used to find out what we're taking along with the API? Just because the ICH doesn't require us to find out doesn't mean we shouldn't, does it?
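To put those thresholds in concrete terms, here is a back-of-envelope sketch using the percentages quoted above; the daily-dose values are illustrative, not from the ICH text. It shows how 0.10% of a one-gram daily dose works out to a full milligram of unidentified material, i.e., the 1000 ppm figure:

```python
# Back-of-envelope conversion of the quoted thresholds into absolute
# mass per daily dose. Dose values below are hypothetical examples.

def unknown_allowance_mg(daily_dose_g: float) -> float:
    """Return the allowed 'unknown' impurity mass (mg) for a daily dose."""
    # Per the text: 0.10% if daily dose <= 1 g, 0.05% if > 1 g
    threshold = 0.0010 if daily_dose_g <= 1.0 else 0.0005
    return daily_dose_g * 1000.0 * threshold  # grams -> mg, then apply %

for dose_g in (0.5, 1.0, 2.0):  # hypothetical daily doses
    mg = unknown_allowance_mg(dose_g)
    ppm = (mg / (dose_g * 1000.0)) * 1e6  # unknown as a fraction of dose
    print(f"{dose_g:.1f} g/day -> {mg:.2f} mg unknown allowed ({ppm:.0f} ppm)")
```

At or below a one-gram dose, the allowance sits at 1000 ppm of the dose; above one gram the percentage halves, but the absolute mass of unidentified material can still reach a milligram.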
About the Author

nirprof