By Bill Swichtenberg, Senior Editor
Without process analyzers, there would be no process analytical technology (PAT). It is taken for granted that these instruments perform analyses and report to a process control computer around the clock, 365 days a year. This occurs at a cost per measurement ranging from pennies to a dollar, depending on the process, with virtually no human intervention.
However, this doesn’t mean people are not involved, according to Walter Henslee, former scientist at Dow Chemical, speaking at the Life Cycle Management session at IFPAC. “The human element is the key to the process analyzer’s success.”
In the 1980s, these analyzers had a 33 percent failure rate and were treated as a white elephant or a piece of rework waiting to happen. These analyzers were deemed too unreliable for process control. While this is not the case anymore, particular attention must be given to the people working with these instruments at the plant.
According to Henslee, analyzer problems occur when there is a disconnect among the selection, utilization and maintenance teams that work with the instruments. In addition, he observed, the resources (people) available to companies aren't always identified or empowered to help.
After seeing these problems for years, Henslee suggests that companies need a cross-functional work process when utilizing analyzers. There also needs to be a trained, accountable owner of the analyzer, who knows the most cost-effective response when problems occur. “The simple mission statement for these processes is that the analyzers need to work and make money for the company.”
When the process is up and running, follow-up procedures must ensure that it is economically feasible at all times. “Economic success depends more on soft issues and the implementation than on technology,” said Henslee.
These practices might require a cultural change at the facility. Roles and titles are blurred for the greater good of the process. Process designers need to be present at startup and should train the maintenance personnel. Systems are debugged with the help of the supplier's field team.
“Maintenance and operations are critical to the analyzer’s well-being. You need confidence that it will remain running and make you money,” said Henslee.
Economics of Dependability
The cost of instrument reliability was explored by Jeff Miller of SoHaR Inc., Culver City, Calif. The question: how much should a company spend to enhance the reliability of the system?
Miller defined the cost of failure as the probability of failure multiplied by the cost of that failure. The more likely the failure, the higher the expected cost; it is also true that the cost of reliability rises as the process becomes more secure.
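Miller's expected-cost arithmetic can be sketched in a few lines. The failure probabilities, dollar figures and the `expected_failure_cost` helper below are illustrative assumptions, not numbers from the talk:

```python
# Expected cost of failure = probability of failure x cost of that failure.
# All figures below are illustrative assumptions, not from Miller's talk.

def expected_failure_cost(p_failure: float, cost_of_failure: float) -> float:
    """Expected (probability-weighted) cost of a failure event."""
    return p_failure * cost_of_failure

# Baseline analyzer: assumed 33% annual failure rate, $50,000 per failure.
baseline = expected_failure_cost(0.33, 50_000)

# Hardened analyzer: assumed reliability spend cuts the failure rate to 5%.
hardened = expected_failure_cost(0.05, 50_000)

reliability_spend = 10_000  # assumed annualized cost of the hardening work
net_saving = (baseline - hardened) - reliability_spend

print(f"baseline expected cost: ${baseline:,.0f}")  # $16,500
print(f"hardened expected cost: ${hardened:,.0f}")  # $2,500
print(f"net annual saving:      ${net_saving:,.0f}")  # $4,000
```

The comparison only works if the reliability spend and the expected savings are expressed over the same time period, which is the timing caveat Miller raises next.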
“You can add reliability upfront that postpones failure, but you have to keep in mind the time difference between the expenses and the maintenance savings,” said Miller.
There are many ways to make the process or instrument more reliable, but each comes with a cost. For example, you can make the process more robust, reduce stresses, incorporate shut-down provisions or add redundancy. “Why not just put multiple redundancies on the system? Because the lower the failure rate, the greater the cost. You don’t get the bang for your buck,” said Miller. The key is to use multiple techniques at the low end of each one's cost curve.
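Miller's point about diminishing returns from redundancy can be illustrated with a quick calculation. Assuming identical, independent units in parallel (the 10 percent unit failure rate and $20,000 unit cost are made-up values for illustration):

```python
# With n identical, independent units in parallel, the system fails only
# if all n units fail, so system failure probability is p**n, while the
# hardware cost grows roughly linearly with n. The unit failure rate and
# unit cost below are illustrative assumptions, not figures from the talk.

p_unit = 0.10       # assumed failure probability of one analyzer
unit_cost = 20_000  # assumed cost per redundant unit

for n in range(1, 5):
    p_system = p_unit ** n
    print(f"{n} unit(s): failure prob {p_system:.4%}, cost ${n * unit_cost:,}")
```

The first spare unit cuts the failure probability from 10 percent to 1 percent; the third and fourth shave off ever-smaller slivers of risk while each still adds the full unit cost, which is why piling on redundancy alone stops delivering "bang for your buck."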
PharmaManufacturing.com is the site for knowledge, news and analysis for manufacturing and other professionals working in the pharmaceutical, biopharmaceutical and biotech industries.