Pharmaceutical manufacturing may be tougher than it has ever been. It seems that once one has tackled everything, integrated best practices, purchased state-of-the-art equipment, and hired consultants to wring every last bit of profit out of the process, reduced margins throw a monkey wrench into the equation: the company’s leadership demands staff cuts, then slashes the capital investment budget.
Finding the Money
Where can one look for better profitability? Plenty of people are looking in some strange places and doing some unusual things, and in pursuit of better margins, some are making colossal mistakes. One thing is certain: no one can keep doing the same thing over and over and expect different results (otherwise known as the definition of insanity). Trying one approach after another to lower costs, as so many do, is not an effective strategy. If that sounds familiar, it may be time to find new, more resourceful ways to get the job done, and to take more control over one’s destiny.
Pharmaceutical manufacturing and packaging have changed. Could it be time to make the adjustments one has been putting off? All this change brings opportunity, and opportunity carries the inherent risk of failure. But failure is instructive, and one can learn from the mistakes of others. In that spirit, what follows are three common mistakes pharma executives make when attempting to increase manufacturing productivity.
Thinking There Is an Accurate Picture of Downtime
Convincing people that they “don’t know what they don’t know” can be extremely difficult. Plant operations people, GMs, and VPs often tell me how much they’ve spent on state-of-the-art equipment, how well they’ve adopted Six Sigma, and how they’ve squeezed every last drop of cost out of their manufacturing process, yet they remain frustrated because their lines aren’t living up to expectations.
Most of the executives running plants that I’ve encountered claim they track downtime, document it, analyze it, and minimize it, and much of that is “hogwash.” It’s not that executive managers don’t try; it’s that the process by which downtime is measured and documented in most facilities is extremely inaccurate, and in many cases not uniformly defined or practiced from plant to plant, or even line to line.
A lot of bonuses, ratings, and pats on the back are tied to reporting good key performance indicators, and there is a deeply ingrained cultural bias against making downtime look too big or too bad. Over time, many plant operators have developed methods for measuring efficiency that omit the biggest losses. For instance, one can inflate one’s efficiency numbers by omitting the time spent on clean-up, changeover, start-up, preventive maintenance, material shortages, breaks, meetings, training, and so on. In essence, if one measures efficiency only while the lines are running successfully, one can report very good-looking numbers. Everybody gets their bonus, but the company loses, because this “look the other way” approach conceals the underlying problems that, once fixed, could kick efficiency into high gear.
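The inflation described above is easy to see with numbers. The following sketch uses entirely hypothetical figures (shift length, rated speed, and output are assumptions, not data from any plant) to show how omitting stopped time flatters the result:

```python
# Hypothetical numbers for illustration: a 480-minute shift where the line
# actually runs for only 300 minutes.
shift_minutes = 480          # total scheduled time
run_minutes = 300            # time the line was actually producing
# The other 180 minutes: changeover, clean-up, start-up, breaks, meetings...

rated_units_per_min = 100    # assumed rated line speed
good_units = 27_000          # assumed units produced during the run

# "Run-time only" efficiency: what gets reported when stoppages are omitted
reported_efficiency = good_units / (rated_units_per_min * run_minutes)

# True efficiency measured against the full scheduled shift
true_efficiency = good_units / (rated_units_per_min * shift_minutes)

print(f"reported: {reported_efficiency:.0%}")  # prints "reported: 90%"
print(f"true:     {true_efficiency:.0%}")      # prints "true:     56%"
```

Same line, same output; only the denominator changed. That gap between 90% and 56% is exactly the hidden improvement potential the article describes.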
Many plants that routinely report a line efficiency of 80-85% find, when they implement a more rigorous measurement standard such as Overall Equipment Effectiveness (OEE), that their true OEE is in the 50% range, or less. This can be a shocking discovery for middle managers, who may fear repercussions from above, so it’s critical that top management be involved in establishing a reward system based on accurate measurement of manufacturing productivity and foster a culture of improvement rather than a culture of reporting the highest number. A lower starting number simply means more potential for improvement. For example, if a line running at an OEE of 50% improves to 55% by developing rapid changeover methods, that five-point gain corresponds to a 10% increase in output.
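The arithmetic behind that last claim can be made explicit. OEE is conventionally defined as the product of availability, performance, and quality; the factor values below are assumptions chosen to land near the article's 50% figure, with rapid changeovers modeled as an availability gain:

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Conventional OEE definition: the product of the three loss factors."""
    return availability * performance * quality

# Assumed factor values giving roughly 50% OEE
before = oee(availability=0.70, performance=0.80, quality=0.90)  # ~0.504
# Faster changeovers raise availability; the other factors are unchanged
after = oee(availability=0.77, performance=0.80, quality=0.90)   # ~0.554

# A five-point absolute gain from ~50% is a ~10% relative increase in output
relative_gain = after / before - 1
print(f"{before:.1%} -> {after:.1%}, output up {relative_gain:.0%}")
```

The key point: because the starting OEE is low, a small absolute improvement translates into a double-digit relative gain in sellable output.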
Another common problem with human reporting is the under-reporting of downtime. A stoppage that a manager reports as taking five minutes to resolve may actually have taken 20. Which number do you think gets reported? And here’s something else to ponder: doesn’t it seem strange that all problems start at times like 10:10, 8:45, or 2:30, and are resolved in round numbers like 5, 10, or 45 minutes?
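One quick sanity check for this pattern (an illustration of my own, not a method from the article) is to look at what fraction of logged stop durations land on round five-minute values. The log entries below are hypothetical:

```python
# Hypothetical downtime log, durations in minutes
durations_min = [5, 10, 45, 10, 5, 30, 5, 10, 15, 20]

# Fraction of entries that are exact multiples of 5 minutes
round_share = sum(d % 5 == 0 for d in durations_min) / len(durations_min)
print(f"{round_share:.0%} of entries are multiples of 5 minutes")

# Genuinely measured stop times would rarely all land on round values;
# a share near 100% suggests durations were estimated after the fact.
```

A log where nearly every entry is a round number was almost certainly written from memory, not measured.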
A Revealing Phenomenon
Something very revealing happens when a line implements a system with fully automated recording of downtime incidents. What do you think happens? Under these circumstances it is common for recorded downtime incidents to increase ten-fold. Did the automatic reporting introduce problems? No! It simply reports every incident faithfully and precisely: no emotion, no fudging. For instance, a typical pharmaceutical packaging line may have 1,000 short-stop failures per week, averaging just 1-2 minutes in length, but each eats away at the line’s productivity. At first the numbers are terrifying, but in time you have so much more feedback about your line that you can see and correct a whole series of problems that had been hiding in the background. The conclusion? Systems that don’t automatically log downtime significantly under-report it, which makes it much harder to identify the real root causes.
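It is worth totaling up what those short stops cost. Using the figures from the example above (1,000 stops per week at 1-2 minutes each) and an assumed 24-hours-a-day, five-day schedule:

```python
# Figures from the example in the text
stops_per_week = 1_000
avg_stop_minutes = 1.5        # midpoint of the 1-2 minute range

lost_minutes = stops_per_week * avg_stop_minutes   # 1,500 minutes
lost_hours = lost_minutes / 60                     # 25 hours

# Assumed schedule: 24 hours a day, 5 days a week
weekly_hours = 24 * 5
print(f"{lost_hours:.0f} h lost per week ({lost_hours / weekly_hours:.0%})")
```

Roughly a full day of production per week disappears into stops too short for anyone to bother writing down, which is precisely why automated logging changes the picture so dramatically.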
Thinking the Wall Has Been Hit on Asset Utilization
A manager has struggled through every asset-utilization scheme he or she could find. That person has optimized, been consulted, and Six Sigma’d until blue in the face, and no matter what was tried, could not squeeze any more asset utilization out of the lines. Everything that can be done has been done, right? Research suggests probably not. Most lines have an entire “new” layer of asset-utilization growth hiding in plain sight. This layer lives in the following list; can you spot the areas that may need work?