
Stop the Madness! Process Control Essentials for Effective Continual Improvement
by Lisa M. Walters, Ph.D., Principal Officer of Healthy Solutions Quality Consulting, LLC


It is a dark and stormy night. Your facility is readying for what promises to be a challenging third-party review. You, the Quality Manager, along with your organization's General Manager, are reviewing a variety of process-monitoring data reports to prepare for the upcoming "grilling." One such report provides information regarding response times to the central monitoring alarm system, specifically for the reagent storage refrigerator.

"Fire-fighting" leads to snap judgments which are almost always less than effective!

Your GM asks you why this response time is being monitored. You remind her that it was an issue in the past, and that you are continuing to monitor it to evaluate the effectiveness of the implemented corrective action. The GM continues to study the report. Looking over her shoulder, you note that the data are moving predictably: they do not exceed control limits, and there does not appear to be any violation of statistical rules. The mean response time is five minutes. You are pleased with this because, before your corrective action, the data were as wild as the Old West, with horribly unpredictable response times.

The GM, however, is not so delighted. She wants those response times shortened, and she wants the results now. Her directive: "Tell them to respond immediately, or prepare to be let go immediately." You explain that won't work, that those measures would not be sustainable. You explain that systemic action is required. You explain that she doesn't understand. She tells you: "Then make me understand." And so you give it a try, stressing the importance of prioritization, variation, appropriate corrective action, and capability—ensuring that you include all the information that now follows:

What's a Quality Manager to Do?

Our organizations have so many issues and yet we have such small amounts of time to deal with these issues in a meaningful way. As a result, we tend to attack whatever comes our way today. This idea is called recentivity, a term coined by Dr. Herbert Simon of Carnegie Mellon University. Recentivity leads to "fire-fighting." Fire-fighting leads to snap judgments which are almost always less than effective!

So much for the role of management as reflective planners!

You need a tool to help you prioritize issues for better management. Enter the Weighted Pareto Chart. Everyone loves Pareto. It's the pizza of quality tools. It's a simple tool that tracks the frequency of events, allowing you to compare which events contribute the most to your issues. By tackling the big contributors, you peel away, layer by layer, the onion of your problems, until problems become the exception and not the rule. But are all issues created equal? No! That's why it's a good idea to weight your issues in terms of their effects; instead of evaluating frequency alone, you can evaluate criticality as well. For example, excessive alarm response times might be weighted more heavily than a missed turnaround time on a routine glucose value. By multiplying the frequency of missed effective response times by a weighting factor, say 10, applied consistently to events of similar criticality, you minimize the possibility of a critical event being lost in the noise of less critical events. You pick the criticality scale; just apply it consistently. Now that you know what you want to tackle, what's next? How about process characterization to determine process variation?
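The weighting idea above can be sketched in a few lines of Python. The event names, frequencies, and weights here are hypothetical illustrations, not data from the article; the point is that multiplying frequency by a consistently applied criticality weight can push a rare-but-critical event to the front of the ranking.

```python
# Weighted Pareto sketch. Each event maps to (frequency, criticality weight).
# Values are illustrative only: note the alarm events are rarer than the
# glucose misses but outrank them once criticality is applied.
events = {
    "Excessive alarm response time": (6, 10),
    "Missed glucose turnaround time": (25, 1),
    "Label transcription error": (8, 5),
}

# Weighted score = frequency x criticality weight
weighted = {name: freq * wt for name, (freq, wt) in events.items()}
ranked = sorted(weighted.items(), key=lambda kv: kv[1], reverse=True)

# Cumulative percentages, as a Pareto chart would display them
total = sum(weighted.values())
cumulative = 0
for name, score in ranked:
    cumulative += score
    print(f"{name}: score={score}, cumulative={100 * cumulative / total:.0f}%")
```

Charted as bars in descending order with the cumulative line overlaid, this is the Weighted Pareto; the tallest bar is where you start.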

Examine one of those tall columns on the Pareto Chart. It compares excessive response times to alarms. Your challenge is to identify what is going on in that response process by defining the process with data. What data would you select? How about elapsed time from alarm initiation to alarm acknowledgment? That seems to make good sense. The data are readily retrievable from the monitoring system and, best of all, the data are not subjective! When would you sample? Hopefully, there aren't tons of alarms, so data for each shift for each alarm over a solid period of time, like six months, should be readily available. You must have enough data to have credible characterization, so try to evaluate at least 30 data points, understanding that the appropriate sample size will vary. The raw collection of these data can be made on a spreadsheet, from a printout, or even manually. The key is to collect the data in an organized way.
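Deriving the elapsed-time metric is simple arithmetic on timestamp pairs. A minimal sketch, assuming the monitoring system can export initiation and acknowledgment timestamps (the timestamps below are made up for illustration):

```python
# Sketch: compute elapsed response time in minutes from paired
# alarm-initiation and acknowledgment timestamps. The log entries
# here are hypothetical, standing in for a monitoring-system export.
from datetime import datetime

log = [
    ("2024-03-01 02:14:00", "2024-03-01 02:18:30"),
    ("2024-03-01 11:02:00", "2024-03-01 11:08:15"),
    ("2024-03-02 23:40:00", "2024-03-02 23:44:00"),
]

fmt = "%Y-%m-%d %H:%M:%S"
elapsed = [
    (datetime.strptime(ack, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60
    for start, ack in log
]
print(elapsed)  # minutes: [4.5, 6.25, 4.0]
```

Collect at least 30 such values, one list per shift if you want to compare shifts, and you have the raw material for the control chart discussed next.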

Charting a Course

You've established your problem process; data is at hand. Now what? Choose your chart!

Several charts are available for process control monitoring, with the selection of a chart dependent on the type of data collected. Attribute data are pass/fail data, while measurement data (variable data) are numerical in nature, like reading a value from a gauge or a printout. Attribute data are charted in terms of defects per unit or defective units, regardless of the number of defects on that defective unit. These charts also take into account variation in the sample size. They include charts like p, np, u, and c. The goal for these monitoring tools is to make the process predictable and then drive the fail data toward zero. Measurement data are charted using such tools as xmR charts (individuals and moving range), which is the tool you could use for the alarm response elapsed time monitoring. The goal of these charts is to make the process predictable, get it centered on a nominal value and then reduce variation in the process. Other charts are available, and any good statistical or quality-related text can provide more insight into chart selection, whether you are collecting attribute or measurement data.

So you've picked your chart and you've plotted your data. Now interpret the story it tells you.

Control Limits and Tolerance Limits

The story might be one of a process in control, with predictable variability, within tolerance limits, devoid of freaks and other patterns of upsetting behavior. Or it might be one of a process in control but beyond the tolerance limits, like a teenager who knows he's doing wrong and refuses to comply with your good sense, your reasonable tolerance limits. Or it could even be a story of a wild process, out of control, varying unpredictably, with each data point a drunken dart winging its way to the board. The story you see determines your action. But let's clarify at least two points so that you can better understand the plot. Let's distinguish between control limits and tolerance limits, as these two terms are bandied about with little regard to correctness, and we can't have that!

Control limits are those limits calculated from the actual process data. These are expressions of the "Voice of the Process." Tolerance limits are imposed by someone, whether that is your customer, your engineer or anyone else who has a stake in what the values should be. That's why tolerances are really an expression of the "Voice of the Customer." In a perfect world (and that's where we ALL live), the control limits will be within the tolerance limits. The problem really comes when the control limits exceed the tolerance limits. So it's imperative that you get your process centered on some sort of nominal value and then reduce the variation around that value so that the control limits are easily within the tolerance. Let's move on and look at each of those scenarios a bit closer.

Really, the first two situations require similar actions: Management Actions. In these situations, the process is predictable. When variation is predictable and inherent, it is termed common cause. To reduce common cause variation, large systemic actions are required, actions that wholly change at least one of the classic process inputs: human, method, equipment, materials, measurement systems, and/or environment. A simple example would be computerizing a form that used to be completed by hand; the computerized form would "force" responses before the next field could be attempted. A difference does exist between the first and second situations, and that difference is capability.

Capability is the process's ability to achieve a desired result. It is a calculated value that allows you to understand how often you can anticipate your process not meeting specification. Inherent in this definition is that the process must be predictable to calculate capability. Just as with human beings, you can't know how capable a person is unless he or she is consistent, right? The same is true of a process. The first situation appears to be capable, while the second does not. By calculating the capability indices, you'll understand better exactly how capable your process is. But don't forget! Capability is a statistic, and confidence intervals are required. So don't lay out a capability if you can't qualify it with confidence. Again, as with charting, any good statistics book can provide the specific calculations and applications of the capability indices.
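For a one-sided tolerance like a maximum response time, the usual index is Cpu = (USL − mean) / (3σ). A minimal sketch, assuming a hypothetical upper tolerance of 8 minutes and the same illustrative response times as before (neither value is from the article):

```python
# One-sided capability sketch: Cpu = (USL - mean) / (3 * sigma).
# Valid only for a predictable (in-control) process, and the point
# estimate should be reported with a confidence interval in practice.
import statistics

def cpu(data, usl):
    mean = statistics.mean(data)
    sigma = statistics.stdev(data)   # sample standard deviation
    return (usl - mean) / (3 * sigma)

# Hypothetical in-control response times (minutes) and tolerance
times = [4.5, 5.2, 4.8, 6.1, 5.0, 4.4, 5.6, 4.9, 5.3, 4.7]
index = cpu(times, usl=8.0)
```

A common rule of thumb treats an index of about 1.33 or better as capable, but as the text stresses, this is a statistic: qualify it with a confidence interval before declaring victory.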

The third situation presents an out-of-control process, which needs to be tamed! To do that, you must eliminate the freaks and other disturbing data patterns by employing some sort of root cause analysis technique and designing corrective action targeted at the established cause. In this case, corrective action will be more in terms of Local Action. For example, you might find a problem related to only one shift; the corrective action will then be targeted to that shift. Surely, you can look at the process inputs again, but the actions applied to those inputs will be specific in nature as opposed to generalized to the larger process. Capability is not applicable to this process, as it is not yet in statistical control.

And that concludes your lecture to the GM. You ask her what she's learned. She tells you that larger, management type actions are now necessary to improve the response times to alarms. She understands that essential keys to data-driven improvement are prioritization, variation, appropriate corrective action, and capability. And she offers you a much-deserved raise!

References

Crossley, Mark L. The Desk Reference of Statistical Quality Methods. Milwaukee, WI: ASQ Quality Press, 2000.

Duncan, Jack W. Great Ideas in Management. San Francisco, CA: Jossey-Bass Publishers, 1990.

Walters, Lisa M. Introducing the Big Q. Bethesda, MD: AABB Press, 2004.



Lisa M. Walters, Ph.D., is the Principal Officer of Healthy Solutions Quality Consulting, LLC (www.healthysolutionsqc.com). She is a certified medical technologist with a Specialty in Blood Banking from the American Society of Clinical Pathologists. She is also a qualified assessor for the AABB (formerly known as the American Association of Blood Banks) and has successfully completed training as an assessor for ISO 15189 as part of the American Association for Laboratory Accreditation (A2LA). As part of A2LA, she serves on the Medical and Technical Advisory Board, has developed draft guidelines on Uncertainty in Measurement for diagnostic laboratory testing, and has provided training to the assessor corps. She has helped organizations achieve API product certification and ISO registration, as well as FDA and AABB compliance.

She earned an MBA from St. Francis College in Loretto, Pennsylvania, and a Ph.D. in Management from California Coast University in Santa Ana, California. Her doctoral research was published by ASQ Quality Press, and she has additionally published Quality-related reference books with AABB Press. She is consistently invited to speak at local, regional, and national conferences.

She currently serves as Visiting Professor of Business Administration at the State University of New York at Fredonia, as well as consultant faculty for the Penn State University Continuing Education Department. Dr. Walters is also a Six Sigma Green Belt.

