May 01, 2013
The tension between after-the-fact inspection and defect prevention has been at the heart of manufacturing since the introduction of statistical methods to industry in the 1940s. Today, to clearly divide responsibilities within an organization, and to tie these very different activities to existing roles, consider a division by impact on the patient. Activities which monitor a process in real time to prevent defects while a lot is being manufactured are known as Statistical Process Control (SPC).
In contrast, activities which occur after manufacture to keep defects from reaching a patient by additional inspection are Statistical Quality Control (SQC). The difference is one of strategy. From the patient's perspective, SPC's feedback during manufacture prevents risk, while SQC's feed-forward guards against catastrophic failure. Both are necessary in an industry of low-volume, high-cost, high-risk goods.
Feedback control is traditionally an engineering term for the mechanism by which a process is adjusted in real time to maintain a consistent product. For the most common type of control, we need look no further than the furnace in our homes. Imagine that on a particularly cold fall morning you wake up and turn on your furnace. The sensor reads 66F / 19C.
You set the temperature to a comfortable 73F / 23C. The controller decides you need more heat and opens the gas valve to allow more fuel into the furnace. This continues until the sensor reads 73F / 23C in the room, at which point the controller regulates the flow of gas to maintain the temperature at target. In its entirety, this is a feedback control structure. It minimizes waste and can adjust to changes. You are kept warm in real time with the smallest amount of discomfort.
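The thermostat loop above can be sketched in a few lines of code. This is a minimal illustration, not a real furnace controller: the proportional gain, heating rate, and heat-loss constants are all made up for the example.

```python
# Minimal sketch of the feedback loop described above: a proportional
# controller adjusts a hypothetical gas-valve opening so that room
# temperature converges on the setpoint. All constants are illustrative.

def simulate_thermostat(setpoint_c=23.0, start_c=19.0, steps=60):
    temp = start_c
    gain = 0.5          # valve response per degree of error (assumed)
    heat_rate = 0.8     # degrees gained per step at full valve opening
    loss_rate = 0.05    # passive heat loss per step, per degree above start
    for _ in range(steps):
        error = setpoint_c - temp                  # sensor feedback
        valve = max(0.0, min(1.0, gain * error))   # clamp valve to [0, 1]
        temp += valve * heat_rate - loss_rate * (temp - start_c)
        yield temp

final = list(simulate_thermostat())[-1]   # settles near the 23C setpoint
```

The essential feature is that the controller only ever looks at the measured output (the sensor) and the target; it needs no model of the room, which is what makes feedback control robust to disturbances.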
Such a control mechanism, one that acts upon the inputs of a process to maintain a consistent product output, shares the same logic as Shewhart's control charts. Their purpose is to detect a change from the typical process and act to counter it. As such, it is traditionally the responsibility of manufacturing. It is the means by which specific statistical techniques are used to monitor, control, and even improve the manufacturing process. It has the advantage of impacting product quality as product is being made.
Feedback control monitors some key indicator or quality attribute, detects a change, acts to counter it, and maintains a consistent process average and range. As such, it is often a messy affair: in-process check samples come filled with such oddities as false alarms, process adjustments, and even processes that never come into a state of statistical control. While it is possible to combine monitoring of the process with quality oversight, as with parametric release in PAT, the challenge of maintaining a monitoring system sensitive enough to detect a change in a process yet robust to false alarms more commonly falls to manufacturing. Manufacturing's responsibility is to maintain a process that performs better than the specifications.
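A Shewhart-style individuals chart makes this concrete. The sketch below uses the standard I-MR construction (control limits from the average moving range, with the d2 = 1.128 constant for subgroups of two); the fill-weight data are invented for illustration.

```python
# Sketch of Shewhart SPC monitoring: an individuals chart that flags
# points beyond the 3-sigma control limits. Data are made-up fill weights.
from statistics import mean

def control_limits(data):
    """Return (center, lcl, ucl) for an individuals (I-MR) chart."""
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    sigma_est = mean(moving_ranges) / 1.128   # d2 constant for n=2
    center = mean(data)
    return center, center - 3 * sigma_est, center + 3 * sigma_est

def out_of_control(data):
    """Indices of points outside the control limits (a signal to act)."""
    center, lcl, ucl = control_limits(data)
    return [i for i, x in enumerate(data) if x < lcl or x > ucl]

weights = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9, 12.5, 10.0]
signals = out_of_control(weights)   # the 12.5 excursion is flagged
```

Note that the limits are computed from the process's own variation, not from the specifications: that is precisely the distinction between control limits (SPC) and specification limits (SQC) drawn in this article.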
In contrast, Feed Forward control—also a traditional engineering term—refers to the adjustment of process inputs based solely on information available prior to the process beginning. The keys to feed forward control are measuring a disturbance variable rather than the process output and having a suitable model of the process. Although some process control problems are best solved with feed forward control, the ability to measure a disturbance and then correctly compensate before running the process is a greater challenge in practice on the manufacturing floor. A synonym for this control strategy is “ballistic control.”
As such, you might imagine a simple rocket: information such as distance to target, weight of payload, wind speed, and wind direction are all put into a model of the launch. Once a firing solution is found, the button is pressed and the rocket is sent on its way. Regardless of whether the model was good or the measurements accurate, once the green light is given there is no further ability to affect the outcome of the process. The analogues in pharmaceutical manufacture are unit operations such as lyophilization, where control is exercised through manufacturing instructions given in batch records, only finished-product samples are taken, and actions are based upon the results of this acceptance sampling. This is best summed up as Statistical Quality Control (SQC). These roles and responsibilities traditionally reside in Quality Assurance and Regulatory.
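The acceptance-sampling step at the end of that chain can be sketched as a single-sampling plan: draw n units from the finished lot and accept if the number of defectives is at or below an acceptance number c. The plan parameters (n = 20, c = 2) and the lot composition below are illustrative, not drawn from any specific standard.

```python
# Sketch of after-the-fact acceptance sampling (SQC): a single-sampling
# plan applied to a finished lot. Plan parameters are illustrative.
import random

def accept_lot(sample_defects, c=2):
    """Accept the lot if defectives found in the sample are <= c."""
    return sample_defects <= c

def inspect(lot, n=20, seed=0):
    """Count defectives in a random sample of n units from the lot."""
    rng = random.Random(seed)
    sample = rng.sample(lot, n)
    return sum(1 for unit in sample if unit == "defective")

lot = ["good"] * 990 + ["defective"] * 10   # a 1%-defective finished lot
defects = inspect(lot)
decision = accept_lot(defects)
```

The feed-forward character is visible in the structure: nothing in this code can change the lot. The decision only sorts lots into accepted and rejected after the process has finished, which is why SQC guards against escape rather than preventing defects.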
Although many of the SQC tools are shared with SPC (e.g., ASTM E2281-03 "Standard Practice for Process and Measurement Capability Indices," ASTM E2500-07 "Standard Guide for Specification, Design, and Verification of Pharmaceutical and Biopharmaceutical Manufacturing Systems and Equipment," and ASTM E2709-09 "Standard Practice for Demonstrating Capability to Comply with a Lot Acceptance Procedure"), these are distinctly different activities. Statistical Quality Control does not necessarily use control limits on control charts; results may instead be trended over time or collected into a picture of process capability. Given the large data sets necessary (i.e., 100 to 200 data points) to obtain a meaningful confidence interval on a capability index (e.g., Cp, Cpk), this long-term perspective can be differentiated from manufacturing's immediate focus.
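That long-term capability view can be sketched as follows. The Cp and Cpk formulas are standard; the lower confidence bound uses Bissell's normal approximation, and the specification limits and toy data set (120 points) are invented to show why sample size matters.

```python
# Sketch of the long-term SQC capability view: Cp and Cpk from a large
# sample, plus an approximate 95% lower confidence bound on Cpk
# (Bissell's approximation). Data and spec limits are illustrative.
from statistics import mean, stdev
from math import sqrt

def cp_cpk(data, lsl, usl):
    """Capability indices against lower/upper specification limits."""
    mu, sigma = mean(data), stdev(data)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

def cpk_lower_bound(cpk, n, z=1.645):
    """Approximate one-sided 95% lower bound on Cpk for sample size n."""
    return cpk - z * sqrt(cpk**2 / (2 * (n - 1)) + 1 / (9 * n))

# 120 deterministic toy observations spread around 10.0
data = [10.0 + 0.1 * ((i * 7) % 11 - 5) for i in range(120)]
cp, cpk = cp_cpk(data, lsl=9.0, usl=11.0)
lb = cpk_lower_bound(cpk, len(data))   # the bound widens as n shrinks
```

Re-running `cpk_lower_bound` with n = 30 instead of 120 visibly drops the bound, which is the article's point: a defensible capability claim needs the 100-to-200-point data sets that only a long-term, after-the-fact perspective accumulates.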
Unfortunately, confusion persists between SPC and SQC. The problem may be one of perspective and common purpose: which operations count as "upstream" and which as "downstream" depends on where one sits within an organization. Certainly one may consider the business processes and activities of departments outside of manufacturing as "feeding back" into their respective systems. For a common point of reference, however, the most sensible dividing line is the quality of the lot being manufactured at the time of manufacture versus activities after the fact. This is the same point at which the underlying statistics differ, and it puts the focus upon the most important part of manufacturing: the process that is running right now. Understanding the differences, we now have two tools for two different roles with a shared purpose, consistent with the 2011 Process Validation Guidance.
Jason J. Orloff, Ch.E. & M.S. Applied Statistics, is a Principal Statistical Consultant at PharmStat and a proud colleague of Lynn Torbeck of Torbeck and Associates, both located in Evanston, IL. He is an international consultant specializing in applied statistics and experimental design for pharmaceutical and biopharmaceutical development, quality assurance, quality control, validation, and production under the cGxPs. Current activities include authorship of the "Process Performance and Product Quality Monitoring" chapter of ISPE's Baseline Guide for Q10, contributing authorship of PDA's Technical Report 59 on "Utilization of Statistical Methods for Production and Business Processes," and publications in the Journal of Pharmaceutical Technology. Mr. Orloff brings over ten years of experience in manufacturing, quality, and regulatory affairs in the pharmaceutical industry. Areas of expertise include PAT, OOS, SQC, SPC, assay validation, and setting specification criteria. A chemical engineer with real-life expertise in applying statistics in a highly regulated environment, Mr. Orloff is able to work effectively across all levels of an organization as well as make high-level concepts accessible to a variety of audiences. Mr. Orloff has worked with a wide variety of companies including pharmaceuticals, parenterals, biotechnology, fine chemicals, medical devices, food, and nanotechnology. He holds a BS in Chemical Engineering from UW-Madison and an MS in Applied Statistics from DePaul University. He may be reached at firstname.lastname@example.org.