
A Risk-Based Approach to Validation


The U.S. Food and Drug Administration (FDA) has defined the requirements for validation of life science products, primarily in 21 CFR Parts 820, 210 and 211. These regulations call for a comprehensive testing process in which all systems are thoroughly examined and tested under a criticality-based, scientific approach.

Recent guidance and initiatives, including the FDA's Process Validation: General Principles and Practices and ICH Q11: Development and Manufacture of Drug Substances, have provided a streamlined, risk-based approach built on an updated life cycle management method.

Under this scenario, a new definition of validation has emerged, best described by the FDA as “the collection and evaluation of data, from the process design stage through production, which establishes scientific evidence that a process is capable of consistently delivering quality products.” However, this contrasts with the classical definition in the device regulations under 21 CFR 820.75, which states that where “the results of a process cannot be fully verified by subsequent inspection and test, the process shall be validated with a high degree of assurance and approved according to established procedures.”

What this means is that a risk-based life cycle management approach with relevant scientific rationale and evidence can be used in lieu of a traditional top-down comprehensive approach. Many of us remember the golden rule of validation: testing in triplicate, an output of this classic approach. Regardless of a system's complexity or simplicity, we always applied the test in triplicate.

Essentially, what the FDA and ICH are now saying is that you can justify a different test plan with a risk-based approach. The results are streamlined validation processes and potentially fewer steps to production. This article presents an approach to risk management that I have used to successfully meet the updated FDA and ICH guidances.

User Requirement Specifications

Whether validating equipment, processes or software, I recommend writing a user requirement specification (URS). This facilitates a starting point with inputs and traceability to ensure that basic functions are established. These basic functions will be used later for assessing risks. Medical device software validation also typically includes functional requirement specifications (FRS) that follow the URS in a logical, traceable way. The FRS shows how the configured software will meet the requirements of the URS.
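
To make the traceability concrete, here is a minimal sketch in Python. The record structures and field names are hypothetical, not taken from any particular validation tool; the point is only that every FRS function should trace back to a URS requirement.

```python
# A minimal, hypothetical sketch of URS-to-FRS traceability.
from dataclasses import dataclass

@dataclass
class URSItem:
    urs_id: str        # e.g., "URS-001"
    requirement: str   # what the user needs the system to do

@dataclass
class FRSItem:
    frs_id: str        # e.g., "FRS-001"
    function: str      # how the configured software meets the need
    traces_to: str     # the URS ID this function satisfies

urs = [URSItem("URS-001", "System shall capture electronic signatures")]
frs = [FRSItem("FRS-001", "Signature dialog records user ID and timestamp",
               traces_to="URS-001")]

# Verify every URS requirement is covered by at least one FRS function.
covered = {f.traces_to for f in frs}
untraced = [u.urs_id for u in urs if u.urs_id not in covered]
assert not untraced, f"Requirements without traceability: {untraced}"
```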

Risk Assessment

A risk assessment follows the URS and FRS processes. However, before applying a risk assessment to the functional processes developed in the URS and FRS, use the ISO 14971 risk management methodologies to establish the acceptance criteria and risk levels. A standard risk matrix illustrates a three-level system with low (green), medium (yellow) and high (red) risk categories characterized as follows:

  • Low – Failure would have a minor impact on patient safety or product quality.
  • Medium – Failure would have a moderate impact on safety and quality processes.
  • High – Failure would severely impact safety and quality processes.

Your organization must develop (and justify) its own criteria. This example matrix defines the categories as follows (a code sketch of such a matrix follows the list):

  • Columns – Represent impact: safety, severity and quality.
  • Rows – Represent likelihood: probability, frequency and detectability.
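
As a concrete illustration, the sketch below encodes such a matrix as a simple lookup in Python. The level names and cell assignments are illustrative assumptions, not values prescribed by ISO 14971; your own justified criteria belong here.

```python
# A minimal sketch of an acceptance-criteria matrix; all labels and
# cell assignments are illustrative, not prescribed by ISO 14971.
SEVERITY = ["minor", "moderate", "severe"]        # columns: safety/severity/quality
LIKELIHOOD = ["rare", "occasional", "frequent"]   # rows: probability/frequency/detectability

RISK_MATRIX = {
    ("rare", "minor"): "low",        ("rare", "moderate"): "low",          ("rare", "severe"): "medium",
    ("occasional", "minor"): "low",  ("occasional", "moderate"): "medium", ("occasional", "severe"): "high",
    ("frequent", "minor"): "medium", ("frequent", "moderate"): "high",     ("frequent", "severe"): "high",
}

def overall_risk(likelihood: str, severity: str) -> str:
    """Look up the overall risk class (low/medium/high) from the matrix."""
    return RISK_MATRIX[(likelihood, severity)]

print(overall_risk("occasional", "severe"))  # -> high
```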

After developing the acceptance criteria, complete the following tasks (a worked sketch follows the list):

  • Find the system function categories and add them to a risk assessment table. These categories are determined by reviewing the URS and assessing how each requirement correlates to a similar system function (i.e., grouping).
  • Determine the risk associated with each URS function in terms of potential failure or unavailability of the function/system (i.e., offline or nonfunctioning).
  • Determine the severity, safety and quality impact of the associated failure.
  • Determine the frequency, probability and detectability of a potential failure.
  • Use the acceptance criteria chart to identify overall risk.
  • Place the overall risk into one of the three risk classes: high, medium, low.
  • Complete the same risk assessment on each of the functional items listed in the URS.
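
Continuing the sketch above, the risk assessment table can be built by applying overall_risk() to each functional category from the URS. The categories and ratings below are illustrative assumptions.

```python
# A worked sketch of the risk assessment table, reusing overall_risk()
# and RISK_MATRIX from the matrix example above; the functional
# categories and their ratings are illustrative.
urs_functions = [
    # (functional category, likelihood of failure, severity of failure)
    ("Electronic signatures", "occasional", "severe"),
    ("Report formatting",     "frequent",   "minor"),
    ("Audit trail",           "rare",       "severe"),
]

risk_table = [
    (category, likelihood, severity, overall_risk(likelihood, severity))
    for category, likelihood, severity in urs_functions
]
# -> Electronic signatures: high, Report formatting: medium, Audit trail: medium
```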

Validation Priority Level

After determining the criticality of the individual functional items from the URS, you can assemble a validation approach for each functional category. The following are types of validations that can be used with a risk-based process.

  • High Risk – Complete, comprehensive testing required. All systems and sub-systems must be thoroughly tested according to a scientific, data-driven rationale. This is similar to the classic approach to validation. It may also be necessary to enhance the detectability of failure via in-process production controls.
  • Medium Risk – Testing the functional requirements per the URS and FRS is required to ensure that the item has been properly characterized.
  • Low Risk – No formal testing is needed, but presence (detectability) of the functional item may be required.

These criteria are then applied to the table of functional items. The output is the validation test plan described below.
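
As a minimal sketch of that step, the mapping below assigns each risk class in the risk_table from the previous example to one of the three validation approaches; the approach descriptions simply paraphrase the list above.

```python
# A minimal sketch of assembling the validation test plan from the
# risk_table built earlier; approach wording paraphrases the three
# priority levels above.
VALIDATION_APPROACH = {
    "high":   "Complete, comprehensive testing of all systems and sub-systems",
    "medium": "Test functional requirements per the URS and FRS",
    "low":    "No formal testing; verify presence of the functional item",
}

test_plan = [(category, VALIDATION_APPROACH[risk])
             for category, _likelihood, _severity, risk in risk_table]
for category, approach in test_plan:
    print(f"{category}: {approach}")
```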

Validation Test Plan

According to the new guidance for process validation, the collection and evaluation of data, from the process design stage through production, establishes scientific evidence that a process is capable of consistently delivering quality products. This has resulted in validation being split into three stages:

  • Stage 1: Process design – The commercial process is defined based on experience gained from development and scale-up.
  • Stage 2: Process qualification – The process design is confirmed as capable of reproducible, commercial-scale manufacturing.
  • Stage 3: Continued process verification – Ongoing assurance that the process remains in a state of control during routine production.

Manufacturers must prove that the product can be manufactured according to its quality attributes before a batch is placed on the market. For this purpose, data from laboratory, scale-up and pilot-scale studies should be used, covering conditions that span a range of process variations. The manufacturer must (see the sketch after this list):

  • Determine and understand the process variations.
  • Detect the process variations and assess their extent.
  • Understand the influence on the process and the product.
  • Control variations depending on the risk they represent.
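
The guidance does not prescribe a particular metric, but a process capability index such as Cpk is one common, quantitative way to assess the extent of variation relative to specifications. The data and specification limits in this sketch are illustrative assumptions.

```python
# A minimal sketch of assessing the extent of process variation with a
# capability index (Cpk); the data and specification limits are
# illustrative, and Cpk is only one common choice of metric.
import statistics

def cpk(samples, lsl, usl):
    """Distance from the mean to the nearest spec limit, in units of
    three sample standard deviations."""
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)
    return min(usl - mean, mean - lsl) / (3 * sd)

assay = [99.2, 100.1, 99.8, 100.4, 99.6, 100.0, 99.9, 100.3]  # % label claim
print(f"Cpk = {cpk(assay, lsl=98.0, usl=102.0):.2f}")  # >= 1.33 is a common target
```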

Qualification activities that lack the basis of a sound process understanding cannot ensure a safe product. The process must also be maintained during routine operations, including control of materials, equipment, environment, personnel and changes in the manufacturing procedures.

Stage 1: Process Design

The process design stage involves building and capturing process knowledge. The manufacturing process should be defined and tested, and then reflected in the manufacturing and testing documentation. Earlier development stages do not need to be conducted under current good manufacturing practices (cGMP). Still, they should be based on sound scientific methods and principles, including good documentation practice (GDP).

There is no regulatory expectation for the process to be developed and tested until it fails. However, combinations of conditions involving a high process risk should be known. To achieve this level of process understanding, implementing Design of Experiments (DOE) in connection with risk analysis tools is recommended, as sketched below. Other methods, such as classical laboratory tests, are also considered acceptable. Adequate, rationale-based documentation of the process understanding is also essential.
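
As a minimal illustration of the DOE idea, the sketch below enumerates a small full-factorial design. The factors and levels are hypothetical; a real study would pair such a design with risk analysis and statistical evaluation of the responses.

```python
# A minimal sketch of a full-factorial Design of Experiments (DOE);
# the factors and levels are hypothetical placeholders.
from itertools import product

factors = {
    "temperature_C": [20, 25, 30],
    "pH":            [6.8, 7.0, 7.2],
    "mix_time_min":  [10, 20],
}

# Every combination of factor levels (3 x 3 x 2 = 18 runs).
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
for i, run in enumerate(runs, start=1):
    print(f"Run {i:02d}: {run}")
```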

Stage 2: Process Qualification

This stage shows that the process design is suitable for consistently manufacturing commercial batches. This stage contains two steps:

  • Qualification activities regarding facilities and equipment.
  • Performance qualification (PQ).

This stage encompasses the activities that are currently summarized under process validation. Qualified equipment is used to demonstrate that the process can create a product in conformity with the specifications. The terms design qualification (DQ), installation qualification (IQ) and operational qualification (OQ) are no longer described as part of the qualification. However, they are still conceptually used within the validation plan, which should cover these items:

  • Test description.
  • Acceptance criteria.
  • Schedule of validations.
  • Responsibilities.
  • Protocol with pre- and post-approval.
  • Change control.
  • Results.
  • Continued validation stage.

Stage 3: Continued Process Verification

The final stage is intended to keep the validated state of the process current during routine production. The manufacturer is required to establish a system to detect unplanned process variations. Data should be evaluated accordingly (in-process), so the process does not get out of control. The data must be statistically trended, and the analysis must be done by a qualified person.

These evaluations should be reviewed by the quality unit in order to detect changes in the process at an early stage (i.e., via alert limits) and to allow implementation of process improvements. Still, even in a well-developed process, unexpected process changes can occur.

In this case, the guidance recommends that the manufacturer use quantitative, statistical methods whenever feasible in order to identify the variation and investigate its root cause. At the beginning of routine production, the guidance recommends keeping the scope and frequency of monitoring activities and sampling the same as in the process qualification stage until enough data has been collected.
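
As a minimal sketch of such quantitative trending, the example below derives 3-sigma control limits from baseline (Stage 2) data and flags routine-production points that fall outside them. The data are illustrative, and a real program would apply fuller trending rules under a qualified person's oversight.

```python
# A minimal sketch of statistical trending for continued process
# verification: 3-sigma limits derived from baseline data, with a
# single out-of-limits rule. Data and limits are illustrative.
import statistics

def control_limits(baseline):
    """Center line and 3-sigma limits from baseline (Stage 2) data."""
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return mean - 3 * sd, mean, mean + 3 * sd

baseline = [99.2, 100.1, 99.8, 100.4, 99.6, 100.0, 99.9, 100.3]
lcl, center, ucl = control_limits(baseline)

for value in [99.7, 100.2, 101.9]:  # routine-production results
    status = "in control" if lcl <= value <= ucl else "investigate"
    print(f"{value:6.1f}  {status}")
```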

Analysis of complaints, out-of-specification (OOS) results, deviations and non-conformances can also provide data and trends regarding process variability. Employees on the production line and in quality assurance are encouraged to give feedback on the process performance. It’s also helpful to track operator errors to determine if training measures are appropriate.

Finally, these data sets can be used to develop process improvements. Still, such changes may only be implemented in a structured way, with final approval from quality assurance and, where warranted, re-validation in the process qualification stage.



Peter Knauer is a partner consultant with MasterControl's Quality and Compliance Advisory Services. He has more than 20 years of international experience in the biomedical industry, primarily focusing on supply chain management, risk management, CAPA, audits and compliance issues related to biopharmaceutical and medical device chemistry, manufacturing and controls (CMC) operations. He was most recently head of CMC operations for British Technology Group in the United Kingdom and he has held leadership positions for Protherics UK Limited and MacroMed. Peter started his career at Genentech, where he held numerous positions in engineering and manufacturing management. Peter is currently chairman of the board for Intermountain Biomedical Association (IBA) and a member of the Parenteral Drug Association (PDA). Peter holds a master's degree in biomechanical engineering from San Francisco State University and a bachelor's degree in materials science engineering from the University of Utah. Contact him at pknauer@mastercontrol.com.

