Aug 12, 2011
This article provides an overview of the elements of analytical method transfer, with references to supporting documents that contain more detail on transfer and validation.
The USP (United States Pharmacopeia) has proposed a new General Information chapter, published in Pharmacopeial Forum (PF) volume 35, September/October 2009. The USP defines transfer of an analytical procedure as "the documented process that qualifies a laboratory (a receiving unit) to use an analytical test procedure that originates in another laboratory (the transferring unit, also named the sending unit)." The USP details several categories of transfer: comparative testing, co-validation, method verification or revalidation, and transfer waivers. The stimuli article also reviews the procedural elements recommended for a successful transfer. Of these elements, a preapproved protocol is required. The basic content of the protocol is described, as is the process of reviewing the analytical procedure prior to the transfer. There is also a brief paragraph on the contents of the method transfer report, with reference to statistical approaches for what USP refers to as the statistical comparability of procedures, published in USP PF volume 35(3). The stimuli to the revision is open for comment and can be found on the USP website. This USP stimuli to the revision provides a good basic structure for the transfer process. Figure 1 has been drafted using the basic content of the USP stimuli.
Now let's spend some time looking at the types of transfer. In co-validation, validation occurs in both the receiving and the originating laboratories. Sometimes the validation is planned so that the receiving laboratory performs the validation (per ICH Q2(R1)) with the sending laboratory participating in the intermediate precision section of the validation. When this occurs, it is still important to determine the lack of bias between the two laboratories. The difference between co-validation and validation is that the former has a full validation occurring at both sites, while the latter has validation activity taking place in the receiving laboratory only (often because the originating laboratory has already completed a full validation). Terms such as co-validation, validation, transfer, and method verification should be defined within the protocol. Figure 1 provides an overview of the transfer process elements.
Figure 1: Components of Transfer
We use the same term ‘transfer’ for the transfer of a developed, qualified, or validated method. However, the exact approach one uses to define, monitor, analyze, and approve a transfer can differ depending on the stage of the method. Nevertheless, there are common elements of transfer: the use of a protocol, the determination of the feasibility of the lab to receive the method, a method procedure, and the generation of the final report. It is the details within the protocol that can differ: for example, the statistical comparison, the type of material available for the transfer, and the nature of the round-robin testing. Another common element within the statistical comparison is to ensure that the transfer assesses both the variability of the method and the absence of bias (precision and accuracy).
As one proceeds through the transfer process, there are many elements of the transfer which can be considered, depending on the type of transfer and the stage of the method (Figure 1). Some of these elements are outlined in Figure 2.
Figure 2: Transfer Requirements
How is the transfer judged? We will spend a little time reviewing the statistical approaches that will indicate comparability between sites (see following subtitle: Basic Statistics) with a list of reference material that provides this analysis in more detail. The basis on which we define acceptance criteria depends on the availability of data. This data includes the specification (if it exists), the justification for the specification, historical performance of the method, and the criteria used in existing validation reports for the method (as well as the performance of that method in the validation). As the method proceeds through its lifecycle from development to qualification and validation, the availability of historical data and performance by which to judge transfer increases, as does the availability of material, both of which can affect the statistical approach for assessing transfer.
The transfer of a method is the comparison of the performance of the method between two groups. This concept is similar to the comparison of the performance of two methods within the same group. An examination of statistical approaches for the former can be found in the references by Ellison et al., while the latter is explained well in the USP reference PF 35(3). The concept of ‘equivalent or better’ and the statistical concept of equivalence are explained well in the USP reference and have application to the transfer of a method. The type of statistics may be similar, but the choice of materials may be very different when trying to show that two methods perform comparably rather than that one method gives the same results (with the same variability) when tested in two different places. For more detail regarding the specific applications of statistical methods, please see the "References" subtitle at the end of this article.
Lack of bias, or the comparison of means, can be examined by a t-test (if comparing two groups) or by an ANOVA if comparing more than two groups of data. For the confidence level, 90 or 95 percent is often selected. The comparison of precision can be done by an F-test for two groups or an ANOVA for more than two groups of data. The equivalence of two groups can be assessed through the application of TOST, or two one-sided t-tests. Visual assessments of data organized by various types of plots, such as Bland-Altman plots, are also helpful. The reference book by Ellison et al., “Practical Statistics for the Analytical Scientist: A Bench Guide,” provides a straightforward approach to the choice of statistics, with the first chapter devoted to ‘Choosing the Correct Statistics’ and subsequent chapters providing details regarding the chosen statistics.
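As a rough illustration of these comparisons, the sketch below applies a t-test, an F-test, and TOST to two small sets of hypothetical potency results. The data, the ±2 percent equivalence limit, and the sample sizes are all invented for the example, not taken from any cited source:

```python
# Sketch of lab-to-lab comparison statistics; all data are hypothetical
# potency results (% of label claim) from a sending and a receiving lab.
import numpy as np
from scipy import stats

sending = [99.1, 100.2, 98.7, 99.8, 100.5, 99.4]
receiving = [99.6, 100.8, 99.2, 100.1, 99.9, 100.4]

# Lack of bias (comparison of means): two-sample t-test
t_stat, t_p = stats.ttest_ind(sending, receiving)

# Comparison of precision: two-sided F-test on the sample variances
var_s = np.var(sending, ddof=1)
var_r = np.var(receiving, ddof=1)
f_stat = var_s / var_r
dfn, dfd = len(sending) - 1, len(receiving) - 1
f_p = 2 * min(stats.f.cdf(f_stat, dfn, dfd), stats.f.sf(f_stat, dfn, dfd))

# Equivalence of means via TOST, with a hypothetical +/- 2% acceptance limit
delta = 2.0
diff = np.mean(receiving) - np.mean(sending)
se = np.sqrt(var_s / len(sending) + var_r / len(receiving))
df = len(sending) + len(receiving) - 2
p_lower = stats.t.sf((diff + delta) / se, df)   # H0: true diff <= -delta
p_upper = stats.t.cdf((diff - delta) / se, df)  # H0: true diff >= +delta
tost_p = max(p_lower, p_upper)  # equivalence shown when tost_p < alpha
```

In practice, the equivalence limit (delta) should be justified from the specification and the historical performance of the method rather than chosen arbitrarily, as discussed above.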
For a good discussion of the equivalence of results by use of intraclass correlation coefficients or the concordance correlation coefficient, USP PF 35(3) is a good reference.
The USP General Chapter <1010>, Analytical Data: Interpretation and Treatment, provides basic statistical approaches for evaluating data, comparing methods, and identifying statistical outliers, and it reviews the mathematical foundation for the assessment of variability, accuracy, confidence intervals, t-tests, and outlier tests.
The article by Limentani et al. entitled “Beyond the t-Test: Statistical Equivalence Testing” provides a good overview of the t-test, the two one-sided t-tests (TOST), and the concept of statistical significance as opposed to practical significance. The article entitled “Statistical Assessment of Analytical Method Transfer” by Zhong, Lee, and Tsong is also a good reference. This article discusses the choice of statistics depending on the goal of the transfer. For example, when the purpose of the assay is to determine the mean of measurements, the equivalence of the means is an important determination, while if the intent of performing the assay is to determine individual measurements, such as titer determinations in clinical samples, the equivalence of individual readings between the two laboratories gains further importance (Zhong, Lee, and Tsong). This paper describes statistical approaches for determining equivalence through the use of confidence intervals for the equivalence of means. Confidence intervals also underlie the limits of agreement method (which defines the interval within which 95 percent of the individual differences should lie), the total deviation index, and the tolerance interval, all of which help define whether differences between laboratories are statistically significant.
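The limits of agreement calculation mentioned above can be sketched in a few lines. The paired results below (the same lots measured at two laboratories) are hypothetical, and the 1.96 multiplier corresponds to the interval expected to contain about 95 percent of individual between-lab differences:

```python
# Sketch of the limits of agreement method: paired results on the same
# lots measured at two laboratories. All data are hypothetical.
import numpy as np

lab_a = np.array([98.9, 100.3, 99.5, 101.0, 99.8, 100.6, 98.7, 99.9])
lab_b = np.array([99.2, 100.0, 99.9, 100.7, 100.1, 100.2, 99.0, 100.3])

d = lab_b - lab_a
mean_diff = d.mean()        # estimated bias between the two labs
sd_diff = d.std(ddof=1)     # variability of the individual differences

# ~95% of individual between-lab differences should fall in this interval
lower = mean_diff - 1.96 * sd_diff
upper = mean_diff + 1.96 * sd_diff
```

Whether the resulting interval is acceptable is a practical judgment against the specification and the intended use of the method, not a purely statistical one.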
Another good reference for statistical definitions and their use in method evaluation is the Eurachem guide entitled "The Fitness for Purpose of Analytical Methods." This guide also has an extensive analysis of the sources of uncertainty in analytical processes. The Eurachem guide entitled "Selection, Use and Interpretation of Proficiency Testing Schemes by Laboratories" is also a good reference for interlaboratory proficiency testing design, execution, and evaluation.
Pre-Transfer Assessment. The work done prior to transfer is critical; it ensures that the receiving lab is ready for transfer. Whether the transfer is group-to-group, lab-to-lab, or site-to-site, an audit function (or gap analysis) benefits the process as an assessment of the lab's current ability to accept the transfer and helps determine what gaps (such as equipment upgrades) need to be addressed prior to transfer. This includes a review of the laboratory environment, the equipment (hardware and software), the temperature (in some cases), humidity levels, and the placement of the equipment, all of which can factor into the success of the transfer. Finding out during the transfer that there is a mismatch between requirements and capabilities will produce complications and potentially alter the path of the transfer. It can be beneficial to include upfront feasibility or practice runs of the protocol to make sure that the laboratory is ready, that the operators are trained, and that the procedure is well defined and understood.
Comprehensive Document Package. Even for methods that are in the development phase, a document package consisting of a draft procedure with extensive development notes included in the text is a helpful foundation for the transfer of knowledge. For methods that are post-development, documentation is more readily available (e.g., development reports, qualification reports, and technical reports).
Training. Ensuring up front that laboratory operators are properly trained on a procedure that is well written, well understood, and that contains acceptance criteria (depending on the stage of the method) is an area that is at times rushed, which can cause significant delays in the process. An assessment of the method's performance against expected performance is another step toward building the proper foundation for transfer.
Feasibility Runs. Feasibility runs can also serve as protection against spending time transferring a method that is not well understood or that is performing poorly. Identifying a technical lead at both the originating lab and the receiving lab is also important. Certainly, the availability of material should be discussed prior to protocol approval, including qualified and approved critical reagents, qualified HPLC columns, an inventory of standards, and the number of lots of available samples. Material constraints often change how the transfer will be executed; for example, if a critical reagent is difficult to purchase at the receiving laboratory, the transfer may also contain a section on the appropriate performance of additional critical reagent suppliers. The number of lots available will, of course, affect the design of the comparative study and sometimes the type of statistical analysis that will be performed. A good reference on the aspects of sample planning and their effects on uncertainty, the effects of heterogeneity, analytical uncertainty, and measurement uncertainty is the Eurachem guide entitled "Uncertainty from Sampling." In general, the more detailed the pre-work prior to transfer, the greater the chance for an effective transfer.
Project Planning. Some of the issues with transfer are exacerbated by a lack of planning and poor communication. This can become evident with issues such as lack of material, slippage of timelines, unavailability of operators or laboratory equipment. Poor communication can become evident with issues such as the misalignment of resources; ineffective training; incomplete, poorly worded procedures and protocols; and misalignment with the original scope. Making sure that you have resources to properly plan, monitor and conclude the project is important to its success. A list of some responsibilities for both the originating and receiving lab, much like the elements of a quality agreement, can be an important tool for this process (Figure 2).
Post-Transfer Monitoring. Another phase of transfer that is often overlooked is post-transfer monitoring, which is simply keeping an eye on method and laboratory performance. When multiple laboratories are performing the same method, performance should not only be monitored or tracked within each laboratory to ensure adequate performance over time; there should also be ongoing comparisons or tracking between laboratories. Ongoing comparison and tracking help confirm adequate performance over time, a lack of meaningful shift (or change) in bias between sites, and the absence of significant change in variability over time. These assessments are part of the overall tracking and analysis of method performance in a quality setting.
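A minimal sketch of such post-transfer tracking, assuming a simple control-chart rule (historical mean plus or minus three standard deviations) and invented data:

```python
# Sketch: post-transfer monitoring of method performance.
# Flags results outside historical mean +/- 3 standard deviations,
# a simple control-chart rule; the limits and data are hypothetical.
import statistics

historical = [99.8, 100.1, 99.5, 100.4, 99.9, 100.2, 99.7, 100.0]
center = statistics.mean(historical)
sigma = statistics.stdev(historical)
ucl = center + 3 * sigma  # upper control limit
lcl = center - 3 * sigma  # lower control limit

new_results = [100.3, 99.6, 101.9, 99.9]  # 101.9 simulates a drift signal
flags = [x for x in new_results if not (lcl <= x <= ucl)]
```

The same kind of tracking, applied per laboratory and to the between-lab differences, supports the ongoing comparison of sites described above.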
In summary, the details of transfer depend on the stage of the method and the status of the receiving and transferring labs. Regardless of the stage of a method, when a method is at one site and is needed at another, this is called a transfer. The specifics of this transfer will necessarily change depending on the stage of the method, the type of method, the amount of historical data available, the availability of critical materials, and often the timeline requirements combined with the availability of laboratory resources. Assurance that the method maintains its expected performance over time at both sites is also an important post-transfer activity that should be resourced. Without some of the elements of transfer, the foundation becomes less stable, timeline and resource estimates are not met, and the transfer process can become more complicated. Detail, documentation, and diligence to process are the keys to transfer.
Melissa J. Smith is the founder and principal consultant at MJQuality Solutions, LLC. Ms. Smith is a senior-level professional with 25 years' experience in quality control, quality assurance, and analytical development for biologics, drugs, and devices. She has expertise in assay development, validation and transfer, auditing (ISO certified, GMP, GLP), and quality control and assurance system improvements.
Ms. Smith has Bachelor's Degrees in Chemistry and Nutritional Science from Syracuse University and a Master's Degree in Biochemistry from MIT. She has a Project Management Certificate and experience in managing large analytical development-validation projects, PAI readiness projects and GLP studies.
Ms. Smith has provided analytical support during regulatory submissions and comparability assessments for commercial products, and has preclinical through commercial experience in the medical device and biologics industries.
Ms. Smith is a PDA Task Force member for method validation and PDA Co-Chair of the Method Development and Qualification Task Force.