
Training Does Not Stand Alone: The Quest for Training Effectiveness Continues


While training may appear straightforward to most leaders, following up on its effectiveness is anything but clear-cut. So why do business leaders frown when the answer to the training effectiveness question is “it depends”?

Training Serves Two Masters

Providing an explanation depends on stakeholders’ expectations and their need for results. Most of the assigned training events for life science employees are compliance requirements, ranging from GxP basics to SOP training and on-the-job training. As a result, regulatory investigators are vital stakeholders. They want to know what employees were trained on, and then they ask:

  • Was it effective?
  • How do you know?
  • What’s your process?

Their focus is primarily on individuals’ behavior after training is completed. They seek to confirm that employees are following the procedure correctly (as trained), especially if a batch or lot is in question or there is a trend of repeat errors. But no news is not always good news. Inspectors will also ask for an overview of the effectiveness process to ensure that it is reasonable, sustainable and not a one-off answer during an inspection.

The training function also answers to business leaders, whose focus is more about the impact on business goals. Executives expect to see numbers. They read financial documents daily and are always looking for the bottom line. So why is it so hard to find the one training effectiveness metric for them? Given the nature of the tasks at hand and the depth of training required, one-size-fits-all training delivery does not apply across the organization. Neither does a one-type training effectiveness technique. For example, a written assessment may be appropriate for a knowledge-based classroom session but does not fit a skills demonstration during the OJT qualification step. “It depends” is too messy an answer. It doesn’t fit neatly in a spreadsheet the way training completion percentages do.

What Makes it Complicated?

Let’s start by asking: what’s the business problem that training is supposed to solve or improve? Take compliance training, for example. It’s a requirement, and not meeting FDA expectations has some pretty dire consequences that could result in a much bigger problem. So the business goal is to meet compliance expectations and avoid the cost of quality improvement consultants or a mandatory site shutdown. Check mark. When does training effectiveness get determined? After the event is over. It is not the number one priority for business leaders until questioned by external regulators.

Let’s ask the next question. What’s the performance gap that training is supposed to close? Often trainers (and instructional designers) are left out of this business loop and hence miss the executives’ target goals. After-the-fact reporting is like grabbing a metric out of the air or clicking [Print Report Tab] in the LMS. It’s quick and it’s numbers-based: eLearning completion rates, attendance participation and due date tracking. But this is just reporting on efficiency and operational activities. It’s not evaluating training’s effectiveness. Without alignment to a business problem or performance gap, training is hit or miss; an arrow without a target, which explains why many executives are disappointed with typical training results.

What About the Four Levels of Evaluation?

A popular and commonly used evaluation model is the Kirkpatrick Four Levels of Evaluation. Again, it depends on what type of evaluation feedback is being sought. Each level is aimed at a different stakeholder, uses a different tool or method, and becomes increasingly resource-intensive and costly. The easiest to administer are course evaluations, in which participants provide feedback after the event is over. However, liking a program or the instructor does not necessarily translate to effective training, which is a drawback of this tool. There has been movement in the training industry recently to repurpose the scope and use of these forms, and in time the switch may happen. Knowledge assessments are the next most popular in use and require some time to develop. As the name implies, they measure knowledge retained long enough to “pass the test” and possibly learned, as long as participants don’t forget it a week after the event is over. The biggest drawback of this tool is that it does not guarantee use back on the job. In fact, a lot of training that is successfully delivered and confirmed by knowledge assessments does not get used at all, or its use declines over time when not practiced or routinely applied. This is also known as scrap learning, as described by Robert Brinkerhoff, an industry thought leader on training evaluation (Mattox, 2010).

But when knowledge assessments are developed using good test construction principles, learning impact can be measured. We can show where knowledge and skills gained from training closed the new hire learning gap and how the improved capability affects job performance outcomes. The real effectiveness measure for regulators is how well employees applied what they learned back on the job. I refer to this as transfer of training. Others call it behavior change. It is this connection to deviations, repeat errors and CAPA investigation metrics that FDA regulators link back to training effectiveness for compliance training. The most difficult tools to administer, and the least used by trainers, are where executive leaders focus: business impact. Executives believe that training can be integral to achieving business results. “However, our experience shows that perhaps counter intuitively, business leaders rarely want to evaluate T & D’s impact after the fact” (Cleary, 2017, p. 58). And yet, the data collection and evaluation have to occur after the training event is over in order for the results to be real and not challenged. So training effectiveness results are often an afterthought.

Here’s the Rub for Me

Executives want metrics. They want answers to their training effectiveness question. But when it’s time to set it up properly, the pushback comes. Somehow, resources and budget disappear and the priority is no longer as visible. ‘We already know we need this training, so there’s no need to spend time measuring’ (Frielich, 2017, p. 43). Yet, when the GMP and SOP training effectiveness “program” is challenged by regulatory investigators, it’s too late. The reactive scramble for metrics begins all over again.

Does Training Need a New or Different Set of Metrics?

What “metric” was used when the discrepancy was discovered? Why not use the same metrics after training has occurred? Training does not stand alone. It is the most cross-functional system in an organization. It touches everyone at some point in the employee learning journey, from new hire orientation to curricula completions, periodic HR-sponsored updates and annual safety refreshers. Quality systems leverage awareness training as part of CAPA corrective actions. And some companies still use the classroom setting for significant SOP revision training.

Metrics from other systems have already been established. Why add more metrics for training that point back to other systems’ metrics? Manufacturing, as an industry norm, has established its key performance indicators (KPIs); refer to the sidebar. In most cases, the performance gap is noticed in these metrics. Use the same set of metrics after the training event has ended to measure the impact of performance improvement, provided that learners have had ample time to use the training with support from management. With respect to the training function’s internal metrics, borrow from manufacturing; with a little creativity, the following metrics can be used (a simple illustration follows the list):

  • Cost ➜ Time to deliver, travel, related expenses
  • Productivity ➜ Efficiency, Time to Develop
  • Safety ➜ New Awareness Programs
  • Waste ➜ Scrap Learning
  • Quality ➜ End User Adoption, Successful Learning Transfer %
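To make these borrowed metrics concrete, here is a minimal, purely illustrative sketch in Python. Every number, name and time window in it is a hypothetical assumption for demonstration, not a validated method or a prescribed standard; the point is only that figures such as successful learning transfer % and scrap learning can be computed from data the quality and training systems already collect.

```python
# Illustrative sketch only: hypothetical figures showing how existing
# quality-system and LMS numbers could double as post-training metrics.
# All values, names and the 90-day window are assumptions for demonstration.

trained_operators = 40       # completed the revised SOP training
observed_on_the_job = 36     # later observed performing the task
performing_as_trained = 31   # observed employees following the revised SOP correctly

deviations_before = 12       # SOP-related deviations in the 90 days before training
deviations_after = 4         # SOP-related deviations in the 90 days after training

# "Quality" borrowed metric: successful learning transfer %
transfer_rate = performing_as_trained / observed_on_the_job

# "Waste" borrowed metric: scrap learning (trained but not applying it on the job)
scrap_learning_rate = 1 - (performing_as_trained / trained_operators)

# Impact seen in an existing quality-system metric
deviation_reduction = (deviations_before - deviations_after) / deviations_before

print(f"Successful learning transfer: {transfer_rate:.0%}")
print(f"Scrap learning: {scrap_learning_rate:.0%}")
print(f"Deviation reduction vs. prior period: {deviation_reduction:.0%}")
```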

Leading indicators are a set of metrics that can be used in the interim between formal KPI cycles. “They can offer valuable insight into the efficiency and effectiveness of learning initiatives, serve as progress milestones and act as an internal measuring stick” (Carpenter, 2017, p. 49). Consider three impact levels:

  • Learning ➜ increased capabilities, enhanced knowledge-sharing
  • Job Performance ➜ improved efficiencies, improved use of resources, higher performance levels (better yields)
  • Organizational ➜ increased engagement, greater change adaptability, lower turnover rates

Is Training the Only Reason for Transfer Failure?

So when the indicators are not trending positive, does that mean training is not working? Typically, the automatic conclusion is that training was ineffective. But is training the only culprit? Once again, training does not stand alone. Brinkerhoff (2006) posits that probably as much as 80% of the barriers that get in the way of achieving intended outcomes “are not caused by flawed training interventions, they are caused by contextual and performance system factors that were not aligned with and were otherwise at odds with the intended performance outcomes of the training. Thus when we evaluate “training” impact, we are most often in reality evaluating an organization’s performance management system” (p. 23). He has observed that a lack of analyzing, measuring or seeking feedback on what’s working and not working with the “training as delivered” is often a contributing factor in business impact failure. It’s easier to criticize the training event than it is to ask managers to coach their employees post-training. “Senior and supervisory management own the many performance system factors that threaten results” (p. 41). It is the performance of these systems that needs to be evaluated for behavior change/transfer, not the training event (Brinkerhoff, 2006).

The Answer Really Is It Depends, Doesn’t It?

Inviting instructional designers to the planning meeting to discuss the critical job behaviors and performance outcomes that need improvement, as well as identifying the key connections to other systems, is the first step in linking learning impact (training effectiveness) across organizational goals, performance outcomes and learning capabilities. Existing metrics from these systems can be leveraged and extrapolated for either regulatory stakeholders or business leaders. The bonus is that this frees up resources to succeed at measuring training effectiveness for the really important training initiatives. So the bottom-line reply is: who needs to know what, and how will they use the results?

What kind of metrics do you use for training effectiveness? Has your management asked for any kind of metrics for the training you do? Please comment below.



Vivian Bringslimark has 29 years of education, life sciences industry experience and consulting engagements, enabling her to provide human performance consulting services for improving people strategies. Her two core competencies are instructional systems design with a focus on adult learning principles and integrating training solutions with key performance and quality systems. Vivian holds an M.A. in adult education from Teachers College, Columbia University, and an M.S. in educational computing from Iona College. She currently serves as an advisor to the Board of Directors for the GMP Training and Education Association and is an active mentor for the SE FL Chapter of ATD. She can be reached at vbringslimark@hpisconsulting.com.

