GxP Lifeline

What Does Risk-Based Monitoring Mean for QA Auditing?


By now, we all know that risk-based monitoring (RBM) isn't just about changing the role of the clinical research associate (CRA); it's transforming the way clinical studies are managed.  So what does that mean for quality assurance (QA) teams who audit these new processes? Polaris president Celine Clive led a roundtable discussion about RBM and its implications for auditing at November's North Carolina Regulatory Affairs Forum (NCRAF) meeting.

It's a slippery subject. Traditional monitoring relied on the gold standard of on-site visits every 4-8 weeks and 100% source data verification (SDV). RBM doesn't replace that standard with another one. It's a framework for customizing a monitoring approach for each study, and guess what – "custom" is a lot trickier to audit than "standard."

So, did the NCRAF roundtable develop a definitive set of industry procedures for auditing RBM studies? C'mon, you wouldn't believe me if I said it had. After all, RBM has been studied, discussed, and debated for years, while conversations about RBM QA are just getting started. It's a good start though, if the NCRAF roundtable is any indication. Here are some of the highlights of the roundtable, as well as some internal Polaris discussions before and after the event.

The Human Element

We humans are a wonderful lot, a miracle of evolution really, but when Alexander Pope penned “to err is human,” he knew what he was talking about. To the extent that humans are involved in a process, there is going to be risk of human error. Why are we still surprised that retraining, a.k.a. the duct tape of the CAPA world, doesn't resolve every issue? When human error is the cause, retraining is rarely the solution.

Remember, we're the same cabbage heads who can't be relied upon to extract cash from an ATM without incident. Who among us hasn't left a debit card behind? (Surely we knew that juggling sunglasses, car keys, a Big Gulp, and a wad of fresh tens was ill-advised.) But just because we're fallible doesn't mean we can't be clever. The clever among us reprogrammed those machines to sound an alert whenever they sensed we were in danger of walking off without our card. In fact, many ATMs now won't even allow you to begin a transaction until you remove your card.

Automating tasks that humans easily fumble, and sounding alerts when we fumble the tasks we do retain, can reduce risk. This is a core tenet of RBM. Computers are much better at processing large amounts of data, examining values, and finding outliers than we are. So let the computers have at it, while we concentrate on anticipating, detecting, and fixing higher-order problems. Because RBM implementations rely heavily on computer systems to reduce human error and enable remote activities, there most certainly will be a corresponding emphasis on validation auditing.
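
To make that concrete, here's a minimal sketch of the sort of check a computer can run tirelessly across every site in a study: flagging sites whose adverse-event reporting rate strays far from the study norm. The site data, the KRI, and the cutoff below are invented for illustration only; a real RBM platform would weigh many indicators at once.

```python
# A minimal, hypothetical sketch: flag sites whose AE-reporting rate is a
# statistical outlier relative to the rest of the study. The site data and
# the 1.5-sigma cutoff are illustrative, not any platform's actual method.
from statistics import mean, stdev

# AEs reported per enrolled subject, by site (invented numbers)
ae_rate_per_site = {
    "Site 101": 0.42, "Site 102": 0.38, "Site 103": 0.45,
    "Site 104": 0.03,  # suspiciously low: possible under-reporting
    "Site 105": 0.40, "Site 106": 0.44,
}

avg = mean(ae_rate_per_site.values())
sd = stdev(ae_rate_per_site.values())

for site, rate in ae_rate_per_site.items():
    if abs(rate - avg) > 1.5 * sd:  # illustrative cutoff only
        print(f"{site}: AE rate {rate:.2f} deviates from study mean "
              f"{avg:.2f} -- route to a CRA for follow-up")
```

The computer never tires of running this check nightly across dozens of KRIs and hundreds of sites; the humans step in only when something trips the wire.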

Different Risk for Different Roles

Everyone in an organization manages some type of risk, but which specific risks they manage is a function of their role in the company. Various teams concern themselves with the risk to patient safety, to data integrity, to study cost, to regulatory approval, to endpoint definition. Are adverse events (AEs) going unreported, are computer systems vulnerable, is enrollment flagging, are sites following the investigational plan, are endpoints too narrow? So, what new risks arise for QA auditors of RBM studies, and are any of them unique to the role?

Meta-Monitoring (!)

The FDA final guidance on RBM suggests the Monitoring Plan might include “planned audits of monitoring to ensure that sponsor and CRO staff conduct monitoring activities in accordance with the Monitoring Plan...”

OK. Make sure actual monitoring follows the plan. Check.

In the very next line, FDA adds, “Auditing is a quality assurance tool that can be used to evaluate the effectiveness of monitoring to ensure human subject protection and data integrity.”

Is the monitoring effective? That's very different from determining whether everybody is following the plan. Have we captured all of the relevant Key Risk Indicators (KRIs)? Is CRA intervention timed appropriately, or are the thresholds we've chosen triggering on-site visits that are premature or overdue?

In an RBM approach, since each Monitoring Plan* is customized for its study, feedback mechanisms that continually evaluate the effectiveness of the plan are critical. That's right – we have to monitor the Monitoring Plan, continually assessing what is working well and what isn't. Processes we identify as ineffective need to be changed, and the changes themselves subsequently re-evaluated. The final RBM guidance makes clear that FDA considers auditing a component of that feedback mechanism.
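
One way to picture that feedback loop: periodically ask of each KRI, "When it triggered a visit, did the visit actually find something?" The sketch below is a hypothetical illustration of the idea only; the KRI names, outcomes, and cutoffs are invented, not drawn from the FDA guidance or any particular tool.

```python
# Hypothetical sketch of auditing the feedback loop itself: after each
# review period, compare KRI triggers against what the resulting on-site
# visits actually found. All names and numbers are invented.
from collections import defaultdict

visit_outcomes = [
    # (KRI that triggered the visit, did the visit confirm a real issue?)
    ("enrollment_lag", True),
    ("enrollment_lag", False),
    ("query_aging", False),
    ("query_aging", False),
    ("low_ae_rate", True),
    ("low_ae_rate", True),
]

by_kri = defaultdict(list)
for kri, confirmed in visit_outcomes:
    by_kri[kri].append(confirmed)

for kri, results in by_kri.items():
    hit_rate = sum(results) / len(results)
    if hit_rate < 0.5:  # illustrative cutoff: trigger rarely pans out
        print(f"{kri}: {hit_rate:.0%} of triggered visits found an issue; "
              "threshold may be firing prematurely")
    elif hit_rate == 1.0:  # every trigger pans out: maybe firing too late
        print(f"{kri}: every triggered visit found an issue; "
              "threshold may be catching problems late")
```

A trigger that rarely pans out may be set too tight (premature visits); one that always pans out may be set too loose (problems caught only after they've grown). Either finding feeds a change to the Monitoring Plan, which then gets re-evaluated in the next cycle.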

The Monitor/Auditor Synergy

Beyond ensuring adequate monitoring, auditing has always complemented monitoring. While CRAs have focused on the data and daily activities of individual sites, auditors have looked at systems and procedures at both a site- and study-wide level. A CRA may flag an issue at a site and suggest remediation actions. An auditor can help determine whether remediation has been implemented effectively and whether other sites could also benefit from it. Monitors and auditors together ensure a study is operating in a state of control. Even as RBM transforms monitoring (and study management, in general), QA auditing will still play that symbiotic role. As a CRA, how are you managing risk? What has worked for you? Please comment below.

Special thanks to Clare Matti and Irene Rockwell for participating in and capturing the NCRAF roundtable discussion points.

*Use of the term Monitoring Plan is not intended to exclude other study plans, such as the Data Management Plan, that are relevant to the study management process.


Laurie Meehan is the social media manager for Polaris Compliance Consultants. She writes the company blog and eNewsletter, manages the company website, interacts with clients and colleagues on social media platforms, and manages the company's SOPs and internal training. Prior to joining Polaris in 2008, Meehan worked at a major telecommunications R&D company, where she provided consulting and training on telecom services and spoke at numerous industry forums. She holds a bachelor's degree in computer science from La Salle University and a master's degree in computer science from Drexel University.

