If you work at a regulated company, there is a fair chance you have heard the term computer system validation (CSV). The issue is that a “computer system” is technically just the hardware and software of a platform. Around that you need to add application software, ancillary equipment, people and procedures to form a “computerized system.” This computerized system then runs within a company’s operating infrastructure (or in the cloud if run within a cloud-hosted service environment).
Each of these elements within the computerized system needs to be either validated or qualified.
Validation is providing evidence-based proof that a process will consistently produce a result meeting its predetermined specifications and quality attributes.
Qualification is proving that some physical entity is fit for its intended use as defined in a set of requirements.
Total automated solutions usually comprise several distinct validation and qualification exercises that come together in a final performance qualification (PQ) or user acceptance test.
These days, very few automated solutions run independently on a standalone hardware device segregated from any other node on the network. A node is any physical device on a computer network that is able to send, receive and/or forward information.
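Because a node is simply a device that can answer on the network, a basic reachability probe is often one of the first pieces of objective evidence collected when qualifying a network segment. Below is a minimal sketch of such a probe; the example hostname in the comment is a placeholder, not a real endpoint.

```python
import socket

def node_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a TCP connection to a network node; True if the port answers.

    A crude reachability probe of the kind that might back a qualification
    test record; real qualification evidence would also capture timestamps,
    tester identity and expected results.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Illustrative use (placeholder address, not a real node):
# node_reachable("fileserver01.example.internal", 445)
```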
For this reason, network infrastructure that supports the computerized system also needs to be qualified/validated. Individual components are qualified, then the entire network as a whole is validated.
In a perfect world, this would be done and maintained independently of the computerized system, and then just updated or referenced whenever a new computerized system is added or updated.
In a common real-world scenario (where your computer infrastructure is being suitably managed but not formally validated), the infrastructure components addressed or utilized as part of a project can be separately qualified/validated as part of that defined automated system project. This is a good approach for a single project but is not sustainable as more and more computerized systems require validation.
Obviously, it is better to have a known validated network infrastructure and simply reference this as each automated system project utilizes components and processes on the network.
An issue that affects the long-term validation status of any system or device is the ownership of that element. Systems or devices with no owner (or confusion over who the owner is) are more likely to either not be validated correctly or not remain in a validated state.
Each automated system or device should have both a business owner (of the process) and a system owner (of the computer system) defined, and these individuals together are responsible for maintaining the validation status of the system. (As with all auditable quality processes, quality assurance will most likely be responsible for ensuring that the above happens – but not necessarily for actually doing it.)
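The ownership rule above lends itself to a simple inventory check: any system missing either owner is exactly the kind of orphan that drifts out of a validated state. A sketch, with field names that are assumptions rather than any standard schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SystemRecord:
    """Illustrative inventory entry; fields are assumptions for the sketch."""
    name: str
    business_owner: Optional[str]  # owner of the business process
    system_owner: Optional[str]    # owner of the computer system
    validated: bool = False

def ownership_gaps(inventory: list[SystemRecord]) -> list[str]:
    """Return names of systems missing either owner -- the systems most
    at risk of never being validated correctly or of losing that state."""
    return [s.name for s in inventory
            if not s.business_owner or not s.system_owner]
```

Run periodically against the system inventory, such a check gives quality assurance a concrete list to chase rather than a vague obligation.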
Like a computer systems validation master plan at a company, an IT infrastructure qualification plan defines the framework to perform the initial exercise, and the ongoing work to ensure that the infrastructure remains in compliance.
Breaking the network infrastructure into platforms and defining generically how each platform will be qualified is a logical way to plan the exercise. Example platforms are:
There are some key sections in an IT infrastructure qualification plan:
Once the framework for an IT infrastructure qualification plan has been started (all the details need not have been finalized), individual requirements documents for each platform can be created.
Separate component design documents can exist if required, or this information can be contained within a specific qualification document for a component.
A template qualification document for each type of component should be developed to assist with the qualification effort and with the collection of an agreed level of data and an agreed level of testing per component.
A risk assessment may be performed separately or embedded in the qualification specification to further restrict or enhance the level of testing required.
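The idea that criticality and risk together drive the depth of qualification testing can be sketched as a simple lookup. The three-tier scale below is an illustrative assumption, not a prescribed rating scheme:

```python
# Map a component's criticality and risk rating to a testing depth.
# The three-tier scale is an assumption for illustration.
TEST_LEVELS = {"low": 1, "medium": 2, "high": 3}

def required_test_level(criticality: str, risk: str) -> int:
    """Take the higher of the two ratings: a low-risk but highly critical
    component still warrants the deeper level of testing."""
    return max(TEST_LEVELS[criticality], TEST_LEVELS[risk])
```

Embedding a rule like this in the template qualification document is one way to make the agreed level of testing per component auditable rather than ad hoc.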
The intent of a network component installation qualification (IQ) is to ensure:
The level of instructions and the level of detail of configuration parameters captured in the IQ need to be commensurate with the criticality and risk of that component. The intent of a network component operational qualification (OQ) is to ensure that it has been adequately tested through positive and negative conditions in its operational ranges.
Again, the level of testing and the results captured also need to be commensurate with the criticality and risk of that component. As discussed previously, for many network components, the IQ and OQ can be combined into a single IOQ document.
For elements like cabling, it is good practice to maintain a cable diagram with suitably labelled cables and numbered/labelled ports.
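The labelling practice above can be backed by a consistency check over a cable register: every cable should carry a label and a port number at both ends. The entry format below is an assumption for the sketch:

```python
def unlabelled_cables(register: list[dict]) -> list[str]:
    """Return IDs of cables missing a label or port number at either end.

    Register entries are assumed to look like:
    {"id": "C-001", "a_label": "SW1", "a_port": 1,
     "b_label": "PatchA", "b_port": 7}
    """
    required = ("a_label", "a_port", "b_label", "b_port")
    return [c["id"] for c in register
            if not all(c.get(k) for k in required)]
```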
Each element is qualified independently and released as part of this qualification. As a whole, the network infrastructure is then validated. Prior to infrastructure release, tests should be performed to address required:
It is interesting to note that when third parties are used for network support or cloud computing functionality, service level agreements (SLAs) are normally written to document responsibilities, response times and levels of service. But when a company’s own IT department performs the same tasks, typically no such agreement exists. Some companies do have internal agreements or key performance indicators (KPIs), but whether these exist or not, regular routine internal audits should be conducted to ensure that general support, backups, routine monitoring, etc., are being correctly performed as documented.
Using a cloud computing infrastructure removes many of the in-house infrastructure qualification activities but not the responsibilities that go with them. To ensure that the activities are still assessed and performed, a risk assessment of the data center, hosting environment and hosted software is essential.
As part of this assessment, audits of the cloud infrastructure together with an SLA are recommended. Knowing who manages and supports the infrastructure on which your data is collected and retained remains the responsibility of the owners of that data.
Once you have a validated network infrastructure, it is important to maintain that state. A common failure point is the lack of suitable change control procedures when updating existing network components or adding new ones.
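The change-control gate described above can be sketched as a rule: a change to a qualified component keeps its validated flag only if the change carries an approved change-control record. The record fields below are assumptions, not any specific QMS workflow:

```python
def apply_change(component: dict, change: dict) -> dict:
    """Apply a change to a qualified component's record.

    Without an approved change-control record, the component's validated
    flag is cleared, forcing re-qualification before release.
    """
    updated = dict(component)  # leave the original record untouched
    updated.update(change.get("new_settings", {}))
    updated["validated"] = (component.get("validated", False)
                            and change.get("approved", False))
    return updated
```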
As with all validation exercises, considerable effort, training and investment are required to take an unvalidated network infrastructure to a validated state (depending on the maturity and current status of the environment).
Validating the entire network infrastructure must be done in line with other company CSV initiatives. That is, if the company is very new to the CSV culture, it may be better to only validate the network components relative to a specific automated project initially and focus on the entire network qualification when the organization’s CSV culture is a little more established.
Many computer validation exercises fail or cost far more than they should because the people on the team are inexperienced or don’t really understand what they are trying to achieve as part of the validation.
If in doubt about the current experience of individuals involved in the validation exercise, it is advisable to add some experienced personnel at key times during the project to assist with defining the framework, mentoring project personnel, and reviewing work performed.
Always remember that you are not performing validation tasks to simply satisfy auditors. You are primarily validating systems and processes to gain more understanding about them so that you maintain better control. The documentation of these efforts is to prove that you have actually performed the tasks you say you have.
The long-term benefits are both in the knowledge gained (which may be lost or forgotten over time) and the documentation of that knowledge (which may be read again to regain the knowledge).
Ian Lucas is a partner and director of SeerPharma Pty Ltd in Melbourne, Australia. He has over 30 years’ experience as a software developer and manager, and over 25 years’ experience implementing and validating manufacturing and quality management solutions for regulated industries. Ian has spoken at many international conferences and regularly trains on computer validation subjects. He can be reached at firstname.lastname@example.org. SeerPharma is a Sales and Service Partner of MasterControl.