Oct 30, 2012
Can you use automated testing tools as an FDA-regulated company?
Software test tools help development and testing teams verify functionality, ensure the reliability and security of the software they develop, and investigate software bugs. Off-the-shelf tools are available for every stage of software development; examples include static code analyzers, record-and-replay testers, regression testing frameworks, and bug trackers. Some software testing tool vendors offer an integrated suite that starts with requirements gathering and continues through development, testing, and support of the live system over the life of a project. Other vendors concentrate on a single part of the application development life cycle, such as testing alone.
Know, understand, and document your intended uses of the tool.
You can only get what you need from a tool if you know your requirements for using it. Understand the tool's native functionality when used "as is" and what must be configured or customized to get the performance you need. Document the tool's configurations and customizations. Review the native functions and the configurations, and map them back to your intended uses to assure that you have covered all your needs.
Test the tool using code you have already tested or verified by other means. Document your testing with test cases or scripts, and document the results, including objective evidence of the actual results that can be compared to the expected results. Do not simply record that the tool worked "as expected" or "passed." Use both good code and deliberately flawed code to test your tool, and keep those code samples as part of the test documentation. Perform fault insertion to assure that the tool behaves properly in every expected scenario, and test in the context of your own use. Understand the tool's limitations: every tool solves some problems but creates potential issues of its own, and you must know what they are.
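The validation approach above can be sketched in code. This is a minimal illustration, not a real analyzer: the `analyze` function here stands in for whatever tool you are validating (a real validation would invoke the actual tool), and the samples and case names are hypothetical. The point is the shape of the harness: known-good code, a deliberately inserted fault, and a record of the actual output as objective evidence rather than a bare pass/fail.

```python
import ast

# Stand-in for a real analysis tool: flags bare "except:" clauses.
# In a real validation you would invoke your actual tool here.
def analyze(source: str) -> list[str]:
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append(f"line {node.lineno}: bare except clause")
    return findings

# Known-good code and a deliberately flawed sample (fault insertion).
GOOD_CODE = "try:\n    risky()\nexcept ValueError:\n    pass\n"
BAD_CODE = "try:\n    risky()\nexcept:\n    pass\n"

def run_validation() -> list[dict]:
    """Run each sample, recording expected vs. actual as objective evidence."""
    cases = [
        {"case": "good code yields no findings", "source": GOOD_CODE, "expected": 0},
        {"case": "inserted fault is detected", "source": BAD_CODE, "expected": 1},
    ]
    evidence = []
    for c in cases:
        actual = analyze(c["source"])
        evidence.append({
            "case": c["case"],
            "expected_findings": c["expected"],
            "actual_findings": actual,  # keep the real output, not just "passed"
            "passed": len(actual) == c["expected"],
        })
    return evidence
```

The `evidence` records, together with the retained code samples, form the kind of test documentation the paragraph above calls for.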
Once you have mapped your intended-use requirements to the tool's native functions and any special configurations or customizations, and have documented your testing, including the objective evidence of the results, you have validated the tool for its intended uses. Now you can use it.
Once the tool is validated, monitor your use and the results of the tool. If you find that you have additional uses that were not validated, those will need to be validated prior to use. Likewise, if you find new "expected scenarios" that were unknown at the time of the original validation, additional documentation of the behavior of the tool under these new scenarios should be placed in the validation file.
One example of a commonly used code review tool is a static code analyzer. FDA recommends the use of static code analyzers in high-risk applications, especially for medical device software. Static code analyzers enforce a set of coding rules, and if your code does not conform to those rules, the tool will report many issues that you must review and resolve. Among them will be false positives: findings you know are not real defects in the code, each of which you still must evaluate and investigate. Once you have resolved them, the information on the false positives routinely identified by the tool can be reused, so that the next time the same false error is found in the same way, you do not need to repeat the investigation. Some companies change their coding standards to conform to the tool for all future development; however, because they often build on old code bases, they also maintain a list of false errors they will ignore or not investigate fully. Static code analyzers also allow you or the tool vendor to create custom configurations that tell the analyzer to ignore false errors the developers know are not real issues with the code.
One client of mine did extensive research to determine the best static code analyzer for their needs. They set up code modules containing common errors, complexities, and poor coding practices, as well as modules with good code, then ran several manufacturers' static code analyzers through these tests. They chose the tool that found the most types of errors and issues. They documented the limitations of the chosen analyzer so other verification methods could be used to find the errors it missed. They compiled a list of common false errors the tool reported, and they documented each code error, the investigation into it, the results of that investigation, and what they would do with each false error when the tool was used to analyze all their code modules.
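The false-error list described above can be maintained as a simple suppression layer in front of the analyzer's output. The sketch below is illustrative only: the `Finding` structure, rule names, and file paths are invented for the example, and a real analyzer would have its own suppression mechanism. The idea is that each suppressed fingerprint corresponds to a documented investigation on file, so triage separates new findings from ones already dispositioned.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    """One issue reported by the analyzer (hypothetical structure)."""
    rule: str   # e.g. "legacy-api-use"
    file: str
    line: int

# Fingerprints of findings already investigated and documented as false
# positives. Keying on (rule, file) rather than (rule, file, line) keeps
# the suppression stable when unrelated edits shift line numbers.
KNOWN_FALSE_POSITIVES = {
    ("legacy-api-use", "drivers/pump.c"),  # hypothetical entry
}

def triage(findings):
    """Split findings into those needing investigation and documented false positives."""
    to_investigate, suppressed = [], []
    for f in findings:
        if (f.rule, f.file) in KNOWN_FALSE_POSITIVES:
            suppressed.append(f)   # already investigated; evidence on file
        else:
            to_investigate.append(f)
    return to_investigate, suppressed
```

Each entry in the suppression set should trace back to the documented investigation that justified it, so the list itself becomes part of the validation file.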
Another commonly used tool is an automated "record and replay" testing tool. These tools record each keystroke as you run a test manually; you can then schedule the test to run at any time in the future, and the tool will replay each keystroke as if you were entering it yourself. The advantage of this approach is that the documentation you accumulate while recording, such as screen shots, not only shows that the code passed the test when run manually but also provides the baseline for comparison the first time the tool replays the test, demonstrating that the tool runs the manual tests correctly. To optimize testing, many of these tools let you modify the recorded keystrokes rather than rekeying them, insert negative testing, or change the script when the testing steps change. Modifications made outside the tool's record feature should be documented and independently reviewed to assure the changes are appropriate and correct, because the documented keystrokes are, in effect, the tool's code. You can have the tool continue to produce documented evidence of the results, or, after validation, you can choose to have it record only that tests passed or how they failed. You must investigate and resolve all failures.
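The record-and-replay idea can be reduced to a short sketch. This is a simplification under stated assumptions: real tools drive a GUI and capture keystrokes and screenshots, whereas here the application under test is a plain callable and the "evidence" is its returned output. The `dose_app` function is a hypothetical application invented for the example.

```python
# Minimal record-and-replay sketch, assuming the application under test
# can be driven through a simple callable interface.

def record(app, inputs):
    """Record a manual session: each input plus the output observed at record time."""
    return [{"input": i, "expected_output": app(i)} for i in inputs]

def replay(app, script):
    """Replay the recorded script, comparing each output to the recorded evidence."""
    results = []
    for step in script:
        actual = app(step["input"])
        results.append({
            "input": step["input"],
            "expected": step["expected_output"],
            "actual": actual,                   # objective evidence, not just pass/fail
            "passed": actual == step["expected_output"],
        })
    return results

# Hypothetical application under test: a dose calculator (0.5 mg per kg).
def dose_app(weight_kg):
    return round(weight_kg * 0.5, 1)

script = record(dose_app, [60, 72.5, 80])   # "manual" run, captured once
results = replay(dose_app, script)          # every step should pass on replay
```

As the paragraph above notes, any edit made to `script` outside the record step is a change to the test's "code" and should be reviewed independently.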
The use of software testing tools will increase the quality of your software application by efficiently detecting functional, performance, and security issues. Compared to manual testing, which can be inconsistent because of human error, testing tools are more consistent and repeatable in following your testing procedures and processes. Many tools can measure software performance throughout development, testing, and maintenance of the software product; others can assist in conducting and documenting risk analysis or in benchmarking against other software. Using these tools can improve your time-to-market or time-to-implementation through more frequent and consistent testing, finding, and fixing of issues and bugs. This improves product life cycle management and reduces your total cost of ownership. Once you have a validated testing tool, you can use it over and over, confident that it is giving you the information you need to demonstrate, to FDA, your customers, and all your stakeholders, that your software is being tested consistently and effectively.
Janis Olson is vice president of quality and regulatory services at EduQuest, Inc., a global team of FDA compliance experts. Janis worked for FDA for over 22 years in various positions, including investigator and director of information technology. She has a BS in Biology from Cornell University and an MS in Computer Systems from the University of Central Florida. She currently works as a consultant with EduQuest (www.EduQuest.net) assisting companies in complying with FDA regulations and understanding computer system development and validation. Contact Janis at www.EduQuest.net.