GxP Lifeline

Small Regulatory Steps for AI in SaMD


The potential of artificial intelligence (AI) in health care is staggering. When it comes to medical devices, the most popular subset of AI is machine learning (ML), in which an algorithm “learns” from inputs. A good example is recently authorized software as a medical device (SaMD) that helps doctors identify prostate cancer. (1) The key word here is “helps.” AI/ML in medical devices is used primarily to assist practitioners. Regulators seem hesitant to hand complete control of a person’s health over to an algorithm.

They also seem hesitant to issue regulations for SaMD with AI/ML. To be fair, the U.S. Food and Drug Administration (FDA) has an “Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan” and is in the process of gathering feedback from stakeholders. In the meantime, the FDA, Health Canada, and the United Kingdom’s Medicines and Healthcare products Regulatory Agency (MHRA) have released guiding principles for good machine learning practice in medical device development. (2) These principles, and other regulatory documents, can give medical device companies an indication of what future regulations might look like.

Importance of Data and Data Integrity

A primary concern of the guiding principles relates to data sets. One of the principles specifically calls for ensuring that data sets are representative of the intended patient population. That kind of accuracy depends on basic data integrity principles: data is the foundation of accurate AI because data is what the algorithm is trained on. In theory, data should be impartial and immune to some of the human weaknesses that can corrupt other forms of analysis. In practice, that’s not always the case, and when human bias does creep into the data, it undermines the accuracy of the algorithm.
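One simple way to operationalize the representativeness principle is to compare the demographic mix of a training data set against the intended patient population and flag any group whose share deviates beyond a tolerance. The sketch below is purely illustrative; the group names, figures, and 5-percentage-point threshold are assumptions, not anything prescribed by the guiding principles.

```python
# Illustrative sketch: flag demographic groups whose share of the training
# data diverges from the intended patient population. Group labels, shares,
# and the default threshold are hypothetical, not regulatory values.
from collections import Counter

def representativeness_gaps(train_labels, population_shares, threshold=0.05):
    """Return {group: observed_share - expected_share} for every group whose
    absolute deviation from the population share exceeds `threshold`."""
    counts = Counter(train_labels)
    total = sum(counts.values())
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > threshold:
            gaps[group] = round(observed - expected, 3)
    return gaps

# Hypothetical example: the training set over-represents younger patients.
train = ["under_65"] * 700 + ["over_65"] * 300
population = {"under_65": 0.55, "over_65": 0.45}
print(representativeness_gaps(train, population))
# → {'under_65': 0.15, 'over_65': -0.15}
```

In a real program, such a check would be one small part of a broader data-quality review; it catches gross sampling skew but says nothing about label quality or subtler sources of bias.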

Combating bias in data is a vital part of ensuring accurate AI. This is especially important in medical devices, or any other application that affects health. Part of eliminating bias involves using a diverse team to develop the AI. Molly K. McCarthy, MBA, BSN, RN-BC, Microsoft’s national director for the U.S. provider industry and chief nursing officer, suggested, “To promote fairness and inclusivity, engineers, developers, and coders should not only practice inclusive behavior, but also come from diverse backgrounds with a varied set of experiences.” (3)

Importance of Humans

Another theme from the guiding principles is what the regulators call “the Human-AI team.” How an algorithm performs in isolation is one thing; how it performs while being used by a person in real-world conditions isn’t necessarily the same. Beyond testing under real-world conditions, humans will still be the ultimate decision-makers when it comes to AI. AI-enabled SaMD won’t make decisions for doctors; it will be another tool that physicians can use.

It’s too soon for AI to make medical decisions without human involvement, and that time may never come given the constantly changing nature of AI and regulatory requirements for change control. The FDA’s “Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan” (4) discusses a predetermined change control plan. This still-theoretical mechanism would let manufacturers make anticipated modifications in a controlled manner, so the AI could improve and grow while the FDA retained assurance of safety and effectiveness. Balancing the potential of AI with its inherent risk is something regulators are still working out.


AI is making its mark in every sector, and in health care it’s creating new categories of medical devices that will keep coming to market and improving care. While this is beneficial, it brings its own set of challenges. Medical device companies that want to take advantage of the opportunity need to understand how to handle AI in their devices responsibly and what regulators expect.

AI-enabled SaMD isn’t the only disruptive trend in medical devices. To learn more about this trend and read about others, download our trend brief.


  1. “FDA Authorizes Software that Can Help Identify Prostate Cancer,” U.S. Food and Drug Administration, Sept. 21, 2021.
  2. “Good Machine Learning Practice for Medical Device Development: Guiding Principles,” U.S. Food and Drug Administration, Oct. 27, 2021.
  3. “Artificial Intelligence in Health: Ethical Considerations for Research and Practice,” Molly K. McCarthy, HIMSS, June 17, 2019.
  4. “Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan,” U.S. Food and Drug Administration, Jan. 2021.


Sarah Beale is a content marketing specialist at MasterControl in Salt Lake City, where she writes white papers, web pages, and is a frequent contributor to the company’s blog, GxP Lifeline. Beale has been writing about the life sciences and health care for over five years. Prior to joining MasterControl she worked for a nutraceutical company in Salt Lake City and before that she worked for a third-party health care administrator in Chicago. She has a bachelor’s degree in English from Brigham Young University and a master’s degree in business administration from DeVry University.
