Editor’s note: This is part one in a two-part series. The second installment can be found here.
What do these different products have in common?
These products all claim to use artificial intelligence/machine learning (AI/ML) software to solve medical challenges. All are medical devices on the market that were cleared by the U.S. Food and Drug Administration (FDA) through the 510(k) or De Novo process, and all were cleared before the FDA's proposed and, more importantly, not-yet-released regulatory framework for AI/ML, which outlines an approach for controlling post-release software modifications, though specifically for software as a medical device (SaMD). In fact, according to The Medical Futurist (1), more than 60 medical devices that claim to use AI/ML algorithms have been cleared by the FDA to date. And more are coming.
It is clear that the healthcare industry has been steadily transforming into a digital health model. This conversion to digital health generates vast amounts of data, and AI/ML has the potential to transform our healthcare system by using that data in novel ways.
The public has seen and recognizes the benefits of AI/ML in their personal lives: day-to-day productivity improved by well-connected devices and services, recommender systems for products, virtual assistants, and more. There is considerable hype around AI/ML technologies, fed by the media and by bold claims from investors. Manufacturers and healthcare organizations are now identifying areas where AI/ML can improve their products or processes. These efforts are fueled by the large potential benefits of AI/ML, and organizations hope to increase earnings through tools and support products that simplify development and improve time-to-market.
Given the potential of this still-developing technology to transform healthcare, the foremost goal is creating a regulatory framework that enables and encourages safe and effective use of AI/ML within medical devices. Regulatory bodies and standards developers are studying several important considerations for AI/ML algorithms as compared to traditional algorithmic methods.
One key consideration is that traditional algorithmic systems use deterministic logic: physics- and chemistry-based first-principles calculations, fixed limits, strict decision trees, and so on. In other words, deterministic algorithms aren't flexible. An AI/ML-based system, on the other hand, uses data-driven logic: its algorithms are "trained" on presented data inputs. This type of algorithm has the distinct advantage of handling complex inputs where deterministic logic might struggle. In the age of big data, purely deterministic logic is often impractical for SaMD because most human biological data is complex in nature.
As an example, a traditional deterministic algorithm for an inline blood detection sensor may use a look-up table that outputs blood/no-blood based on measured flow rates and measured opacity. The algorithm would have crisp detection edges that are repeatable and easily understandable. By comparison, an ML version of this system would be presented with measurement data (e.g., flow rate, opacity) along with the correct answer (blood/no-blood for that measurement data) and would "learn" by creating a data-driven mapping between the input measurements and the appropriate answer.
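The contrast can be sketched in a few lines of Python. Everything here is hypothetical: the threshold, feature values, and labels are invented for illustration, and a toy nearest-neighbor rule stands in for a real ML model.

```python
# Hypothetical illustration: the threshold, features, and training data are
# invented for this sketch and do not come from any real device.

# Deterministic version: fixed limits chosen by engineers up front.
def detect_blood_deterministic(flow_rate: float, opacity: float) -> bool:
    # Crisp, repeatable edge: opacity above a fixed threshold means blood.
    return opacity > 0.7

# Data-driven version: a toy 1-nearest-neighbor rule standing in for a real
# ML model. The mapping is learned from labeled examples rather than
# written down as explicit limits.
TRAINING_DATA = [
    # (flow_rate, opacity, label) where label 1 = blood, 0 = no blood
    (1.0, 0.20, 0),
    (2.0, 0.30, 0),
    (1.5, 0.90, 1),
    (3.0, 0.80, 1),
]

def detect_blood_learned(flow_rate: float, opacity: float) -> bool:
    # Classify a sample by the label of its closest training example.
    nearest = min(
        TRAINING_DATA,
        key=lambda ex: (ex[0] - flow_rate) ** 2 + (ex[1] - opacity) ** 2,
    )
    return nearest[2] == 1
```

In the learned version, the decision boundary is implied by the examples rather than stated explicitly, which is exactly what gives rise to the questions below.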
Even with this simple example, questions arise for the AI/ML system based on the differences between the deterministic and data-driven approaches.
Verification – What is the quality of the data used during learning? What is the accuracy of the ML engine for the tested operating space (the range of operating conditions in which the algorithm is designed to function)?
Validation – Does the learned operation cover the full expected user operating space?
Explainability – How does a user understand the presented results? Does this lead to doubt in the results that could result in usability risks?
Bias – Are there inherent biases in the training data set that will affect the accuracy of the AI/ML engine? For instance, was the device trained on data with a gender or racial makeup that is not representative of the expected operating space?
Adaptability – Are risks created when new data outside of the expected operating space is presented for analysis?
Planning – Does the manufacturer expect to use continuous learning, updating the algorithm based on real-world data? From a regulatory perspective, does such an update change the intended use, change the expected users, change identified risks, or present new risks?
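Several of these considerations (verification against the tested operating space, adaptability to data outside it) come down to knowing whether a new input resembles what the model saw during training. The idea can be sketched with a minimal, hypothetical range check in Python; a real device would use far more rigorous methods.

```python
# Hypothetical sketch: record the per-feature range seen during training and
# flag new samples that fall outside it before the model scores them.

def fit_operating_space(training_inputs):
    """Record the min/max of each feature across the training set."""
    columns = list(zip(*training_inputs))
    return [(min(col), max(col)) for col in columns]

def in_operating_space(bounds, sample):
    """True if every feature of the sample lies within its trained range."""
    return all(lo <= value <= hi for (lo, hi), value in zip(bounds, sample))

# Training inputs: (flow_rate, opacity) pairs, invented for illustration.
bounds = fit_operating_space([(1.0, 0.2), (2.0, 0.3), (3.0, 0.8)])

in_operating_space(bounds, (2.5, 0.5))  # within the trained range
in_operating_space(bounds, (9.0, 0.5))  # flow rate never seen in training
```

A sample that fails this check does not mean the model is wrong, only that its output is unverified for that input, which is precisely the adaptability risk noted above.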
With these numerous considerations, regulatory guidance is clearly needed. The second part of this series will cover a brief history of regulatory guidance on AI for SaMD, the status of the FDA's proposed framework, and what's in store for the future.