GxP Lifeline

The Future of Responsible AI in Life Sciences



In life sciences, we are rapidly approaching a time when the use of artificial intelligence and machine learning (AI/ML) will be an industry norm.

Just last year, an international team demonstrated that the AI system they developed can detect breast cancer and, in some instances, outperform medical experts.1 The technology is not yet ready for clinical use, but its success illustrates the potential for AI to elevate our work as we strive to improve and save lives.

The impact of AI will be enormous, and the only thing that looms larger is our obligation to approach AI/ML responsibly.

Like the many health care practitioners who vow to do no harm, life sciences technical professionals must embrace equally lofty standards for AI/ML. As we set the bar, it must be higher than in other sectors because lives depend on it.

Build Trust With Responsible AI

Historically, people have tended to mistrust new technology, and that mistrust grows as complexity increases. The challenge of helping people trust and adopt new technology is not unique to AI, but the intricacies of AI, and the implications of entrusting it with a wealth of data to inform its learning process, are uncharted territory. Accenture reported that the most common worries about AI include:

  • Workforce displacement
  • Loss of privacy
  • Potential biases in decision-making
  • Lack of control over automated systems and robots2

Responsible AI provides the building blocks for a foundation of trust, without which AI will never see widespread adoption. Because trust carries so much weight, technology leaders including Google and Microsoft have created responsible AI models to guide organizations through effective methods for addressing the concerns above. While there is not yet a global standard, most responsible AI models share qualities similar to those discussed here, and all aim to assuage fear by creating a deliberate framework that is human-centric, private, unbiased, and transparent.

Human-Centric: A common misconception is that AI will replace people, but humans will continue to play a critical role. As one example, Accenture’s responsible AI model calls for humans to monitor the performance of algorithms to safeguard against problems such as bias and unintended consequences.3

Private: Data is necessary for effective ML, but individual privacy can never be compromised. In the life sciences, we often handle sensitive data, and our unwavering commitment to privacy and security must hold fast.

Unbiased: AI that draws on a biased data source will reach biased conclusions, and making decisions based on skewed data can be particularly dangerous in our field. PwC notes that a component of responsible AI is being more aware of bias and taking corrective action to improve a system’s decision-making.4
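
To make this idea concrete, the following is a minimal sketch (in Python, using the pandas library) of one simple bias check: comparing a model’s positive-prediction rates across patient subgroups. The dataset, column names, and tolerance are hypothetical illustrations rather than a prescribed method.

    import pandas as pd

    # Hypothetical audit data: one row per patient, recording the subgroup the
    # patient belongs to and the model's binary prediction for that patient.
    audit = pd.DataFrame({
        "subgroup":  ["A", "A", "A", "B", "B", "B", "B", "C", "C"],
        "predicted": [1,   0,   1,   0,   0,   1,   0,   1,   1],
    })

    # Positive-prediction rate per subgroup (a simple "demographic parity" view).
    rates = audit.groupby("subgroup")["predicted"].mean()
    print(rates)

    # Flag the model for review if the gap between subgroups exceeds an
    # agreed-upon tolerance (0.2 here is an arbitrary illustrative value).
    gap = rates.max() - rates.min()
    if gap > 0.2:
        print(f"Review needed: prediction-rate gap of {gap:.2f} across subgroups")

A check like this does not prove a model is fair, but making such audits routine is one practical way to take corrective action before skewed predictions reach patients.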

Transparent: Mistrust of technology can stem from not understanding how it operates, which is why an individual or the tool itself needs to be able to explain results and how a particular conclusion was reached. The Institute for Ethical AI & Machine Learning encourages people to develop tools “to continuously improve transparency and explainability of machine learning models where reasonable.”5
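
As one illustration of what explainability can look like in practice, the sketch below uses scikit-learn’s permutation importance to estimate which inputs a model relies on most. The synthetic data, feature names, and model choice are assumptions made purely for illustration, not a recommended validation approach.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    # Synthetic stand-in for a real clinical dataset (feature names are hypothetical).
    X, y = make_classification(n_samples=500, n_features=4, random_state=0)
    feature_names = ["age", "dose_mg", "lab_value", "visit_count"]

    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Permutation importance: how much does shuffling each feature degrade the
    # model's score? Larger drops mean the model leans more on that feature.
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for name, score in sorted(zip(feature_names, result.importances_mean),
                              key=lambda pair: pair[1], reverse=True):
        print(f"{name}: {score:.3f}")

Output like this gives reviewers a plain-language starting point for explaining why a model leans on particular inputs, which supports the transparency principle described above.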

Ultimately, these standards will be defined and described by regulatory bodies. For now, however, it is advantageous for the industry to establish its own agreed-upon framework and definitions of common terms to help inform the regulations that will be put forth.

The Path to Enforcing Responsible AI

Global standards for responsible AI are a foregone conclusion. The remaining questions relate to scope, timing, and which regulatory body or bodies will issue guidance that compels the rest of the world to follow suit. Regarding the latter, current frontrunners are the U.S. and the European Union (EU).

In 2021 alone, both the U.S. and EU made significant strides:

  • January 12 – The U.S. Food and Drug Administration (FDA) published its first “Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan.”6
  • March 31 – As reported by Harvard Business Review (HBR), the “five largest federal financial regulators in the United States released a request for information on how banks use AI, signaling new guidance is coming for the financial sector.”7
  • April 19 – The U.S. Federal Trade Commission (FTC) released what the HBR article described as “an uncharacteristically bold set of guidelines on truth, fairness, and equity in AI.”
  • April 21 – The European Commission issued a proposal for AI regulations.

The European Commission’s legal framework is the first of its kind and calls for a risk-based approach to AI, and the announcement of its release included a statement explaining that the Commission seeks to establish global norms.8 With the creation of the General Data Protection Regulation (GDPR), the EU set the bar for data security, and it is possible the EU could deliver a repeat performance with responsible AI.

The proposal has received mixed reactions. Brookings noted that portions of the framework are sound, but that some topics, such as the fairness of algorithms, are not given adequate attention, and that the general consensus in Silicon Valley remains that emerging technology should not be regulated.9 U.S. National Security Advisor Jake Sullivan expressed his support on social media, tweeting, “The United States welcomes the EU’s new initiatives on artificial intelligence. We will work with our friends and allies to foster trustworthy AI that reflects our shared values and commitment to protecting the rights and dignity of all our citizens.”10

Before the proposal becomes law, the European Parliament and member states need to provide their input, and if GDPR is any indication, the rollout of AI regulations will be a lengthy process. GDPR was proposed in 2012, approved by parliament four years later, and became law in 2018.11

As international guidance continues to take shape, numerous countries, including Canada, France, Russia, and China, have established their own regulations or standards. The U.S. is looking to do the same through a draft memorandum titled “Guidance for Regulation of Artificial Intelligence Applications,” which was issued in 2019, with comments requested the following year.12 Given the current pace, another iteration of the guidance can be expected soon.

Conclusion

Being on the cusp of transformation offers a unique vantage point from which to view our immediate and future needs. If responsible AI is to succeed, our immediate goal must be to continue developing reasonable regulatory guidance that is informed by the industry. Once established, the framework for responsible AI and its regulations must be allowed to evolve with advancements in technology so that their success endures.

In the life sciences sector, one of the more pressing needs for long-term success is that we come to an agreement today: the parameters in place for responsible AI at any given time should serve only as a starting point. The nature of our work demands that we uphold higher standards.

For instance, we must remain vigilant about both the positive and negative consequences of choices made on the basis of AI. This requires developing novel approaches to validating and verifying AI-driven decisions and ensuring that the data and models behind those decisions are of the highest quality. Only such high ideals can ensure that AI is effective in elevating our work to improve and save lives.


Sources:

  1. International evaluation of an AI system for breast cancer screening, S.M. McKinney, M. Sieniek, et al., Nature, January 1, 2020.
  2. Responsible AI: A Framework for Building Trust in Your AI Solution, Dominic Delmolino and Mimi Whitehouse, Accenture, 2018.
  3. Supra note 2.
  4. PwC’s Responsible AI, PwC.
  5. The Responsible Machine Learning Principles, The Institute for Ethical AI & Machine Learning.
  6. FDA Releases Artificial Intelligence/Machine Learning Action Plan, The U.S. Food and Drug Administration, January 12, 2021.
  7. New AI Regulations Are Coming. Is Your Organization Ready?, Andrew Burt, Harvard Business Review, April 30, 2021.
  8. Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence, European Commission, April 21, 2021.
  9. Machines learn that Brussels writes the rules: The EU’s new AI regulation, Mark MacCarthy and Kenneth Propp, Brookings, May 4, 2021.
  10. The United States welcomes the EU’s new initiatives…, Jake Sullivan, Twitter, April 21, 2021.
  11. What is GDPR? The summary guide to GDPR compliance in the UK, Matt Burgess, Wired, March 24, 2020.
  12. Request for Comments on a Draft Memorandum to the Heads of Executive Departments and Agencies, “Guidance for Regulation of Artificial Intelligence Applications,” Federal Register, January 13, 2020.


Rajesh Talpade is Senior Vice President of Product at MasterControl. He is responsible for Product Management and Design, enabling global life sciences enterprise companies to bring life-changing products to more people sooner. Prior to MasterControl, Talpade served as Vice President of Product at Clarifai, an artificial intelligence company that leverages machine learning and deep neural networks to identify and analyze images and videos. Talpade also previously worked at Google for close to six years on Mobile Ad Products, Content Delivery Network, and Network Management Products for the largest global IP network, all of which relied on Google’s ML expertise to improve the value delivered to customers.


[ { "key": "fid#1", "value": ["GxP Lifeline Blog"] } ]