Understanding Interpretable Machine Learning

(or Explainable AI)

The branch of machine learning focused on providing human-understandable explanations of machine learning predictions

Ensuring Transparency

Ensuring that machine learning models are transparent so that potentially harmful decisions are not made by machine learning systems.

Providing Fairness

Verifying that machine learning systems are free of bias, i.e., that they do not discriminate against people on the basis of socio-economic or demographic characteristics.

Building Compliance

Ensuring that machine learning systems comply with regulations and are ethically sound.

Assistive Intelligence

Going beyond the artificial to enhance human decision-making with machine-learning-aided assistance.

KenSci named a Cool Vendor by Gartner in Enterprise AI Governance and Ethical Response

KenSci was named a Cool Vendor in the Gartner Inc. report “Cool Vendors in Enterprise AI Governance and Ethical Response” by Jim Hare, Van Baker, Svetlana Sicular, Saniye Alaybeyi, Erick Brethenoux, and Alys Woodward. The report profiles five emerging vendors in the data and analytics market. “These vendors help organizations better govern their AI solutions, and make them more transparent and explainable,” Gartner stated.

Interpretability and Transparency

One of the fears of enabling AI to make decisions in regulated industries is that the models will be black boxes: their decision-making logic and process will be hidden from end users, either to protect intellectual property or out of fear that the algorithms could be exploited. This runs contrary to the assistive nature of AI, and there is growing awareness that “transparency in AI” will enhance trust in AI implementations that aim to improve operations. In this context, transparency is a proxy for several characteristics of the AI model, including but not limited to (a) the modeling technique and the parameters used in training, (b) the reproducibility of the end-to-end process, and (c) the explainability of the results produced by the model.
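As an illustration of these three characteristics, the sketch below is a minimal, hypothetical example using scikit-learn (not KenSci's implementation): it logs the modeling technique and training parameters, fixes a random seed for reproducibility, and reports per-feature contributions that explain one prediction.

```python
# Minimal sketch (illustrative only): transparency as (a) technique/parameters,
# (b) reproducibility, and (c) explainability of an individual prediction.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# (b) Reproducibility: fix the seed so the end-to-end run can be repeated exactly.
SEED = 42
X, y = make_classification(n_samples=500, n_features=5, random_state=SEED)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

model = LogisticRegression(random_state=SEED, max_iter=1000).fit(X, y)

# (a) Modeling technique and training parameters, logged alongside the model.
print("technique:", type(model).__name__)
print("parameters:", model.get_params())

# (c) Explainability: for a linear model, coefficient * feature value gives a
# per-feature contribution to the score for one record.
x = X[0]
contributions = model.coef_[0] * x
for name, value, contrib in zip(feature_names, x, contributions):
    print(f"{name}: value={value:.2f}, contribution={contrib:+.3f}")
print("predicted probability:", model.predict_proba(x.reshape(1, -1))[0, 1])
```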

Imputation Challenges in Explainable AI

Explainable models in artificial intelligence are often employed to ensure the transparency and accountability of AI systems. The fidelity of the explanations depends on the algorithms used as well as on the fidelity of the data. Many real-world datasets have missing values that can greatly influence explanation fidelity. The standard way to deal with such scenarios is imputation. This can, however, lead to situations where the imputed values correspond to counterfactuals, i.e., values the individual never actually had. Acting on explanations from AI models built on imputed values may lead to unsafe outcomes. In this paper, we explore different settings where AI models with imputation can be problematic and describe ways to address such scenarios.
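To make the hazard concrete, here is a minimal, hypothetical sketch (illustrative only, not the method from the paper): missing lab values are mean-imputed, and a linear explanation then attributes part of the prediction to values the patient never actually had.

```python
# Minimal sketch (illustrative only): how imputed values can leak counterfactual
# information into an explanation.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy dataset: two lab measurements, with some values missing at random.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)
missing_mask = rng.random(X.shape) < 0.2
X_missing = X.copy()
X_missing[missing_mask] = np.nan

# Standard practice: mean imputation before fitting the model.
imputer = SimpleImputer(strategy="mean")
X_imputed = imputer.fit_transform(X_missing)
model = LogisticRegression().fit(X_imputed, y)

# Explain one patient whose second lab value was never measured.
i = int(np.argmax(missing_mask[:, 1]))
x_imputed = X_imputed[i]
contributions = model.coef_[0] * x_imputed
for j, contrib in enumerate(contributions):
    flag = " (IMPUTED: refers to a value the patient never had)" if missing_mask[i, j] else ""
    print(f"feature_{j}: value={x_imputed[j]:.2f}, contribution={contrib:+.3f}{flag}")
```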

The Need for Explainable ML

With the increasing ubiquity of machine learning in healthcare, there are growing calls for machine learning and AI-based systems to be regulated and held accountable. Interpretable machine learning models are one way to provide that accountability. Healthcare poses unique challenges for machine learning, where the demands for explainability, model fidelity, and performance are generally much higher than in most other domains. The time is now to address the notion of interpretability within the context of healthcare, the various nuances associated with it, the challenges related to interpretability that are unique to healthcare, and the future of interpretability in healthcare.

Watch KenSci’s tutorial on Explainable Models for Healthcare AI

Holding Machine Learning Systems Accountable

The drive towards greater penetration of machine learning in healthcare is accompanied by increased calls for machine learning and AI-based systems to be regulated and held accountable. Explainable machine learning models can be instrumental in holding such systems accountable.

Healthcare offers unique challenges for machine learning, where the demands for explainability, model fidelity, and performance are generally much higher than in most other domains.

Download our tutorial deck

Real world applications

Why healthcare needs Explainable AI models

While machine learning in healthcare has demonstrated significant value, one pivotal impediment is the black-box nature, or opacity, of many machine learning algorithms. Explainable models help move away from that black box (a sketch illustrating this idea follows these applications).

Compliances

Regulations requiring explanations for machine learning decisions (such as GDPR) mean that explainability will become a de facto requirement.

Trusted

Speed up the adoption of machine learning systems in healthcare by making them more trustworthy for medical practitioners.

Conversational

Optimization of machine learning systems by allowing humans to provide feedback on incorrect predictions.

Assistive

It isn’t just about the result; it is about how the model arrived at the result and the journey that led to the insight.
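As a concrete illustration of moving away from a black box (referenced above), here is a minimal, hypothetical sketch: a small, interpretable decision tree is fit as a global surrogate that mimics a black-box model's predictions so its rules can be read. This is a standard surrogate-model technique and not a description of KenSci's product.

```python
# Minimal sketch (illustrative only): a global surrogate decision tree that
# approximates a black-box model so its decision rules can be inspected.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)

# The "black box": accurate but hard to explain directly.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
black_box_predictions = black_box.predict(X)

# The surrogate: a shallow tree trained to mimic the black box's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, black_box_predictions)

# How faithfully does the surrogate reproduce the black box?
fidelity = (surrogate.predict(X) == black_box_predictions).mean()
print(f"surrogate fidelity to black box: {fidelity:.2%}")

# Human-readable rules approximating the black box's behaviour.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(X.shape[1])]))
```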

If there is one thing that can make Artificial Intelligence truly Assistive in healthcare and less robotic it remains the capability to explain the predictions AI makes, so clinicians can trust the algorithm. With Data as the new Fuel and AI as the new electricity (energy), the ability to generate explanations at scale personalized to individual (patients) is the new power-grid: the backbone on which we build the trust in mass consumption of AI and ML.

Ankur Teredesai, Co-founder & CTO, KenSci
Talk to us

Access Our Latest Thinking

Research, blogs, and whitepapers on AI-led healthcare transformation

Case Study

See how Advocate Aurora Health is fighting the opioid battle

May 13, 2020

Blog

How to accelerate the movement of data onto Azure in FHIR format

March 17, 2020

Webinar

SLUHN is fighting COVID-19 by staying ahead with a real-time command center

September 4, 2020

Podcast

HIT Like a Girl ~ Health from Corinne Stroum on Healthcare Informatics

August 12, 2020

Press Release

SCAN Health and KenSci are leveraging AI to help senior members

August 11, 2020