The branch of machine learning focused on explaining machine learning predictions to humans.
Ensuring that machine learning models are transparent, so that machine learning systems do not take potentially harmful decisions.
Verifying that machine learning systems are bias-free, i.e., that they do not discriminate against people on the basis of socio-economic or demographic characteristics.
Ensuring that machine learning systems comply with regulations and are ethically sound.
Going beyond the artificial to enhance human decision-making with machine-learning-aided assistance.
KenSci was named a Cool Vendor in a Gartner Inc. report titled “Cool Vendors in Enterprise AI Governance and Ethical Response” by Jim Hare, Van Baker, Svetlana Sicular, Saniye Alaybeyi, Erick Brethenoux, and Alys Woodward. The report profiles five emerging vendors in the data and analytics market. “These vendors help organizations better govern their AI solutions, and make them more transparent and explainable,” Gartner stated.
One of the fears of enabling AI to make decisions in regulated industries is that AI models will be black boxes, with their decision-making logic and process hidden from end users, whether to protect intellectual property or out of fear that the algorithms will be exploited. This runs contrary to the assistive nature of AI, and there is increasing awareness that “Transparency in AI” will enhance trust in AI implementations and improve operations. In this context, transparency is a proxy for several characteristics of the AI model, including but not limited to (a) the modeling technique and the parameters used in training, (b) the reproducibility of the end-to-end process, and (c) the explainability of the results produced by the model.
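As one hypothetical sketch (not a description of any particular product), the three characteristics above could be captured in a machine-readable transparency record; every field name below is an illustrative assumption:

```python
import hashlib
import json

def transparency_record(technique, params, training_data, code_version):
    """Bundle the facets of model transparency named above:
    (a) modeling technique and training parameters,
    (b) fingerprints of the inputs and code for reproducibility,
    (c) a slot for per-prediction explanations filled at inference time."""
    # Hash the training data so the end-to-end process can be audited
    # for reproducibility without shipping the data itself.
    data_digest = hashlib.sha256(
        json.dumps(training_data, sort_keys=True).encode()
    ).hexdigest()
    return {
        "technique": technique,        # (a) modeling technique
        "training_params": params,     # (a) parameters used in training
        "data_sha256": data_digest,    # (b) reproducibility fingerprint
        "code_version": code_version,  # (b) version of the training pipeline
        "explanations": [],            # (c) appended per prediction
    }

# Illustrative values only.
record = transparency_record(
    technique="gradient_boosting",
    params={"n_trees": 200, "max_depth": 3},
    training_data=[{"age": 70, "risk": 1}, {"age": 30, "risk": 0}],
    code_version="v1.4.2",
)
record["explanations"].append({"patient_id": 17, "top_feature": "age"})
```

A record like this makes transparency auditable: a regulator can check that the same data digest and code version reproduce the same model, and that each prediction carries an attached explanation.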
Explainable models in Artificial Intelligence are often employed to ensure the transparency and accountability of AI systems. The fidelity of the explanations depends on the algorithms used as well as on the fidelity of the data. Many real-world datasets have missing values that can greatly influence explanation fidelity. The standard way to deal with such scenarios is imputation. This can, however, lead to situations where the imputed values correspond to counterfactual settings, i.e., feature values the subject never actually exhibited. Acting on explanations from AI models built on imputed values may lead to unsafe outcomes. In this paper, we explore different settings where AI models with imputation can be problematic and describe ways to address such scenarios.
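A minimal sketch of the problem the abstract describes, using a toy linear "risk model" whose per-feature contributions serve as the explanation; all values, weights, and field names are invented for illustration:

```python
def mean_impute(rows, col):
    """Fill missing (None) values in `col` with the column mean."""
    observed = [r[col] for r in rows if r[col] is not None]
    mean = sum(observed) / len(observed)
    return [dict(r, **{col: mean if r[col] is None else r[col]})
            for r in rows]

# Toy patient records; "hba1c" was never measured for the third patient.
patients = [
    {"age": 70, "hba1c": 9.0},
    {"age": 30, "hba1c": 5.0},
    {"age": 55, "hba1c": None},
]

imputed = mean_impute(patients, "hba1c")

# A toy linear model; its explanation for a record is the
# per-feature contribution weight * value.
weights = {"age": 0.01, "hba1c": 0.1}

def explain(record):
    return {f: weights[f] * record[f] for f in weights}

# The explanation for the third patient attributes risk to an hba1c
# of 7.0 (the column mean) that was never measured -- a counterfactual
# value a clinician might mistakenly act on.
print(explain(imputed[2]))
```

The unsafe step is the last one: the explanation looks exactly as trustworthy as one computed from measured data, which is why the paper argues that imputation status must be surfaced alongside the explanation.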
With the increasing ubiquity of machine learning in healthcare, there are increasing calls for machine learning and AI-based systems to be regulated and held accountable in healthcare. Interpretable machine learning models can be used for accountability in machine learning. Healthcare offers unique challenges for machine learning, where the demands for explainability, model fidelity, and performance in general are much higher than in most other domains. The time is ripe to address the notion of interpretability within the context of healthcare, the various nuances associated with it, the challenges related to interpretability that are unique to healthcare, and the future of interpretability in healthcare.
The drive towards greater penetration of machine learning in healthcare is accompanied by increasing calls for machine learning and AI-based systems to be regulated and held accountable in healthcare. Explainable machine learning models can be instrumental in holding machine learning systems accountable.
While machine learning has demonstrated significant value in healthcare, one pivotal impediment is the black box nature, or opacity, of many machine learning algorithms. Explainable models help move away from this black box nature.
Regulations that require explanations for machine learning decisions, such as the GDPR, mean that explainability will become a de facto requirement.
Speed up the adoption of machine learning systems in healthcare by making them more trustworthy to medical practitioners.
Optimization of machine learning systems by allowing humans to provide feedback on incorrect predictions.
It isn’t just about the result; it is about how the model got to the result and the journey that led to the insight.
If there is one thing that can make Artificial Intelligence truly assistive in healthcare, and less robotic, it is the capability to explain the predictions AI makes, so clinicians can trust the algorithm. With data as the new fuel and AI as the new electricity, the ability to generate explanations at scale, personalized to individual patients, is the new power grid: the backbone on which we build trust in the mass consumption of AI and ML.
Research, blogs, and other whitepapers on AI-led healthcare transformation.