Every day, you may receive dozens to hundreds of email and text messages. Some carry a “high importance” flag to grab your attention. Now imagine that you are a care manager responsible for dozens or hundreds of lives. How do you separate signal from noise when alerts arrive in your inbox? Traditional risk models assign a simple “high” or “low” value – perhaps a quantitative score at best.
KenSci uses model explainability: the ability of a model to convey why it has yielded a given score. We surface the risk factors, or machine learning features, that our model associates with a change in score at the population or per-person level. These factors are not causal; they have not caused the score. However, a user can make a more informed decision based on the factors and how they relate to a person’s risk score. Consider an example: a health plan member with pre-diabetes has not had an encounter with their primary care physician in the last year. Having learned from similar members and their trajectories, the model might associate this gap in care with an increased risk of diabetes progression. That explanation can help the health plan’s care manager initiate outreach to schedule an annual wellness visit or investigate access issues that keep the member from obtaining preventive care. Explanations are not the only alternative to flat numeric risk scores, but they surface the associated factors in a risk model at the moment a user needs them for decision-making.
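To make the idea concrete, here is a minimal sketch of how per-person explanations can be computed. The post does not specify KenSci’s method; this example uses the open-source SHAP library on a synthetic dataset, and the feature names (months_since_pcp_visit, hba1c, and so on) are hypothetical stand-ins for real member-level data.

```python
# A minimal sketch of per-person risk explanations. This is not KenSci's
# production method; it uses the open-source SHAP library on a synthetic
# dataset, and every feature name below is a hypothetical stand-in.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical member-level features (assumed for illustration only).
features = ["months_since_pcp_visit", "hba1c", "age", "bmi"]
rng = np.random.default_rng(seed=0)
X = pd.DataFrame(rng.normal(size=(500, len(features))), columns=features)

# Synthetic label standing in for "diabetes progression": here, longer
# gaps since a PCP visit and higher HbA1c raise the odds of the outcome.
y = ((X["months_since_pcp_visit"] + 0.5 * X["hba1c"]
      + rng.normal(size=500)) > 1.0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Per-person explanation: how much each feature pushed this member's
# score above or below the population baseline (in log-odds units).
explainer = shap.TreeExplainer(model)
member = X.iloc[[0]]
contributions = explainer.shap_values(member)[0]
for name, value in sorted(zip(features, contributions),
                          key=lambda pair: -abs(pair[1])):
    print(f"{name}: {value:+.3f}")
```

In a deployed system, the top contributions would be mapped back to human-readable risk factors (for example, “no primary care visit in the last 12 months”) and shown alongside the score – the shape of the explanations described above.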
Model explainability is one of our principles. It is available across all of our supervised machine learning work – models with a clear endpoint, such as the risk of an event or the progression of a condition. For more information on model explainability, see the body of work our team has published, linked below.
This post highlights work led by Dr. Muhammad Aurangzeb Ahmad (Principal Research Data Scientist), Vikas Kumar, PhD (Principal Data Scientist), and Carly Eckert, MD (Director of Clinical Informatics).
Prior publications by Dr. Muhammad Aurangzeb Ahmad:
KenSci Research Paper on Imputation in Explainable AI Models
Slides from KenSci’s Tutorial on Explainable Models for Healthcare at ACM 2018