We are a group of clinicians, research scientists, engineers, and product leaders committed to expanding the applications of Machine Learning in healthcare. We explore diverse elements of AI in healthcare to build toward secure, explainable, fair, and responsible outcomes.
We work closely with our product and engineering leaders to integrate and expand our product capabilities through applications of responsible AI, recommendation systems, phenotyping, imaging, time series, and sequence models in healthcare.
September 02, 2021
KenSci has released an open-source tool (fairMLHealth), including tutorials and videos, to assist in fair and equitable design and outcomes for healthcare ML
#fairmlhealth #responsibleai
June 23, 2021
How KenSci is meeting the demands of more trustable and accountable AI using six distinct pillars: explainability, fairness, robustness, privacy, security, and transparency
#responsibleai #pillars
August 27, 2020
Responsible ML in healthcare drives adoption of ML and embeds fairness in the development of healthcare AI/ML tools. This tutorial is motivated by the need to comprehensively study fairness in applied ML in healthcare
#fairmlhealth #responsibleai
July 15, 2021
Using recommendation system methods, KenSci is reducing the overhead for primary care physicians in identifying missing diagnosis codes for member care services
#suspectdx #scanhealth
KenSci's research team helps to define and establish the standards and safeguards for responsible and efficient adoption of AI in healthcare.
Meeting the demands of more trustable & accountable AI
Suppose a model predicts a patient has a high risk of dying within the next three months, but the physician disagrees with this assessment. In this case, the physician would need to know why the model made this prediction in order to inform action.
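One common way to surface this kind of per-patient reasoning is feature attribution. The sketch below is purely illustrative and is not KenSci's implementation: it trains a toy risk model on hypothetical features and uses the open-source shap package to show which features pushed one patient's score up or down.

```python
# Illustrative sketch only: per-patient explanation with the open-source
# "shap" package on a toy mortality-risk model. Feature names, data, and
# the label definition are hypothetical, not from any KenSci product.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "age": rng.integers(40, 95, 500),
    "admissions_last_year": rng.integers(0, 6, 500),
    "albumin": rng.normal(3.5, 0.6, 500),
})
# Hypothetical 90-day mortality label for demonstration purposes
y = ((X["age"] > 80) & (X["albumin"] < 3.2)).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Explain a single patient's predicted risk so a reviewer can see
# each feature's contribution to the score.
explainer = shap.TreeExplainer(model)
patient = X.iloc[[0]]
contributions = explainer.shap_values(patient)
print(dict(zip(X.columns, np.ravel(contributions))))
```

A clinician who disagrees with the prediction can then check whether the drivers of the score reflect clinically plausible factors or an artifact of the data.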
What if the model has substantially lower predictive performance for minority or vulnerable patients? Fair ML models are needed to ensure equal treatment of various populations.
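A simple first check for this is disaggregated evaluation: reporting model performance separately for each patient subgroup. The sketch below is a minimal, hypothetical example using synthetic data and standard scikit-learn metrics; KenSci's fairMLHealth package provides fuller fairness comparisons and tutorials.

```python
# Illustrative sketch only: compare model performance across subgroups.
# Column values, group labels, and scores here are synthetic placeholders.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "y_true": rng.integers(0, 2, 1000),          # observed outcomes
    "y_score": rng.random(1000),                 # model risk scores
    "group": rng.choice(["group_a", "group_b"], 1000),  # subgroup label
})

# A large gap in AUROC between subgroups signals potential unfairness
# and warrants deeper investigation.
per_group_auc = df.groupby("group").apply(
    lambda g: roc_auc_score(g["y_true"], g["y_score"])
)
print(per_group_auc)
```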
If a model is built on data from a population in New York, how will it fare for the African American population in Alabama? Data quality may differ, and insufficient data may have been collected. This model would need testing for robustness across cohorts.
AI models can be used to make inferences about a patient that may be harmful if disclosed publicly. Model inversion would allow a malicious entity to infer the values of sensitive attributes, such as a rare disease or disability. Other entities could in turn use this information to discriminate against the patient.
Data for some models may come from multiple sources. If a malicious person gains access to one of these sources, they can influence how the model is trained, resulting in incorrect predictions and potential harm. Thus, the security of AI systems is paramount.
Suppose a model recommends that a patient be sent to hospice care, but the patient survives for more than a year. In this case, the AI system, its corresponding pipeline, and the underlying infrastructure should be auditable. Accountability ensures that systems are responsible and can be improved in the future.
Presented at leading forums around the globe