Search Results (1-7 of 7)

Intersection of Performance, Interpretability, and Fairness in Neural Prototype Tree for Chest X-Ray Pathology Detection: Algorithm Development and Validation Study

While the neural prototype tree (NPT) is a promising approach to integrating interpretability with deep learning, its practical utility in CXR pathology detection must be justified by competitive performance, particularly in comparison with nontransparent deep learning classifiers. Beyond interpretability and performance, fairness is another important dimension to consider when adopting deep learning–based diagnostic tools for detecting CXR pathologies [31,32].

Hongbo Chen, Myrtede Alfred, Andrew D Brown, Angela Atinga, Eldan Cohen

JMIR Form Res 2024;8:e59045

Unsupervised Feature Selection to Identify Important ICD-10 and ATC Codes for Machine Learning on a Cohort of Patients With Coronary Heart Disease: Retrospective Study

Because of interpretability concerns in the health domain, selecting from the original features, rather than transforming them into new ones, is an essential step in reducing dimensionality. In this study, we demonstrated that the concrete autoencoder (CAE) methods performed best at selecting the most informative ICD and ATC codes in an unsupervised setting.

Peyman Ghasemi, Joon Lee

JMIR Med Inform 2024;12:e52896
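
The CAE referenced in this entry is the concrete autoencoder, which selects a discrete subset of the original columns by learning a Gumbel-softmax (Concrete) relaxation over the input features, so the retained features remain the original, interpretable ICD and ATC codes. Below is a minimal PyTorch sketch of that selector idea; the layer sizes, temperature schedule, and random data are illustrative assumptions, not the study's configuration.

    # Minimal sketch of a concrete autoencoder feature selector; all
    # hyperparameters and the random data below are illustrative assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ConcreteSelect(nn.Module):
        """Selects k of d_in input features via a Gumbel-softmax relaxation."""
        def __init__(self, d_in, k):
            super().__init__()
            self.logits = nn.Parameter(torch.zeros(k, d_in))  # one softmax per kept feature
            self.temperature = 10.0  # annealed toward 0 during training

        def forward(self, x):
            if self.training:
                g = -torch.log(-torch.log(torch.rand_like(self.logits) + 1e-9) + 1e-9)
                w = F.softmax((self.logits + g) / self.temperature, dim=-1)
            else:
                # Hard selection at test time: arg-max feature for each selector.
                w = F.one_hot(self.logits.argmax(dim=-1), self.logits.shape[1]).float()
            return x @ w.T  # (batch, k): selected (or softly mixed) features

    d, k = 200, 20                    # e.g., 200 ICD/ATC indicator columns, keep 20
    selector = ConcreteSelect(d, k)
    decoder = nn.Sequential(nn.Linear(k, 128), nn.ReLU(), nn.Linear(128, d))
    opt = torch.optim.Adam([*selector.parameters(), *decoder.parameters()], lr=1e-3)

    x = torch.rand(512, d)            # stand-in for the binary code matrix
    for step in range(200):
        opt.zero_grad()
        loss = F.mse_loss(decoder(selector(x)), x)  # reconstruct from the subset
        loss.backward()
        opt.step()
        selector.temperature = max(0.1, selector.temperature * 0.98)  # anneal

    selector.eval()
    print("selected feature indices:", selector.logits.argmax(dim=-1).unique().tolist())

Because reconstruction is the only training signal, the selection is fully unsupervised, matching the setting described in the excerpt.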

Development and Validation of a Robust and Interpretable Early Triaging Support System for Patients Hospitalized With COVID-19: Predictive Algorithm Modeling and Interpretation Study

Although some models had a low risk of bias, they had low to moderate discriminative power, were based on data from the early pandemic period, and had limited clinical interpretability [9,10]. Therefore, a robust, interpretable, and generalizable model with high discriminative power is needed to provide practical benefit in managing the next possible pandemic [12,13].

Sangwon Baek, Yeon joo Jeong, Yun-Hyeon Kim, Jin Young Kim, Jin Hwan Kim, Eun Young Kim, Jae-Kwang Lim, Jungok Kim, Zero Kim, Kyunga Kim, Myung Jin Chung

J Med Internet Res 2024;26:e52134

A Machine Learning Approach to Support Urgent Stroke Triage Using Administrative Data and Social Determinants of Health at Hospital Presentation: Retrospective Study

Although ML models can produce accurate predictions, they are often treated as black boxes that lack interpretability. This is an important problem, especially in medical care, where clinicians are often unwilling to accept machine recommendations without clarity about the underlying reasoning [57]. Yet, according to a recent review, very few ML studies in the medical domain have addressed explainability [58].

Min Chen, Xuan Tan, Rema Padman

J Med Internet Res 2023;25:e36477

Prediction Model of Osteonecrosis of the Femoral Head After Femoral Neck Fracture: Machine Learning–Based Development and Validation Study

Interpretability is defined as the ability to explain, or to provide meaning, in terms understandable to a human. Pursuing interpretability for black-box models helps improve users' trust in machine learning and supports human decision-making. Arrieta et al [29] distinguished between models that are transparent by design and those that must be interpreted with post hoc explainability techniques.

Huan Wang, Wei Wu, Chunxia Han, Jiaqi Zheng, Xinyu Cai, Shimin Chang, Junlong Shi, Nan Xu, Zisheng Ai

JMIR Med Inform 2021;9(11):e30079
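
The distinction above between transparent models and post hoc explanation can be made concrete with a small contrast: a logistic regression whose coefficients are directly readable versus a black-box model explained after the fact by permutation importance. The scikit-learn sketch below is illustrative only; the synthetic data and model choices are assumptions, not the study's pipeline.

    # Transparent model vs. post hoc explanation of a black box (illustrative).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Transparent: each coefficient is the feature's effect on the log-odds.
    transparent = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("log-odds coefficients:", transparent.coef_.round(2))

    # Black box + post hoc: importance = accuracy drop when a feature is shuffled.
    black_box = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
    result = permutation_importance(black_box, X_te, y_te, n_repeats=10, random_state=0)
    print("permutation importances:", result.importances_mean.round(3))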

Predicting Antituberculosis Drug–Induced Liver Injury Using an Interpretable Machine Learning Method: Model Development and Validation Study

Because a decision tree is a simple classifier composed of hierarchically organized binary decisions, its structure is also readily interpretable [48-50]. In addition, the model handles missing values well: when it searches for the best candidate split criterion during tree growth, it also assigns a default direction for missing values at each node [41].

Tao Zhong, Zian Zhuang, Xiaoli Dong, Ka Hing Wong, Wing Tak Wong, Jian Wang, Daihai He, Shengyuan Liu

JMIR Med Inform 2021;9(7):e29226
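
The "default direction" mechanism described in this entry is the sparsity-aware split finding used by gradient-boosted tree libraries such as XGBoost: each node learns which branch missing values should follow, so no imputation is required. A minimal sketch, assuming an xgboost installation and synthetic data:

    # Gradient-boosted trees route NaNs along a learned default branch per node.
    import numpy as np
    from xgboost import XGBClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    X[rng.random(X.shape) < 0.2] = np.nan  # knock out 20% of entries

    model = XGBClassifier(n_estimators=50, max_depth=3)
    model.fit(X, y)              # trains directly on data containing NaN
    print(model.predict(X[:5]))  # predicts on rows containing NaN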

Improving Clinical Translation of Machine Learning Approaches Through Clinician-Tailored Visual Displays of Black Box Algorithms: Development and Validation

Although notable for their impressive predictive ability, black-box ML models typically offer minimal interpretability, which limits their clinical adoption despite their promise for improving health care [1-6]. As a result, there is growing emphasis on interpretable ML, or explainable artificial intelligence, to explain how models make their decisions [6-8].

Shannon Wongvibulsin, Katherine C Wu, Scott L Zeger

JMIR Med Inform 2020;8(6):e15791
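
One example of the kind of clinician-facing visual display this entry discusses is a partial dependence plot, which shows how a black box's average prediction changes as a single input varies. A minimal scikit-learn sketch follows; the random forest and synthetic data are illustrative assumptions, not the authors' system.

    # Partial dependence: a simple visual explanation of a black-box model.
    import matplotlib.pyplot as plt
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import PartialDependenceDisplay

    X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Average predicted probability as features 0 and 1 vary over their ranges.
    PartialDependenceDisplay.from_estimator(model, X, features=[0, 1])
    plt.tight_layout()
    plt.show()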