Search Results (1 to 7 of 7 Results)
While the NPT presents a promising innovation in integrating interpretability with deep learning capabilities, its practical utility in CXR pathology detection needs to be justified with competitive performance, particularly in comparison with nontransparent deep learning classifiers.
Besides interpretability and performance, fairness is another important dimension when considering adopting deep learning–based diagnostic tools for detecting CXR pathologies [31,32].
JMIR Form Res 2024;8:e59045

Due to interpretability concerns in the health domain, selecting from original features, rather than transforming them into new features, is an essential step in reducing dimensionality. In this study, we demonstrated that the CAE methods performed the best in selecting the most informative ICD and ATC codes in an unsupervised setting.
JMIR Med Inform 2024;12:e52896
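The distinction the excerpt draws, selecting original features so that each retained dimension is still a real ICD or ATC code, rather than transforming features into uninterpretable mixtures, can be illustrated with a toy sketch. The data and the variance-based selector below are hypothetical simplifications (the study itself used CAE-based selection), but they show why selection preserves interpretability:

```python
import numpy as np

# Hypothetical toy matrix: rows = patients, columns = binary indicators for
# 10 original ICD/ATC codes (synthetic data, not from the cited study).
rng = np.random.default_rng(0)
code_prevalence = [0.5, 0.02, 0.5, 0.01, 0.5, 0.5, 0.03, 0.5, 0.5, 0.02]
X = (rng.random((200, 10)) < code_prevalence).astype(float)

def select_top_k_by_variance(X, k):
    """Unsupervised selection of ORIGINAL columns (no transformation).
    A simple stand-in for the study's selection step: each kept index
    still maps back to a concrete, clinician-readable code."""
    variances = X.var(axis=0)
    keep = np.argsort(variances)[::-1][:k]  # indices of the k most variable codes
    return np.sort(keep)

selected = select_top_k_by_variance(X, k=6)
# Contrast: PCA-style transformation would return linear mixtures of all
# 10 codes, and each new dimension would have no direct clinical meaning.
print(selected)
```

Here the six common codes are retained and the rare, near-constant ones dropped; the output is a list of code indices a clinician can inspect directly, which is the interpretability argument for selection over transformation.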

Although there were some models with a low risk of bias, these models had low to moderate discriminative power, were based on the data from the early pandemic period, and had limited clinical interpretability [9,10]. Therefore, the development of a robust, interpretable, and generalizable model with high discriminative power is required to provide practical benefit in managing the next possible pandemic [12,13].
J Med Internet Res 2024;26:e52134

Although ML models can produce accurate predictions, they are often treated as black-box models that lack interpretability. This is an important problem, especially in medical care, because clinicians are often unwilling to accept machine recommendations without clarity regarding the underlying reasoning [57]. However, according to a recent review, the number of ML studies in the medical domain that address explainability is very limited [58].
J Med Internet Res 2023;25:e36477

Interpretability is defined as the ability to explain a model, or to provide its meaning, in terms understandable to a human. Pursuing interpretability for black-box models helps improve users’ trust in a machine learning model and supports human decision-making. Arrieta et al [29] summarized and distinguished between transparent models and those that can be interpreted by post hoc explainability techniques.
JMIR Med Inform 2021;9(11):e30079
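Permutation importance is one of the model-agnostic post hoc techniques in the family Arrieta et al survey: it explains an opaque predictor by measuring how much accuracy drops when each feature's column is shuffled. The sketch below uses a hypothetical "black box" (a hard-coded rule standing in for any trained model) purely to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)  # only feature 0 actually drives the outcome

def black_box_predict(X):
    """Stand-in for an opaque trained model (hypothetical)."""
    return (X[:, 0] > 0).astype(int)

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Post hoc, model-agnostic explanation: a feature's importance is the
    mean accuracy drop when its column is randomly permuted."""
    rng = np.random.default_rng(seed)
    base_acc = (predict(X) == y).mean()
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # break the feature-outcome link
            drops.append(base_acc - (predict(X_perm) == y).mean())
        importances[j] = np.mean(drops)
    return importances

imp = permutation_importance(black_box_predict, X, y)
# Feature 0 shows a large accuracy drop; features 1 and 2 show none,
# correctly revealing what the "black box" relies on.
```

Because the procedure only calls `predict`, it applies unchanged to any classifier, which is exactly what distinguishes post hoc techniques from transparency built into the model itself.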

Since the decision tree is a simple classifier composed of hierarchically organized dichotomous determinations, its structure also demonstrates good interpretability [48-50]. In addition, the model handles missing values well: when it searches for the best candidate split criterion during tree growth, it also assigns a default direction for missing values at each node [41].
JMIR Med Inform 2021;9(7):e29226
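The "default direction" idea from the excerpt can be shown at the scale of a single split node. This toy sketch (hypothetical data; a simplification of the sparsity-aware split search cited as [41]) tries every threshold and, for each, both candidate directions for missing values, keeping whichever combination misclassifies the fewest samples:

```python
def best_split_with_default(xs, ys):
    """Single-node sketch: find (threshold, default direction for missing
    values) minimizing misclassifications, where each leaf predicts its
    majority class. xs uses None to mark missing feature values."""
    present = [(x, y) for x, y in zip(xs, ys) if x is not None]
    missing_ys = [y for x, y in zip(xs, ys) if x is None]
    best = None
    for threshold in sorted({x for x, _ in present}):
        for default_left in (True, False):
            left = [y for x, y in present if x <= threshold]
            right = [y for x, y in present if x > threshold]
            # Route all missing-value samples in the candidate default direction.
            if default_left:
                left = left + missing_ys
            else:
                right = right + missing_ys
            # Errors if each leaf predicts its majority class.
            err = sum(min(side.count(0), side.count(1))
                      for side in (left, right) if side)
            if best is None or err < best[0]:
                best = (err, threshold, default_left)
    return best  # (error count, threshold, send-missing-left?)

# Toy data: class 1 samples have large or missing values.
xs = [1.0, 2.0, 3.0, None, 8.0, 9.0, None]
ys = [0,   0,   0,   1,    8.0 and 1, 1,   1]
err, thr, default_left = best_split_with_default(xs, ys)
# The learned split sends missing values right, alongside the other class-1
# samples, achieving zero error on this toy set.
```

Learning the default direction from data, rather than imputing, is why such trees can train and predict directly on records with gaps.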

Although notable for their impressive predictive ability, ML black-box predictions are often characterized by minimal interpretability, limiting their clinical adoption despite their promise for improving health care [1-6]. As a result, there is growing emphasis on the field of interpretable ML, or explainable artificial intelligence, to provide explanations of how models make their decisions [6-8].
JMIR Med Inform 2020;8(6):e15791