
Search Results (1-7 of 7)



Survival After Radical Cystectomy for Bladder Cancer: Development of a Fair Machine Learning Model

In-processing techniques incorporate fairness constraints into the model training process, aiming to steer the model toward producing fair predictions. One such example is the exponentiated gradient method, which implements a reduction approach: the underlying learner is treated as a black-box optimizer, and the algorithm iteratively reweights the training data points based on the current model's predictions and a chosen fairness metric.
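The reduction idea described above can be sketched in a few dozen lines. The sketch below is a toy illustration only: the dataset, step size, and bound are hypothetical, the "black box" is a weighted decision stump rather than a real model, and the single fairness metric is the demographic parity gap. Production implementations (e.g., the exponentiated gradient reducer in the fairlearn library) generalize this to arbitrary estimators and constraint sets.

```python
import math

# Toy data: (feature x, sensitive attribute a, binary label y).
# Hypothetical numbers chosen so group a=1 has higher x on average.
DATA = [(0.1, 0, 0), (0.2, 0, 0), (0.4, 0, 1), (0.5, 0, 1),
        (0.6, 1, 0), (0.7, 1, 1), (0.8, 1, 1), (0.9, 1, 1)]

def fit_stump(targets, weights):
    """Black-box weighted learner: best threshold rule in either direction."""
    cuts = sorted({x for x, _, _ in DATA}) + [float("inf")]
    best_err, best_rule = float("inf"), None
    for t in cuts:
        for geq in (True, False):
            err = sum(w for (x, _, _), tgt, w in zip(DATA, targets, weights)
                      if (int(x >= t) if geq else int(x < t)) != tgt)
            if err < best_err:
                best_err, best_rule = err, (t, geq)
    t, geq = best_rule
    return lambda x: int(x >= t) if geq else int(x < t)

def dp_gap(predict):
    """Demographic parity gap: selection rate of group 0 minus group 1."""
    rates = []
    for g in (0, 1):
        grp = [x for x, a, _ in DATA if a == g]
        rates.append(sum(predict(x) for x in grp) / len(grp))
    return rates[0] - rates[1]

def exponentiated_gradient(n_iter=30, eta=1.0, bound=5.0):
    """Reduction: reweight/relabel the data, retrain the black box, repeat."""
    n0 = sum(1 for _, a, _ in DATA if a == 0)
    n1 = len(DATA) - n0
    theta = [0.0, 0.0]   # log-multipliers for gap <= 0 and -gap <= 0
    clfs = []
    for _ in range(n_iter):
        z = 1.0 + sum(math.exp(th) for th in theta)
        lam = bound * (math.exp(theta[0]) - math.exp(theta[1])) / z
        targets, weights = [], []
        for x, a, y in DATA:
            # Cost difference between predicting 1 and 0 under the Lagrangian.
            d = 1.0 / n0 if a == 0 else -1.0 / n1
            delta = (1 - y) - y + lam * d
            targets.append(0 if delta > 0 else 1)
            weights.append(abs(delta))
        clfs.append(fit_stump(targets, weights))
        gap = dp_gap(clfs[-1])
        theta[0] += eta * gap        # exponentiated (multiplicative) update
        theta[1] += eta * -gap
    # The result is a randomized classifier: average over the iterates.
    return lambda x: sum(c(x) for c in clfs) / len(clfs)
```

On this toy set an unconstrained stump selects group 1 far more often than group 0 (gap of -0.5), while the averaged iterate of the reduction drives the demographic parity gap toward 0 at some cost in accuracy.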

Samuel Carbunaru, Yassamin Neshatvar, Hyungrok Do, Katie Murray, Rajesh Ranganath, Madhur Nayan

JMIR Med Inform 2024;12:e63289

Intersection of Performance, Interpretability, and Fairness in Neural Prototype Tree for Chest X-Ray Pathology Detection: Algorithm Development and Validation Study

Increasing the size of the tree can enhance the NPT’s expressivity; however, a larger tree leads to a more complex decision-making process, which reduces the classifier’s interpretability and can impact its performance and fairness. Investigating the relationship between interpretability, performance, and fairness will provide the basis for future studies to better align these 3 dimensions within the NPT classifier for CXR pathology detection.

Hongbo Chen, Myrtede Alfred, Andrew D Brown, Angela Atinga, Eldan Cohen

JMIR Form Res 2024;8:e59045

Resilient Artificial Intelligence in Health: Synthesis and Research Agenda Toward Next-Generation Trustworthy Clinical Decision Support

However, the development process of health AI and the generalization and fairness of the resulting models face significant challenges due to the inherent biases, uncertainty, variability, and quality levels of real-world data (RWD). These challenges include variable information across different settings and over time, biases affecting underrepresented groups, uncertainty from missing or overlapping information, and data quality (DQ) issues such as incomplete or implausible information.

Carlos Sáez, Pablo Ferri, Juan M García-Gómez

J Med Internet Res 2024;26:e50295

Developing Ethics and Equity Principles, Terms, and Engagement Tools to Advance Health Equity and Researcher Diversity in AI and Machine Learning: Modified Delphi Approach

Thus, the Ethics and Equity Workgroup (EEWG) was formed within the AIM-AHEAD Consortium to ensure that ethics and fairness are at the forefront of AI and ML applications to build equity in biomedical research, education, and health care. Activities within the workgroup have included deliberations and discussions to develop and reach consensus on actionable guiding principles, a glossary of key terms, and other engagement tools to encourage greater attention to ethics and equity in AI and ML development.

Rachele Hendricks-Sturrup, Malaika Simmons, Shilo Anders, Kammarauche Aneni, Ellen Wright Clayton, Joseph Coco, Benjamin Collins, Elizabeth Heitman, Sajid Hussain, Karuna Joshi, Josh Lemieux, Laurie Lovett Novak, Daniel J Rubin, Anil Shanker, Talitha Washington, Gabriella Waters, Joyce Webb Harris, Rui Yin, Teresa Wagner, Zhijun Yin, Bradley Malin

JMIR AI 2023;2:e52888

Architectural Design of a Blockchain-Enabled, Federated Learning Platform for Algorithmic Fairness in Predictive Health Care: Design Science Study

The definition of fairness in ML is 2-fold: statistical notions of fairness and individual notions of fairness [13]. Statistical definitions of fairness refer to a guarantee of parity across protected demographic groups based on statistical measures, whereas individual definitions of fairness require equal treatment for individuals with similar features [13,14].
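The two notions can be contrasted with a small check. Everything below is hypothetical: the records, the 0.5 decision threshold, and the Lipschitz constant are illustrative choices, and real similarity metrics for individual fairness are far harder to define than a one-dimensional feature distance.

```python
# Hypothetical records: `x` is a feature, `score` a model output.
RECORDS = [
    {"x": 0.20, "group": "A", "score": 0.30},
    {"x": 0.21, "group": "B", "score": 0.31},
    {"x": 0.80, "group": "A", "score": 0.90},
    {"x": 0.82, "group": "B", "score": 0.91},
    {"x": 0.83, "group": "B", "score": 0.35},  # similar individual, very different score
]

def statistical_parity_gap(records, threshold=0.5):
    """Statistical (group) fairness: difference in positive-decision
    rates between the protected groups."""
    rates = {}
    for g in ("A", "B"):
        grp = [r for r in records if r["group"] == g]
        rates[g] = sum(r["score"] >= threshold for r in grp) / len(grp)
    return abs(rates["A"] - rates["B"])

def individual_fairness_violations(records, lipschitz=2.0):
    """Individual fairness: similar individuals should be treated
    similarly, i.e. |f(u) - f(v)| <= L * d(u, v) for every pair."""
    violations = []
    for i, u in enumerate(records):
        for v in records[i + 1:]:
            if abs(u["score"] - v["score"]) > lipschitz * abs(u["x"] - v["x"]):
                violations.append((u["x"], v["x"]))
    return violations
```

The last record makes the difference visible: it shifts group B's positive-decision rate (a group-level disparity) and, because its score diverges sharply from near-identical neighbors, it also violates the pairwise individual-fairness condition.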

Xueping Liang, Juan Zhao, Yan Chen, Eranga Bandara, Sachin Shetty

J Med Internet Res 2023;25:e46547

Sharing Data With Shared Benefits: Artificial Intelligence Perspective

From an economic perspective, fairness could be defined as receiving a return commensurate with the investments made. When it comes to AI, then, it would be desirable for organizations to obtain models that perform in proportion to the costs they incur. Generally, the process of data collection, preparation, and analysis is not a trivial one [20,21].

Mohammad Tajabadi, Linus Grabenhenrich, Adèle Ribeiro, Michael Leyer, Dominik Heider

J Med Internet Res 2023;25:e47540

Evaluation and Mitigation of Racial Bias in Clinical Machine Learning Models: Scoping Review

Fairness in ML is achieved when algorithmic decision-making does not favor an individual or group based on protected attributes. Research efforts have emphasized group fairness over individual fairness, given the need for algorithms that consider existing differences between populations—whether intrinsic or extrinsic—while preventing discrimination between groups [13,21]. Crucially, improving model fairness does not necessarily require compromising accuracy overall [22].

Jonathan Huang, Galal Galal, Mozziyar Etemadi, Mahesh Vaidyanathan

JMIR Med Inform 2022;10(5):e36388