Search Results (1 to 10 of 17 Results)

Facilitating Trust Calibration in Artificial Intelligence–Driven Diagnostic Decision Support Systems for Determining Physicians’ Diagnostic Accuracy: Quasi-Experimental Study

Fourth, two researchers (T Sakamoto and YH) independently determined whether the final diagnosis was included within the AI-generated list of differential diagnoses; any inconsistencies were resolved by discussion. The accuracy of the AI differential diagnosis list was 172/381 (45.1%). Fifth, two researchers (T Sakamoto and YH) independently classified the commonality of the final diagnosis (common or uncommon disease) and the typicality of the clinical presentation (typical or atypical presentation).
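The inclusion check and accuracy figure above can be sketched as follows (a minimal illustration; the function name and inputs are hypothetical, not from the study):

```python
def list_inclusion_accuracy(cases):
    """Count cases whose final diagnosis appears in the AI differential list.

    cases: list of (final_diagnosis, ai_ddx_list) pairs.
    """
    hits = sum(final in ddx for final, ddx in cases)
    return hits, len(cases)

# Reproducing the reported figure: 172 of 381 cases were hits.
print(f"172/381 ({172 / 381:.1%})")  # prints "172/381 (45.1%)"
```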

Tetsu Sakamoto, Yukinori Harada, Taro Shimizu

JMIR Form Res 2024;8:e58666

Evaluating ChatGPT-4’s Accuracy in Identifying Final Diagnoses Within Differential Diagnoses Compared With Those of Physicians: Experimental Study for Diagnostic Cases

We used our data set from a previous study (TH, YH, KM, T Sakamoto, KT, T Shimizu. Diagnostic performance of generative artificial intelligences for a series of complex case reports. unpublished data, November 2023). From the PubMed search, we identified a total of 557 case reports. We excluded the nondiagnosed cases (130 cases) and the pediatric cases, aged younger than 10 years (35 cases). The exclusion criteria were based on previous research on CDSSs [31].

Takanobu Hirosawa, Yukinori Harada, Kazuya Mizuta, Tetsu Sakamoto, Kazuki Tokumasu, Taro Shimizu

JMIR Form Res 2024;8:e59267

Longitudinal Changes in Diagnostic Accuracy of a Differential Diagnosis List Developed by an AI-Based Symptom Checker: Retrospective Observational Study

Final diagnoses were further categorized into common or uncommon diagnoses based on whether the incidence was more than 1 in 2000 (common disease) or not (uncommon disease) [33]; unclear cases were judged by 2 researchers (YH and T Sakamoto) through discussion. According to the final diagnosis and medical history created by the AI-based symptom checker, 2 researchers (YH and T Sakamoto) independently judged all cases as typical or atypical, and conflicts were resolved by discussion.
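The 1-in-2000 incidence threshold described above reduces to a simple comparison (a hypothetical sketch; the function name and example incidences are illustrative, not from the study):

```python
def classify_commonality(incidence):
    """Label a final diagnosis by incidence: more than 1 in 2000 is 'common'."""
    return "common" if incidence > 1 / 2000 else "uncommon"

print(classify_commonality(1 / 500))    # prints "common"
print(classify_commonality(1 / 10000))  # prints "uncommon"
```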

Yukinori Harada, Tetsu Sakamoto, Shu Sugimoto, Taro Shimizu

JMIR Form Res 2024;8:e53985

Effects of Combinational Use of Additional Differential Diagnostic Generators on the Diagnostic Accuracy of the Differential Diagnosis List Developed by an Artificial Intelligence–Driven Automated History–Taking System: Pilot Cross-Sectional Study

Second, the other research physicians (T Sakamoto and ST) independently developed 2 additional DDx lists (the second and third lists) per case using 2 DDx generators (Isabel Pro and the AI diagnostic support system for general internal medicine) based on the patient’s age, sex, and medical history generated by the AI-driven automated medical history–taking system without reading the index lists generated by the AI-driven automated medical history–taking system.

Yukinori Harada, Shusaku Tomiyama, Tetsu Sakamoto, Shu Sugimoto, Ren Kawamura, Masashi Yokose, Arisa Hayashi, Taro Shimizu

JMIR Form Res 2023;7:e49034