
Search Results (1 to 10 of 29 Results)


Facilitating Trust Calibration in Artificial Intelligence–Driven Diagnostic Decision Support Systems for Determining Physicians’ Diagnostic Accuracy: Quasi-Experimental Study

Fourth, two researchers (T Sakamoto and YH) independently determined whether the final diagnosis was included in the AI-generated list of differential diagnoses; any inconsistencies were resolved by discussion. The accuracy of the AI differential diagnosis list was 45.1% (172/381). Fifth, the same two researchers independently classified the commonality of the final diagnosis (common or uncommon disease) and the typicality of the clinical presentation (typical or atypical presentation).

Tetsu Sakamoto, Yukinori Harada, Taro Shimizu

JMIR Form Res 2024;8:e58666

Evaluating ChatGPT-4’s Accuracy in Identifying Final Diagnoses Within Differential Diagnoses Compared With Those of Physicians: Experimental Study for Diagnostic Cases

We used our data set from a previous study (TH, YH, KM, T Sakamoto, KT, T Shimizu. Diagnostic performance of generative artificial intelligences for a series of complex case reports. unpublished data, November 2023). From the PubMed search, we identified a total of 557 case reports. We excluded the nondiagnosed cases (130 cases) and the pediatric cases, aged younger than 10 years (35 cases). The exclusion criteria were based on previous research on CDSS [31].

Takanobu Hirosawa, Yukinori Harada, Kazuya Mizuta, Tetsu Sakamoto, Kazuki Tokumasu, Taro Shimizu

JMIR Form Res 2024;8:e59267