Search Results (1–3 of 3)



Comparing the Perspectives of Generative AI, Mental Health Experts, and the General Public on Schizophrenia Recovery: Case Vignette Study


In a recent study, Elyoseph and Levkovich [25] found that ChatGPT-3.5 often underestimated the risk of suicidal ideation, calling into question its reliability in such critical assessments. Another study, by Imran et al [49], posited that while ChatGPT may significantly influence pediatric and adolescent mental health care as a supplementary tool, it would be inadvisable and impracticable to contend that it could entirely supplant human clinical judgment.

Zohar Elyoseph, Inbar Levkovich

JMIR Ment Health 2024;11:e53043

Suicide Risk Assessments Through the Eyes of ChatGPT-3.5 Versus ChatGPT-4: Vignette Study


…for all four conditions, that is, (1) low burdensomeness and low belongingness, (2) low burdensomeness and high belongingness, (3) high burdensomeness and low belongingness, and (4) high burdensomeness and high belongingness, for the four dependent variables: (1) psychache, (2) suicidal ideation, (3) risk of suicide attempt, and (4) resilience, compared with the norms for health professionals reported by Levi-Belz and Gamliel [38] and the results of ChatGPT-3.5 (March 14 version) reported by Elyoseph and Levkovich…

Inbar Levkovich, Zohar Elyoseph

JMIR Ment Health 2023;10:e51232