Search Results (1 to 10 of 10 Results)
Results by journal:
- 6 JMIR Mental Health
- 1 JMIR Diabetes
- 1 JMIR Formative Research
- 1 JMIR Neurotechnology
- 1 JMIR Research Protocols

An Ethical Perspective on the Democratization of Mental Health With Generative AI
JMIR Ment Health 2024;11:e58011

In a recent study, Elyoseph and Levkovich [25] found that ChatGPT-3.5 often underestimated the risk of suicidal ideation, calling into question its reliability in such critical assessments. Another study, by Imran et al [49], argued that while ChatGPT may become a valuable supplementary tool in pediatric and adolescent mental health care, it cannot be expected to replace human clinical judgment.
JMIR Ment Health 2024;11:e53043

Against this backdrop, Elyoseph et al [19] conducted a pivotal study gauging the emotion recognition capabilities of LLMs, focusing on ChatGPT-3.5 (OpenAI) [20]. On the LEAS [4], ChatGPT-3.5 demonstrated an exceptional ability to differentiate and describe emotions from textual cues, outperforming human sample norms (scoring more than 4 SDs above the human sample mean).
JMIR Ment Health 2024;11:e54369
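For context, the "more than 4 SDs above the human sample mean" comparison is a standardized (z) score: the model's LEAS total is expressed relative to the human norm mean and standard deviation. A minimal sketch, using hypothetical placeholder values rather than the figures reported by Elyoseph et al [19]:

```python
# Standardized (z) score of a model's LEAS total against human norms.
# All numbers below are hypothetical placeholders, not the values
# reported by Elyoseph et al [19].
human_norm_mean = 70.0   # hypothetical human LEAS norm mean
human_norm_sd = 5.0      # hypothetical human LEAS norm SD
model_score = 92.0       # hypothetical ChatGPT-3.5 LEAS total

z = (model_score - human_norm_mean) / human_norm_sd
print(f"Model scored {z:.1f} SDs above the human norm")  # here: 4.4 SDs
```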

Suicide Risk Assessments Through the Eyes of ChatGPT-3.5 Versus ChatGPT-4: Vignette Study
Mean (SD) scores for all four conditions, that is, (1) low burdensomeness and low belongingness, (2) low burdensomeness and high belongingness, (3) high burdensomeness and low belongingness, and (4) high burdensomeness and high belongingness, for the four dependent variables: (1) psychache, (2) suicidal ideation, (3) risk of suicide attempt, and (4) resilience, compared to the norms of the health professionals reported by Levi-Belz and Gamliel [38] and the results of ChatGPT-3.5 (March 14 version) reported by Elyoseph
JMIR Ment Health 2023;10:e51232
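As a rough illustration only, the sketch below tabulates mean (SD) scores per cell of such a 2×2 (burdensomeness × belongingness) design; the data frame, column names, and rating values are hypothetical, not the study's data.

```python
# Hypothetical mean (SD) summary for a 2x2 vignette design; the ratings
# and column names are illustrative, not data from the cited study.
import pandas as pd

ratings = pd.DataFrame({
    "burdensomeness":    ["low", "low", "high", "high"] * 2,
    "belongingness":     ["low", "high", "low", "high"] * 2,
    "suicidal_ideation": [2.1, 1.8, 4.5, 3.9, 2.3, 1.7, 4.8, 4.1],  # placeholder ratings
})

# Mean and SD per condition, the same layout used when comparing against
# health professional norms or model-generated assessments.
summary = (
    ratings
    .groupby(["burdensomeness", "belongingness"])["suicidal_ideation"]
    .agg(["mean", "std"])
)
print(summary)
```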