TY - JOUR
AU - Sobowale, Kunmi
AU - Humphrey, Daniel Kevin
PY - 2025
DA - 2025/7/2
TI - Evaluating the Quality of Psychotherapy Conversational Agents: Framework Development and Cross-Sectional Study
JO - JMIR Form Res
SP - e65605
VL - 9
KW - large language models
KW - generative AI
KW - artificial intelligence
KW - psychotherapy chatbots
KW - conversational agent
KW - ChatGPT
KW - evaluation framework
KW - digital health
KW - chatbots
KW - evaluation study
KW - psychotherapy
KW - AI
KW - clinicians
KW - researchers
KW - risk evaluation
KW - therapeutic alliance
KW - accessibility
KW - therapy
KW - treatment
AB - Background: Despite potential risks, artificial intelligence–based chatbots that simulate psychotherapy are becoming more widely available and frequently used by the general public. A comprehensive way of evaluating the quality of these chatbots is needed. Objective: To address this need, we developed the CAPE (Conversational Agent for Psychotherapy Evaluation) framework to aid clinicians, researchers, and lay users in assessing psychotherapy chatbot quality. We use the framework to evaluate and compare the quality of popular artificial intelligence psychotherapy chatbots on the OpenAI GPT store. Methods: We identified 4 popular chatbots on OpenAI’s GPT store. Two reviewers independently applied the CAPE framework to these chatbots, using 2 fictional personas to simulate interactions. The modular framework has 8 sections, each yielding an independent quality subscore between 0 and 1. We used t tests and nonparametric Wilcoxon signed rank tests to examine pairwise differences in quality subscores between chatbots. Results: Chatbots consistently scored highly on the sections of background information (subscores=0.83-1), conversational capabilities (subscores=0.83-1), therapeutic alliance and boundaries (subscores=0.75-1), and accessibility (subscores=0.8-0.95). Scores were low for the therapeutic orientation (subscores=0) and monitoring and risk evaluation (subscores=0.67-0.75) sections. Information on the training data and knowledge base sections was not transparent (subscores=0). Except for the privacy and harm section (mean 0.017, SD 0.00; t3=∞; P<.001), there were no differences in subscores between the chatbots. Conclusions: The CAPE framework offers a robust and reliable method for assessing the quality of psychotherapy chatbots, enabling users to make informed choices based on their specific needs and preferences. Our evaluation revealed that while the popular chatbots on OpenAI’s GPT store were effective at developing rapport and were easily accessible, they failed to adequately address essential safety and privacy functions.
SN - 2561-326X
UR - https://formative.jmir.org/2025/1/e65605
UR - https://doi.org/10.2196/65605
DO - 10.2196/65605
ID - info:doi/10.2196/65605
ER -