TY  - JOUR
AU  - Hadar-Shoval, Dorit
AU  - Lvovsky, Maya
AU  - Asraf, Kfir
AU  - Shimoni, Yoav
AU  - Elyoseph, Zohar
PY  - 2025
DA  - 2025/2/24
TI  - The Feasibility of Large Language Models in Verbal Comprehension Assessment: Mixed Methods Feasibility Study
JO  - JMIR Form Res
SP  - e68347
VL  - 9
KW  - large language models
KW  - verbal comprehension assessment
KW  - artificial intelligence
KW  - AI in psychodiagnostics
KW  - personalized intelligence tests
KW  - verbal comprehension index
KW  - Wechsler Adult Intelligence Scale
KW  - WAIS-III
KW  - psychological test validity
KW  - ethics in computerized cognitive assessment
AB  - Background: Cognitive assessment is an important component of applied psychology, but limited access and high costs make these evaluations challenging. Objective: This study aimed to examine the feasibility of using large language models (LLMs) to create personalized artificial intelligence–based verbal comprehension tests (AI-BVCTs) for assessing verbal intelligence, in contrast with traditional assessment methods based on standardized norms. Methods: We used a within-participants design, comparing scores obtained from AI-BVCTs with those from the Wechsler Adult Intelligence Scale (WAIS-III) verbal comprehension index (VCI). In total, 8 Hebrew-speaking participants completed both the VCI and AI-BVCT, the latter being generated using the LLM Claude. Results: The concordance correlation coefficient (CCC) demonstrated strong agreement between AI-BVCT and VCI scores (Claude: CCC=.75, 90% CI 0.266-0.933; GPT-4: CCC=.73, 90% CI 0.170-0.935). Pearson correlations further supported these findings, showing strong associations between VCI and AI-BVCT scores (Claude: r=.84, P<.001; GPT-4: r=.77, P=.02). No statistically significant differences were found between AI-BVCT and VCI scores (P>.05). Conclusions: These findings support the potential of LLMs to assess verbal intelligence. The study attests to the promise of AI-based cognitive tests in increasing the accessibility and affordability of assessment processes, enabling personalized testing. The research also raises ethical concerns regarding privacy and overreliance on AI in clinical work. Further research with larger and more diverse samples is needed to establish the validity and reliability of this approach and develop more accurate scoring procedures.
SN  - 2561-326X
UR  - https://formative.jmir.org/2025/1/e68347
UR  - https://doi.org/10.2196/68347
DO  - 10.2196/68347
ID  - info:doi/10.2196/68347
ER  - 
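
Note: the abstract reports agreement between AI-BVCT and VCI scores using Lin's concordance correlation coefficient (CCC) alongside Pearson's r. As a minimal sketch of how such a CCC might be computed, here is a short Python example; the paired scores below are hypothetical illustrations, not the study's data, and the function name is our own.

import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient for paired scores:
    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))  # population covariance
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Hypothetical paired scores (NOT the study's data): WAIS-III VCI vs. AI-BVCT
vci = [98, 105, 112, 120, 95, 130, 101, 117]
ai_bvct = [100, 103, 110, 118, 99, 126, 104, 121]
print(round(concordance_ccc(vci, ai_bvct), 2))

Unlike Pearson's r, which measures linear association alone, the CCC also penalizes systematic shifts in location and scale, which is why it is the more appropriate agreement statistic when comparing two tests intended to produce the same score.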