@Article{info:doi/10.2196/68347,
  author   = "Hadar-Shoval, Dorit and Lvovsky, Maya and Asraf, Kfir and Shimoni, Yoav and Elyoseph, Zohar",
  title    = "The Feasibility of Large Language Models in Verbal Comprehension Assessment: Mixed Methods Feasibility Study",
  journal  = "JMIR Form Res",
  year     = "2025",
  month    = "Feb",
  day      = "24",
  volume   = "9",
  pages    = "e68347",
  keywords = "large language models; verbal comprehension assessment; artificial intelligence; AI in psychodiagnostics; personalized intelligence tests; verbal comprehension index; Wechsler Adult Intelligence Scale; WAIS-III; psychological test validity; ethics in computerized cognitive assessment",
  abstract = "Background: Cognitive assessment is an important component of applied psychology, but limited access and high costs make these evaluations challenging. Objective: This study aimed to examine the feasibility of using large language models (LLMs) to create personalized artificial intelligence--based verbal comprehension tests (AI-BVCTs) for assessing verbal intelligence, in contrast with traditional assessment methods based on standardized norms. Methods: We used a within-participants design, comparing scores obtained from AI-BVCTs with those from the Wechsler Adult Intelligence Scale (WAIS-III) verbal comprehension index (VCI). In total, 8 Hebrew-speaking participants completed both the VCI and AI-BVCT, the latter being generated using the LLM Claude. Results: The concordance correlation coefficient (CCC) demonstrated strong agreement between AI-BVCT and VCI scores (Claude: CCC=.75, 90{\%} CI 0.266-0.933; GPT-4: CCC=.73, 90{\%} CI 0.170-0.935). Pearson correlations further supported these findings, showing strong associations between VCI and AI-BVCT scores (Claude: r=.84, P<.001; GPT-4: r=.77, P=.02). No statistically significant differences were found between AI-BVCT and VCI scores (P>.05). Conclusions: These findings support the potential of LLMs to assess verbal intelligence. The study attests to the promise of AI-based cognitive tests in increasing the accessibility and affordability of assessment processes, enabling personalized testing. The research also raises ethical concerns regarding privacy and overreliance on AI in clinical work. Further research with larger and more diverse samples is needed to establish the validity and reliability of this approach and develop more accurate scoring procedures.",
  issn     = "2561-326X",
  doi      = "10.2196/68347",
  url      = "https://formative.jmir.org/2025/1/e68347"
}