Background: There is an extensive library of language tests, each with excellent psychometric properties; however, many of the available tests take considerable administration time, potentially placing psychological strain on patients. The Short and Tailored Evaluation of Language Ability (STELA) is a simplified, tablet-based language ability assessment system developed to address this issue, with a reduced number of items and an automated testing process.
Objective: The aim of this paper is to assess the administration time, internal consistency, and validity of the STELA.
Methods: The STELA consists of a tablet app, a microphone, and an input keypad for clinician’s use. The system is designed to assess language ability with 52 questions grouped into 2 comprehension modalities (auditory comprehension and reading comprehension) and 3 expression modalities (naming and sentence formation, repetition, and reading aloud). Performance in each modality was scored as the correct answer rate (0-100), and overall performance expressed as the sum of modality scores (out of 500 points).
Results: The time taken to complete the STELA was significantly less than that for the WAB (mean 16.2, SD 9.4 vs mean 149.3, SD 64.1 minutes; P<.001). The STELA’s total score was strongly correlated with the WAB Aphasia Quotient (r=0.93, P<.001), supporting its concurrent validity against the WAB, a gold-standard aphasia assessment. Strong correlations were also observed at the subscale level: STELA auditory comprehension versus WAB auditory comprehension (r=0.75, P<.001), STELA repetition versus WAB repetition (r=0.96, P<.001), STELA naming and sentence formation versus WAB naming and word finding (r=0.81, P<.001), and the sum of STELA reading comprehension and reading aloud versus WAB reading (r=0.82, P<.001). Cronbach α for each modality was .862 for auditory comprehension, .872 for reading comprehension, .902 for naming and sentence formation, .787 for repetition, and .892 for reading aloud; the global Cronbach α was .961. The average item-total correlation with each subscale was 0.61 (SD 0.17).
Conclusions: Our study confirmed significant time reduction in the assessment of language ability and provided evidence for good internal consistency and validity of the STELA tablet-based aphasia assessment system.
While people who sustain brain injury from stroke or cranial trauma have higher survival rates today, the residual disabilities caused by brain damage can significantly disrupt patients’ daily lives. Language ability is one of the important cognitive functions for social activities that can be impaired by brain injury. Even if individuals regain the physical ability to perform regular activities of daily living, impaired language function often makes it difficult for patients to fully reintegrate into their communities. Language is fundamental to higher brain function, enabling communication and underpinning logical thinking, social relationships, self-expression, and even dignity. Early detection and prompt, sufficient rehabilitative interventions tailored to deficit severity may help such patients regain their independence and reintegrate into society. For this to occur, the timely and accurate assessment of language ability is crucial.
Many standardized tests, such as the Western Aphasia Battery (WAB) and the Boston Diagnostic Aphasia Examination, are widely used to evaluate language dysfunction in clinical settings. However, these tests take considerable time to administer. Lengthy assessments can pose difficulties for patients with stroke or cranial trauma and language impairments, who often struggle with sustained concentration due to effects such as inattention and limited endurance [, ]. Such assessments can also cause considerable stress [ ], especially when impairments are severe and patients repeatedly encounter questions they cannot answer. Research has shown that patients with aphasia are especially prone to stress associated with everyday linguistic deficits [ , ]. An increased number of “unanswerable” questions exacerbates test anxiety [ ]; the resulting stress and loss of self-confidence can impede adequate progress in subsequent rehabilitation activities [ ].
Several criteria must be met to evaluate language function in a short period of time and in a treatment-oriented manner. The test should be simplified for quicker completion. Conventional language function assessments consist of a large number of questions; for example, the WAB consists of 225 items. Conversely, screening tests that can be completed in a shorter time have also been developed. They usually consist of a small number of items, and their primary focus is typically to ascertain the presence of aphasia and estimate language ability along a single dimension.
Although these screening tools are useful for briefly checking deficits in language ability, they do not provide sufficient information for planning treatment; to apply test findings in therapy, “language” must be deconstructed and analyzed by component. For example, aphasia is traditionally assessed across several modalities, such as auditory and reading comprehension, word and sentence production, and repetition. Clinicians construct rehabilitation plans for individual segments based on their understanding of patients’ performance in each segment. Thus, each component of language ability must be evaluated separately to detect the impaired modality and deliver modality-specific therapy. However, maximizing both the simplicity and the accuracy of these assessments is difficult. Computer-based assessment methodologies may help realize a simple yet detailed language ability test, shortening the assessment process in several ways. For example, devices such as computers and tablets allow tasks to be presented on screen in an easily understandable way [ ]; they simplify test administration by partially automating the way tasks are explained, presented, and answered [ ]; and scoring can be automated to reduce administration time and scoring errors [ ]. Such simplification may reduce the stress patients associate with lengthy language assessments [ ]. In addition, the ability of these devices to record supplemental, objective measures (eg, reaction time, which carries critical information on specific aspects of cognitive ability [ ]) offers additional information for evaluating patients’ language ability and is advantageous in the evaluation of brain functions involving complex processes, such as cognition and speech. In fact, various computer assessments have already been used in the field of cognitive testing [ - ].
However, there are still few studies on the assessment of language ability using digital devices. A well-validated computer-based test of language ability would be a useful tool in clinical practice.
The Short and Tailored Evaluation of Language Ability (STELA) is a newly developed computer-based Japanese language ability assessment system for patients with aphasia that is compact in its number of tasks while retaining the capacity to assess language ability across multiple components. The purpose of this study was to evaluate the clinical feasibility of the STELA by investigating its time-reduction effect relative to the WAB, the gold-standard paper-and-pencil test, as well as its internal consistency and validity in assessing the language ability of patients with aphasia.
Study Design and Settings
This was a prospective methodological study with a repeated-measures design, in which the reliability and validity of the STELA were evaluated. The research was conducted in Fujita Health University Hospital and Nanakuri Memorial Hospital.
This study complied with the principles of the Declaration of Helsinki and was approved by the Medical Ethics Committee of Fujita Health University. All the participants provided written informed consent prior to participation (HM20-060). For privacy protection, the data were deidentified in the analysis.
Study participants included patients from the Fujita Health University Hospital’s Department of Rehabilitation and Nanakuri Memorial Hospital, who were diagnosed with impaired language functioning between July 1, 2020, and December 13, 2020. Participants were diagnosed with aphasia on the basis of the WAB results. Inclusion criteria included being 20 years or older, in healthy condition, and confirmed or suspected to have impaired language function when the informed consent was obtained. Patients were excluded if they had a severe cognitive disorder or disturbed consciousness that compromised their ability to follow the instructions during testing.
STELA Assessment System
The STELA is a functional language assessment system developed to rapidly, yet comprehensively, evaluate language ability (Sysnet Co. Ltd.). Consisting of a tablet app, a microphone, and an input keypad (for the clinician’s use), the system is designed to assess language ability in 2 comprehension modalities (auditory comprehension and reading comprehension) and 3 expression modalities (naming and sentence formation, repetition, and reading aloud). The test comprises 52 questions grouped by difficulty level (see the supplemental methods for system development details). For word-related items, difficulty ratings were based on the word’s frequency in everyday use; for sentence- and text-related items, difficulty depended on the number of clauses. During testing, patients were presented with one question at a time on the tablet and responded by pressing the answer on the tablet’s touch screen. When patients had to respond verbally, their performance was recorded by a speech therapist using the aforementioned keypad. The keypad allows the user to input whether the response is correct or incorrect, whether a hint was provided, and the type of error if incorrect (eg, paraphasia and perseveration); it can also record the patient’s voice as necessary. Responses for each modality were scored in real time according to a predetermined method, and the results were displayed at the test’s conclusion. Performance in each modality was scored on the basis of the correct answer rate, with 100 points representing a perfect score; overall performance is expressed as the sum of the modality scores (out of 500 points). Reaction time and response patterns were recorded during the session and made available as supplemental data for users’ reference.
|Variables||Items, n||Raw score|
|Naming and sentence formation||10||28|
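The scoring rule described above (percent correct per modality on a 0-100 scale, summed across the five modalities to a 500-point total) can be sketched as follows. The modality names and responses below are illustrative only, not the actual STELA item set:

```python
def modality_score(responses):
    """Score one modality as the correct answer rate on a 0-100 scale."""
    return 100.0 * sum(responses) / len(responses)

def total_score(responses_by_modality):
    """Overall performance: the sum of the modality scores (max 500)."""
    return sum(modality_score(r) for r in responses_by_modality.values())

# Illustrative responses (True = correct); not real STELA data.
session = {
    "auditory_comprehension": [True] * 8 + [False] * 2,    # 80.0
    "reading_comprehension": [True] * 9 + [False],         # 90.0
    "naming_and_sentence_formation": [True] * 10,          # 100.0
    "repetition": [True] * 7 + [False] * 3,                # 70.0
    "reading_aloud": [True] * 6 + [False] * 4,             # 60.0
}
# This illustrative session totals 400.0 of 500 points.
```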
Assessment of Aphasia
The patients were assessed with both the STELA and the Japanese version of the WAB. The WAB is a standardized test battery for evaluating aphasia, widely used as the gold standard in language rehabilitation [, ]. The reliability and validity of the WAB have been demonstrated previously [ ]. The WAB measures language ability across the following eight modalities: (1) spontaneous speech; (2) auditory comprehension; (3) repetition; (4) naming and word finding; (5) reading; (6) writing; (7) apraxia; and (8) constructional, visuospatial, and calculation tasks. Performance is scored separately for each modality and globally as the WAB–Aphasia Quotient (WAB-AQ), a weighted composite of the subscale scores. The STELA and the WAB were conducted within 14 days, and the order of administration was randomized for each patient. Both tests were administered by the speech therapist in charge of each patient.
The STELA total (global) score and modality (subscale) scores as well as WAB-AQ and the subscales of WAB were used for analysis. The time required from start to finish (completion time) for both scales was also measured.
Total and group-wise comparisons of the completion time for all the tasks of the WAB and the STELA were conducted using the Wilcoxon signed rank test.
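The paired comparison described above can be illustrated with a minimal pure-Python signed-rank statistic (zero differences dropped, tied absolute differences given averaged ranks, and the smaller of the positive- and negative-rank sums returned, as in common implementations). This is a didactic sketch, not the statistical software used in the study:

```python
def signed_rank(a, b):
    """Wilcoxon signed-rank statistic for paired samples a and b."""
    # Paired differences; zero differences are dropped by convention.
    d = [y - x for x, y in zip(a, b) if y != x]
    abs_sorted = sorted(abs(v) for v in d)

    def rank(v):
        # Average the 1-based positions of tied absolute differences.
        positions = [i + 1 for i, w in enumerate(abs_sorted) if w == abs(v)]
        return sum(positions) / len(positions)

    w_plus = sum(rank(v) for v in d if v > 0)
    w_minus = sum(rank(v) for v in d if v < 0)
    return min(w_plus, w_minus)
```

In practice, the P value is then obtained from the exact null distribution of this statistic or from a normal approximation, as statistical packages do.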
The internal consistency and validity of the STELA were evaluated. Internal consistency was evaluated using Cronbach α for each modality and item-total correlations. Cronbach α≥.70 is considered adequate for group comparisons, while α≥.90 is considered optimal for clinical applications [ ]. Item-total correlations, that is, the correlations of individual items with the total score of the scale, were evaluated using the Spearman correlation coefficient (ρ); items with item-total correlations less than 0.30 should be regarded as inconsistent with the other items [ ]. The STELA’s concurrent validity was investigated through correlation analysis with the Japanese version of the WAB. Spearman correlation coefficients were used to quantify the agreement between the two tests at the global level (STELA total score versus WAB-AQ) and for each of the STELA’s modalities. The following pairs of modalities were tested (STELA versus WAB): auditory comprehension versus auditory comprehension, repetition versus repetition, naming and sentence formation versus naming and word finding, and reading comprehension + reading aloud versus reading. Correlation coefficients were interpreted as follows: 0.00 to 0.20, slight, almost negligible relationship; 0.20 to 0.40, low correlation; 0.40 to 0.70, moderate correlation; 0.70 to 0.90, high correlation, marked relationship; and 0.90 to 1.00, very high correlation, very dependable relationship [ ].
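The two internal-consistency quantities described above, Cronbach α and Spearman item-total correlations, can both be computed from a subjects-by-items score matrix. The following pure-Python sketch uses the standard formulas; it is illustrative and not the study’s actual analysis code:

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a subjects x items matrix of item scores."""
    k = len(scores[0])                       # number of items

    def var(xs):                             # sample variance (ddof=1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

def _ranks(v):
    """1-based ranks with ties averaged, as used by Spearman's rho."""
    s = sorted(v)
    return [s.index(a) + (s.count(a) + 1) / 2 for a in v]

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

An item-total correlation is then `spearman(item_scores, total_scores)` for each item, with items below 0.30 flagged as inconsistent with the rest of the scale.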
Sample Size Calculation
The sample size for estimating Cronbach α was calculated using the Bonett method, yielding n=16 given α=.05, β=.20, and a planning value of 0.70. For criterion validity, the sample size required for correlation analysis was calculated using G*Power software [ ], yielding n=21 given effect size=0.50, α=.05, and β=.20 (2-tailed test). The target sample size was set to 30 to allow for possible data loss.
In total, 31 patients participated (n=15, 48% male; n=16, 52% female; mean age 59, SD 13.4 years). Their primary diseases were cerebral infarction (n=12, 39%), cerebral hemorrhage (n=13, 42%), subarachnoid hemorrhage (n=2, 6%), brain tumor (n=4, 13%), and cerebral contusion (n=1, 3%). On average, the STELA was administered 91.5 (SD 128.3) days after the event and 4.5 (SD 3.4) days apart from the WAB. The mean global scores were 362.5 (SD 12.2; out of 500) for the STELA total score versus 66.9 (SD 28.5; out of 100) for the WAB-AQ. The STELA total scores were not normally distributed (P=.003), whereas the WAB-AQ scores were (P=.08).
The time taken to complete the STELA was successfully measured in 27 patients (4 missing values). The time taken to complete the STELA was significantly less than that for the WAB (mean 16.2, SD 9.4 vs mean 149.3, SD 64.1 minutes; degrees of freedom=26, signed rank statistic [S]=189.0, P<.001). This tendency was significant in every WAB category of aphasia severity (WAB-AQ>80: mean 13.5, SD 4.2 vs mean 119.5, SD 66.7 minutes; degrees of freedom=10, S=33.0, P=.001; 80≥WAB-AQ>40: mean 21.0, SD 10.6 vs mean 182.7, SD 48.2 minutes; degrees of freedom=9, S=27.5, P=.002; WAB-AQ≤40: mean 21.5, SD 5.7 vs mean 131.7, SD 64.2 minutes; degrees of freedom=5, S=10.5, P=.03).
The results of the internal consistency evaluation are shown in the table below. Cronbach α coefficients for each modality were .862 for auditory comprehension, .872 for reading comprehension, .902 for naming and sentence formation, .787 for repetition, and .892 for reading aloud; the global Cronbach α was .961. The average item-total correlation (Spearman ρ) within each subscale was 0.61 (SD 0.17). Item-total correlations were 0.30 or more for all items and were statistically significant for all but the following four items: 2 items from word comprehension in auditory comprehension, 1 item from paragraph comprehension in auditory comprehension, and 1 item from paragraph comprehension in reading comprehension. For each item, the subscale’s Cronbach α was recomputed with that item removed; none of the resulting alpha-without-item values exceeded the original α by .10 or more.
The STELA’s total score strongly correlated with the WAB-AQ (ρ=0.93: very high correlation; P<.001). Strong correlations were also observed at the subscale level for auditory comprehension (ρ=0.75: high correlation; P<.001), repetition (ρ=0.96: very high correlation; P<.001), naming and sentence formation (vs WAB naming and word finding: ρ=0.81: high correlation; P<.001), and the sum of reading comprehension and reading aloud (vs WAB reading: ρ=0.82: high correlation; P<.001).
|Item-total correlation||Alpha without an item||Cronbach alphaa|
|Naming and sentence formation||.902|
aCronbach alpha total=.961.
|STELA vs WAB||Spearman correlation coefficient|
|Auditory comprehension vs auditory comprehension||0.75|
|Repetition vs repetition||0.96|
|Naming and sentence formation vs naming and word finding||0.81|
|Reading comprehension and reading aloud vs reading||0.82|
|Total vs AQa||0.93|
aAQ: Aphasia Quotient.
This study assessed the clinical feasibility and validity of the STELA, a tablet-based system for evaluating aphasia, by examining its administration time, internal consistency, and concurrent validity. The time taken to complete the STELA was significantly less than that for the WAB. The STELA’s total score was strongly correlated with the WAB-AQ, supporting the STELA’s concurrent validity against the WAB as a gold-standard aphasia assessment. The Cronbach α coefficients and item-total correlation values supported the internal consistency of the STELA.
The STELA took an average of 16 minutes to administer, approximately one-tenth the duration of the WAB, demonstrating a reduced test-taking burden. Long testing sessions are typical of cognitive assessments, including aphasia assessments, and cause patients to experience fatigue and stress. Reducing administration time can positively influence outcomes by reducing the time taken away from rehabilitation sessions and improving patients’ compliance with training exercises. Given its small question inventory and computerized format, the STELA can be administered in a short period of time; this brevity is expected to reduce stress and counteract demotivation in rehabilitation patients.
The STELA’s internal consistency was supported overall and for all modalities by very high Cronbach α coefficients, measured at .96 for the whole scale and ranging from .79 to .90 for its subscales. Furthermore, all item-total correlations measured within each subscale were 0.30 or more and statistically significant, except for three items for which the correlations were only marginally nonsignificant (ie, a word comprehension item in auditory comprehension and the paragraph comprehension items in auditory comprehension and reading comprehension). Low variation in the response data may be responsible for these exceptions, as the first item of the word comprehension set is the easiest question within its modality, while the third item of the news text comprehension set is the hardest. Nevertheless, no alpha-without-item value in any modality (including these items) exceeded the corresponding Cronbach α by more than .10, reflecting good homogeneity of each subscale’s item set.
The STELA’s total score was strongly correlated with the WAB-AQ, supporting its concurrent validity against a gold-standard aphasia assessment. The strong intertest correlations observed at the subscale level provide further support for the STELA’s validity in the corresponding modalities of language function.
Further integration of digital technology could allow the STELA to assess language ability even more rapidly while keeping its granularity. For example, employing computer adaptive testing methodology may further shorten the administration time and reduce the stress of patients undergoing assessment. Freeing severely impaired patients from the distress of being continually confronted with questions beyond their ability could be critical to supporting their motivation for, and adherence to, rehabilitation training. According to self-determination theory, feelings of incompetence or lack of control can cause amotivation, which potentially jeopardizes adherence to an activity [ , ]. Strategies to alleviate the distress caused by an inability to answer questions merit further investigation for language assessments; global assessments in which difficulty level is adjusted to impairment severity may help. Nonetheless, applying this approach in a population with highly variable symptoms, such as aphasia, could prove complicated. Hence, a more in-depth examination of techniques for simplifying tests through digital technology is required.
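As a purely illustrative sketch of the adaptive idea (not a feature of the current STELA), a minimal staircase loop might select the item whose difficulty is closest to the running ability estimate and nudge the estimate after each answer. The `bank`, `answer` callback, and step size below are all hypothetical; real computer adaptive testing uses IRT-based maximum-information selection:

```python
def run_adaptive(bank, answer, start=0.0, step=0.5, n_items=5):
    """Minimal adaptive loop: present the item whose difficulty is closest
    to the current ability estimate, then move the estimate up on a correct
    answer and down on an error (a simple staircase)."""
    ability, remaining, administered = start, dict(bank), []
    for _ in range(min(n_items, len(bank))):
        # Pick the remaining item best matched to the current estimate.
        item = min(remaining, key=lambda k: abs(remaining[k] - ability))
        correct = answer(item)
        ability += step if correct else -step
        administered.append(item)
        del remaining[item]
    return ability, administered

# Hypothetical three-item bank (difficulty on an arbitrary scale) and a
# simulated patient who answers every item correctly.
bank = {"easy": -1.0, "medium": 0.0, "hard": 1.0}
ability, order = run_adaptive(bank, lambda item: True, n_items=3)
```

Because every answer is correct, the estimate climbs and the loop administers progressively harder items; a real adaptive test would also stop early once the estimate stabilizes, which is where the time saving comes from.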
Since the STELA evaluates language ability using a tablet-based system, in severe cases patients’ performance could be affected by difficulty operating the tablet due to concurrent cognitive dysfunction. The system’s scope of application requires further investigation, along with usability concerns (eg, steps to take if patients have trouble using the tablet). Additionally, test-retest reliability was not investigated in this study, as our participants were primarily in the subacute phase after their cerebrovascular event, a phase in which aphasia symptoms can fluctuate considerably over short intervals. To evaluate the STELA’s test-retest reliability, a study in a patient population in the chronic phase of illness should be considered.
In this study, the clinical feasibility of the STELA tablet-based aphasia assessment system was investigated. The results showed a significantly shorter administration time for the STELA compared with the WAB, a gold-standard paper-and-pencil test, and the data also supported the STELA’s internal consistency and its concurrent validity with the WAB. These results support the potential usefulness of the STELA in daily rehabilitation practice.
The data collected and analyzed during the study are available from the corresponding author upon reasonable request.
Conflicts of Interest
The research team leased the STELA from Sysnet Co. Ltd, manufacturer of the STELA. The authors declare no other conflict of interest associated with this manuscript.
Supplemental methods (DOCX file, 20 KB).
- Robertson IH, Ridgeway V, Greenfield E, Parr A. Motor recovery after stroke depends on intact sustained attention: a 2-year follow-up study. Neuropsychology 1997 Apr;11(2):290-295. [CrossRef] [Medline]
- Mathias JL, Wheaton P. Changes in attention and information-processing speed following severe traumatic brain injury: a meta-analytic review. Neuropsychology 2007 Mar;21(2):212-223. [CrossRef] [Medline]
- Teng EL, Manly JJ. Neuropsychological testing: helpful or harmful? Alzheimer Dis Assoc Disord 2005;19(4):267-271. [CrossRef] [Medline]
- Laures-Gore J, Hamilton A, Matheny K. Coping resources, perceived stress, and life experiences in individuals with aphasia. The Aphasiology Archive. 2006. URL: http://aphasiology.pitt.edu/2190/1/230.pdf [accessed 2023-01-26]
- Laures-Gore JS, Buchanan TW. Aphasia and the neuropsychobiology of stress. J Clin Exp Neuropsychol 2015;37(7):688-700. [CrossRef] [Medline]
- Gass CS, Curiel RE. Test anxiety in relation to measures of cognitive and intellectual functioning. Arch Clin Neuropsychol 2011 Aug;26(5):396-404. [CrossRef] [Medline]
- Buckelew SP, Hannay HJ. Relationships among anxiety, defensiveness, sex, task difficulty, and performance on various neuropsychological tasks. Percept Mot Skills 1986 Oct;63(2 Pt 2):711-718. [CrossRef] [Medline]
- El Hachioui H, Visch-Brink EG, de Lau LML, van de Sandt-Koenderman MWME, Nouwens F, Koudstaal PJ, et al. Screening tests for aphasia in patients with stroke: a systematic review. J Neurol 2017 Feb;264(2):211-220 [FREE Full text] [CrossRef] [Medline]
- Albert ML. Treatment of aphasia. Arch Neurol 1998 Nov 01;55(11):1417-1419. [CrossRef] [Medline]
- Gur RC, Ragland JD, Moberg PJ, Turner TH, Bilker WB, Kohler C, et al. Computerized neurocognitive scanning: I. Methodology and validation in healthy people. Neuropsychopharmacology 2001 Nov;25(5):766-776. [CrossRef] [Medline]
- Bauer RM, Iverson GL, Cernich AN, Binder LM, Ruff RM, Naugle RI. Computerized neuropsychological assessment devices: joint position paper of the American Academy of Clinical Neuropsychology and the National Academy of Neuropsychology. Clin Neuropsychol 2012;26(2):177-196 [FREE Full text] [CrossRef] [Medline]
- Parsons TD, McMahan T, Kane R. Practice parameters facilitating adoption of advanced technologies for enhancing neuropsychological assessment paradigms. Clin Neuropsychol 2018 Jan;32(1):16-41. [CrossRef] [Medline]
- Kim H, Kim J, Kim DY, Heo J. Differentiating between aphasic and nonaphasic stroke patients using semantic verbal fluency measures with administration time of 30 seconds. Eur Neurol 2011;65(2):113-117. [CrossRef] [Medline]
- Wild K, Howieson D, Webbe F, Seelye A, Kaye J. Status of computerized cognitive testing in aging: a systematic review. Alzheimers Dement 2008 Nov;4(6):428-437 [FREE Full text] [CrossRef] [Medline]
- Zygouris S, Tsolaki M. Computerized cognitive testing for older adults: a review. Am J Alzheimers Dis Other Demen 2015 Feb;30(1):13-28 [FREE Full text] [CrossRef] [Medline]
- Charalambous AP, Pye A, Yeung WK, Leroi I, Neil M, Thodi C, et al. Tools for App- and Web-Based Self-Testing of Cognitive Impairment: Systematic Search and Evaluation. J Med Internet Res 2020 Jan 17;22(1):e14551 [FREE Full text] [CrossRef] [Medline]
- Behrens A, Berglund JS, Anderberg P. CoGNIT Automated Tablet Computer Cognitive Testing in Patients With Mild Cognitive Impairment: Feasibility Study. JMIR Form Res 2022 Mar 11;6(3):e23589 [FREE Full text] [CrossRef] [Medline]
- Kertesz A, Poole E. The aphasia quotient: the taxonomic approach to measurement of aphasic disability. Can J Neurol Sci 1974 Feb;1(1):7-16. [Medline]
- Kertesz A. The Western Aphasia Battery: a systematic review of research and clinical applications. Aphasiology 2020 Dec 31;36(1):21-50. [CrossRef]
- Shewan CM, Kertesz A. Reliability and validity characteristics of the Western Aphasia Battery (WAB). J Speech Hear Disord 1980 Aug;45(3):308-324. [CrossRef] [Medline]
- Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika 1951 Sep;16(3):297-334. [CrossRef]
- Bland JM, Altman DG. Cronbach's alpha. BMJ 1997 Feb 22;314(7080):572 [FREE Full text] [CrossRef] [Medline]
- Leong F, Austin J. The Psychology Research Handbook: A Guide for Graduate Students and Research Assistants. Thousand Oaks, California, USA: Sage; 2005.
- Guilford JP. Fundamental Statistics in Psychology and Education. New York, US: McGraw Hill; 1942.
- Bonett DG. Sample Size Requirements for Testing and Estimating Coefficient Alpha. Journal of Educational and Behavioral Statistics 2016 Nov 23;27(4):335-340. [CrossRef]
- Faul F, Erdfelder E, Buchner A, Lang A. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses. Behav Res Methods 2009 Nov;41(4):1149-1160. [CrossRef] [Medline]
- Fergadiotis G, Casilio M, Hula WD, Swiderski A. Computer Adaptive Testing for the Assessment of Anomia Severity. Semin Speech Lang 2021 Jun;42(3):180-191. [CrossRef] [Medline]
- Kilpatrick M, Hebert E, Jacobsen D. Physical Activity Motivation: A Practitioner's Guide to Self-Determination Theory. Journal of Physical Education, Recreation & Dance 2002 Apr;73(4):36-41. [CrossRef]
- Brown K, Ryan R. Fostering healthy self-regulation from within and without: A self-determination theory perspective. In: Positive Psychology In Practice. Hoboken, NJ, USA: Wiley; 2004:105-126.
|S: signed rank statistic|
|STELA: Short and Tailored Evaluation of Language Ability|
|WAB: Western Aphasia Battery|
|WAB-AQ: WAB–Aphasia Quotient|
Edited by A Mavragani; submitted 07.09.22; peer-reviewed by A Miguel-Cruz; comments to author 13.12.22; revised version received 01.01.23; accepted 03.01.23; published 08.02.23
Copyright
©Yoko Inamoto, Masahiko Mukaino, Sayuri Imaeda, Manami Sawada, Kumi Satoji, Ayako Nagai, Satoshi Hirano, Hideto Okazaki, Eiichi Saitoh, Shigeru Sonoda, Yohei Otaka. Originally published in JMIR Formative Research (https://formative.jmir.org), 08.02.2023.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.