Published on 25.10.2022 in Vol 6, No 10 (2022): October

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/39954.
Measuring the Usability of eHealth Solutions for Patients With Parkinson Disease: Observational Study


Original Paper

1Department of Neurology, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany

2German Center for Neurodegenerative Diseases, Dresden, Germany

*these authors contributed equally

Corresponding Author:

Jonas Bendig, Dr med

Department of Neurology

University Hospital and Faculty of Medicine Carl Gustav Carus

Technische Universität Dresden

Fetscherstrasse 74

Dresden, 01307

Germany

Phone: 49 351 458 19675

Email: Jonas.Bendig@uniklinikum-dresden.de


Background: Parkinson disease (PD) is a neurodegenerative disorder with a variety of motor and nonmotor symptoms. Many of these symptoms can be monitored by eHealth solutions, including smartphone apps, wearable sensors, and camera systems. The usability of such systems is a key factor in long-term use, but not much is known about the predictors of successful use and preferable methods to assess usability in patients with PD.

Objective: This study tested methods to assess usability and determined prerequisites for successful use in patients with PD.

Methods: We performed comprehensive usability assessments with 18 patients with PD using a mixed methods usability battery containing the System Usability Scale, a rater-based evaluation of device-specific tasks, and qualitative interviews. Each patient performed the usability battery with 2 of 3 randomly assigned devices: a tablet app, wearable sensors, and a camera system. The usability battery was administered at the beginning and at the end of a 4-day testing period. Between usability batteries, the systems were used by the patients during 3 sessions of motor assessments (wearable sensors and camera system) and at the movement disorder ward (tablet app).

Results: In this study, the rater-based evaluation of tasks discriminated best among the 3 eHealth solutions, whereas subjective modalities such as the System Usability Scale were not able to distinguish between the systems. Successful use was associated with different clinical characteristics for each system: eHealth literacy and cognitive function predicted successful use of the tablet app, and better motor function and lower age correlated with the independent use of the camera system. The successful use of the wearable sensors was independent of clinical characteristics. Unfortunately, patients who were not able to use the devices well provided few improvement suggestions in the qualitative interviews.

Conclusions: eHealth solutions should be developed with a specific set of patients in mind and subsequently tested in this cohort. For a complete picture, usability assessments should include a rater-based evaluation of task performance, and there is a need to develop strategies to circumvent the underrepresentation of poorly performing patients in qualitative usability research.

JMIR Form Res 2022;6(10):e39954

doi:10.2196/39954

Introduction

Parkinson disease (PD) is a neurodegenerative disorder characterized by a variety of motor and nonmotor symptoms. Despite the neurodegenerative nature of the disease, dopamine replacement therapy can drastically improve symptoms and quality of life, especially in the early stages of the disease [1]. With longer disease duration, symptoms often begin to fluctuate during the day, making the exact timing and dosage of medication more important [2]. eHealth solutions are becoming increasingly available, offering the potential to remind patients of their medications, assess the extent and timing of motor fluctuations, and ultimately help guide the decision for advanced therapeutic options such as deep brain stimulation and medication pumps [3-5]. In addition, eHealth solutions enable clinicians to assess patients over extended periods of time in their home environment. This method can help improve patient care but also provide more precise and more relevant end points for clinical trials [6].

A wide range of eHealth solutions has been examined in patients with PD, but most studies focus on selected subgroups of patients, such as those in earlier stages of the disease [7]. In the clinical routine, however, patients with PD are distributed across a wide range of age groups with diverse educational backgrounds, distinct motor impairments and—in many patients—important psychiatric and cognitive comorbidities [8]. There is a paucity of studies systematically investigating barriers for the successful implementation of eHealth solutions in the heterogeneous population of patients with PD [9].

In this context, usability research provides a variety of user-based methods that can be categorized into subjective and objective measures and into quantitative and qualitative assessments [10]. Quantitative methods primarily include questionnaires and task completions, with questionnaires being the most frequently used method in eHealth research. The questionnaire with the broadest implementation among usability studies is the System Usability Scale (SUS) [11], which provides a subjective assessment of usability by the patient. Task completions provide objective information but require a system-specific setup, which can be difficult and time-consuming and potentially limits comparability. Qualitative methods include focus groups, interviews, and think-alouds. In contrast to quantitative methods, they can be more useful in identifying specific usability problems but suffer from a lack of comparability; moreover, they require trained evaluators and laborious data analysis. Think-alouds and qualitative interviews are the most frequently used qualitative methods in the usability testing of eHealth solutions [11]. For patients with PD specifically, usability assessments have mainly relied on questionnaires and adherence monitoring and have mostly reported positive results for sensor systems and smartphone or tablet apps [12-15]. Although mixed methods approaches have become more common recently, there is substantial heterogeneity in the methods used, and only a minority of studies have focused specifically on usability. In the broader context of chronic conditions, a recent systematic review concluded that the usability of wearable devices is poorly measured and reported [15]. Furthermore, there is no consensus regarding the methodology to assess usability in older adults, even though investigations of the sensitivity of different methods have been explicitly recommended [16].

Against this background, we aimed to identify which methods are suitable for comprehensive usability testing in our primarily older cohort of patients with PD and which factors can predict the successful use of devices for telemedicine and home monitoring. For this objective, we designed a mixed methods usability battery based on the most commonly used quantitative and qualitative methods for eHealth solutions and tested the usability of 3 different devices, including (1) a tablet app, (2) wearable sensors, and (3) a camera system.


Methods

Study Population

In all, 18 patients were recruited from the ward for movement disorders at the University Hospital Dresden between July 2020 and September 2021. Written informed consent was obtained from all participants before inclusion in the study. Inclusion criteria were the diagnosis of clinically probable idiopathic PD by a specialist for movement disorders according to the current guidelines of the International Movement Disorders Society [17] and sufficient German language skills. Exclusion criteria were the inability to walk and any psychiatric comorbidity that, at the discretion of the investigator, precluded participation in the study.

Ethics Approval

The study was approved by the institutional review board of Technische Universität Dresden, Germany (BO-EK-212052020).

Tested Systems

We assessed 3 eHealth solutions that use different paradigms: (1) guided measurements at specific time points by a camera system; (2) continuous, implicit monitoring of mobility by wearable sensors; and (3) a combination of guided and continuous measurements by a tablet app. The systems were described in detail previously [18].

Briefly, the 3D-camera system (Motognosis Amsa; Motognosis GmbH) consisted of a stand-alone PC and a depth camera (Microsoft Kinect; version 2). Patients were instructed to perform motor exercises by prerecorded videos and audio instructions. Kinematic parameters were derived from the exercises to describe patients’ mobility and symptoms.

The wearable system (PDMonitor; PD Neurotechnology Ltd) consisted of five 9-axis inertial measurement unit sensors worn on the wrists, shanks, and trunk. The data from the sensors were used to analyze patients’ motor status [19]. The PDMonitor mobile app was not used in this study, nor was the device used for the clinical assessment of motor symptoms or for treatment modification.

The tablet app (TelePark tablet app; Intecsoft group) included a medication alert, questionnaires, fall documentation, activity documentation, and a task reminder.

The tablet app represented a system that was still at an early stage of development, whereas the 3D-camera system and the wearable sensors were already fully developed and licensed medical products.

Study Schedule

Inpatients completed 4 assessments on 4 days within a maximum period of 7 days during their stay at the movement disorders ward (Figure 1). On day 1, patients performed the baseline assessment and the first mixed methods usability testing battery (detailed below). Routine motor testing was carried out on days 2 and 3 (detailed below). Patients were filmed with the camera system and wore the wearable sensors during the motor testing sessions. Between assessments, patients used the TelePark app to complete questionnaires and an electronic version of the Hauser diary [20]. The wearable sensors and the camera system were only used or worn during the motor tests and usability batteries. Patients were encouraged to put on or remove the sensors independently but received help from the study personnel if requested. On day 4, patients performed a final round of motor testing and the second usability testing battery.

Figure 1. Schematic overview of the study schedule. To keep the assessments efficient, only 2 of the 3 devices (tablet app, camera system, and wearable sensors) were tested per patient, resulting in 3 groups of patients that used the same set of devices. UEQ: User Experience Questionnaire; SUS: System Usability Scale.

Baseline Assessment

Patients were assessed with rater-based scales and self-report questionnaires to evaluate motor and cognitive function as well as eHealth literacy. The questionnaires were filled out digitally by the patients in the TelePark app. If patients were not able to independently complete the questionnaires on the tablet, they were supported by the investigators. The following scales and questionnaires were used in the baseline assessment: the Freezing of Gait Questionnaire (FOG-Q) [21], Hoehn and Yahr scale [22], Unified Parkinson’s Disease Rating Scale III (UPDRS III) [23], Beck Depression Inventory-II [24], Montreal Cognitive Assessment (MOCA) [25], and eHealth Literacy Scale (EHEALS) [26]. The total score of the EHEALS ranges between 8 and 40, with higher scores indicating higher self-perceived eHealth literacy.

Motor Testing

Motor testing consisted of a UPDRS III, a timed up-and-go test [27], a freezing of gait test [28], a Mini-BESTest [29], fast 360° turns, the video-instructed Motognosis Amsa protocol (finger tapping, stand up and sit down, stance with closed feet, comfortable 360° turns, stepping in place, short comfortable speed walk, and short maximum speed walk), and the operator-instructed Motognosis PASS-PD protocol (finger tapping, hand grasping, arm holding, finger-nose test, foot tapping, stand up and sit down, stance with closed feet, comfortable 360° turns, stepping in place, comfortable walk, and maximum speed walk). During the period of the assessment, patients were filmed by the 3D-camera system and wore the wearable sensors.

Usability Testing Battery

The usability testing battery was performed on the first and the last day of the study. To reduce patient burden, each patient assessed the usability of only 2 of the 3 study components (tablet app and camera, tablet app and wearables, or camera and wearables). The devices were assigned randomly to the patients by a prespecified permuted list. Usability was assessed for each device separately.
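For illustration, a balanced permuted list of this kind can be generated in a few lines of Python. This is a hypothetical sketch, not the authors' actual assignment list; it merely shows why such a scheme yields 12 patients per device:

```python
# Hypothetical sketch of a prespecified permuted assignment list (not the
# authors' actual list): each patient tests 2 of the 3 devices, and each
# block of 3 patients cycles through the 3 possible device pairs in
# random order, so every device ends up with 12 of 18 patients.
import random

pairs = [
    ("tablet app", "camera system"),
    ("tablet app", "wearable sensors"),
    ("camera system", "wearable sensors"),
]
rng = random.Random(42)  # fixed seed: the list is reproducible/prespecified

blocks = [rng.sample(pairs, k=len(pairs)) for _ in range(6)]
assignments = [pair for block in blocks for pair in block]  # 18 patients
print(assignments[0])  # devices assigned to patient 1
```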

First, patients were given a standardized explanation of the device. Patients were then instructed to carry out 7 device-related tasks, covering all important functions of the systems, as independently as possible. These tasks were setting up the camera and performing different tasks in the Amsa protocol (camera system); putting on the sensors and handling the charging procedure and the data transfer processes (wearable sensors); and using all relevant functions in the app (tablet app). The execution of the tasks was observed by the investigators and rated on a 6-point ordinal scale according to the independence of task execution (ranging from 5=“Does not need help; does not consult manual” to 0=“Can contribute nothing or almost nothing to the implementation of the task”). The sum of all 7 independence ratings was transformed into a rater-based independence score ranging from 0% (no independent use in any task) to 100% (fully independent use in all tasks) with the following formula:

independence score = (sum of the 7 ratings / 35) × 100%
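A minimal Python sketch of this transformation (for clarity; not the authors' analysis code):

```python
# Minimal sketch: transform the 7 task ratings (0-5 each) into the
# rater-based independence score (0%-100%).
def independence_score(ratings):
    assert len(ratings) == 7 and all(0 <= r <= 5 for r in ratings)
    return sum(ratings) / 35 * 100  # 35 = 7 tasks x maximum rating of 5

# Example: a patient who needed occasional help on two tasks
print(round(independence_score([5, 5, 4, 5, 3, 5, 5]), 1))  # 91.4
```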

After the task-related device testing, patients filled out the SUS [30] and were asked again how confident they now felt about using the devices alone in a home monitoring setting (confidence score from 0% to 100%). The SUS is a 10-item Likert scale assessing subjective usability, with items covering aspects such as the perceived complexity of a system, the user’s confidence in using it, and its learnability. The SUS has been widely used, and normative data exist that allow SUS ratings to be positioned relative to other systems [31]. Furthermore, it has been shown that the SUS can provide valid scores even with small sample sizes [32].
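For reference, the standard SUS scoring rule underlying these values (Brooke [30]) can be sketched as follows; the example responses are invented:

```python
# Standard SUS scoring: 10 items rated 1-5; odd-numbered items are
# positively worded, even-numbered items negatively worded.
def sus_score(responses):
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    raw = sum((r - 1) if i % 2 == 1 else (5 - r)
              for i, r in enumerate(responses, start=1))
    return raw * 2.5  # scales the 0-40 raw sum to the familiar 0-100 range

print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 3]))  # 77.5
```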

Finally, we conducted an interview based on domains of established usability instruments (the SUS and the User Experience Questionnaire [33]). This interview took 15 to 30 minutes and consisted of 12 open-ended questions concerning the following domains: attractiveness, independent use, learnability, perspicuity, efficiency, stimulation, and novelty. For each domain, patients were asked 2 open-ended questions: one about their opinion of the domain quality and one about improvement suggestions in that specific domain. The same procedure was then carried out for the second device.

Data Analyses and Sample Size

Data are depicted as median with 25th and 75th percentiles or as mean with SD, depending on data normality as assessed by visual inspection of histograms. To assess differences between the systems, a Kruskal-Wallis test with post hoc Dunn tests was used. Because of the small sample size and the exploratory nature of the study, no correction for multiple testing was applied. Predictors of successful use were identified by correlation analysis (Spearman ρ). Significant correlations (P<.05) were visualized in a network graph with the ForceAtlas2 algorithm [34]. The temporal stability of usability outcomes was assessed by comparing the first and the second measurement with a Wilcoxon signed-rank test. Data visualization and statistical analyses were performed with Python (Statsmodels, SciPy, Matplotlib, and Seaborn packages) and Gephi software. The sample size of 12 patients per system was determined using guidelines for conducting qualitative research [35].
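A sketch of how these tests can be run in Python with toy data; the Dunn test here uses the scikit-posthocs package, which is an assumption (the study's exact implementation may differ):

```python
# Sketch of the main tests described above: Kruskal-Wallis across the
# 3 devices, uncorrected post hoc Dunn tests, and a Spearman correlation.
import pandas as pd
from scipy.stats import kruskal, spearmanr
import scikit_posthocs as sp  # assumption: Dunn test via scikit-posthocs

df = pd.DataFrame({
    "device": ["app"] * 4 + ["camera"] * 4 + ["wearable"] * 4,
    "independence": [60, 71, 80, 77, 86, 91, 94, 97, 89, 94, 97, 100],
})

groups = [g["independence"] for _, g in df.groupby("device")]
H, p = kruskal(*groups)  # omnibus difference between the 3 systems
dunn_p = sp.posthoc_dunn(df, val_col="independence", group_col="device")
# p_adjust is left at None: no correction for multiple testing (see above)

rho, p_rho = spearmanr([68, 74, 58, 81], [80, 60, 95, 55])  # eg, age vs score
```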


Results

In total, 19 patients were included in the study; 1 patient dropped out after the first usability battery for undisclosed personal reasons. The clinical and demographic data of the remaining 18 patients are summarized in Table 1.

Table 1. Clinical and demographic data. Data are presented as mean with SD or median with absolute range.

Patients, n: 18
Age (years), median (range): 69 (37-86)
Sex, n (%): female, 7 (39); male, 11 (61)
Hoehn and Yahr stage, median (range): 3 (1-4)
Disease duration (years), mean (SD): 11 (7.3)
UPDRS IIIa score, mean (SD): 27 (9.0)
MOCAb score, mean (SD): 25 (2.7)
EHEALSc score, mean (SD): 23 (8.8)
BDI-IId score, mean (SD): 12 (7.4)
FOG-Qe score, mean (SD): 11 (5.1)

aUPDRS III: Unified Parkinson’s Disease Rating Scale III.

bMOCA: Montreal Cognitive Assessment.

cEHEALS: eHealth Literacy Scale.

dBDI-II: Beck Depression Inventory-II.

eFOG-Q: Freezing of Gait Questionnaire.

Testing Usability Measures

The SUS is a widely used score for a quick and simple assessment of usability [31]. SUS scores from the second usability battery did not differ significantly between devices (P=.34, Kruskal-Wallis test; Figure 2A). In addition, we compared the confidence scores (patient-rated) and the task-based independence scores (investigator-rated) between the 3 systems (Figure 2A). The confidence and independence scores showed a pronounced ceiling effect, whereas the SUS scores were more evenly distributed (Figure 2B). Only the independence scores differentiated between the app (ie, the device still at an early stage of development) and the fully developed and licensed systems (P=.006, Kruskal-Wallis test; post hoc tests in Figure 2A). For the subsequent correlation analyses, we therefore selected the objective independence score as the most relevant measure of successful use.

Figure 2. Comparison of independence scores, confidence scores, and SUS scores from the second usability battery: (A) box plots and (B) histograms. P values are from the Dunn test without correction after a significant Kruskal-Wallis test. Box plots depict the median (black line), IQR (boxes), range (whiskers), and outliers (diamonds; >75th percentile + 1.5 IQR or <25th percentile – 1.5 IQR). SUS: System Usability Scale.

Identifying Predictors for Successful Use

To identify factors that predict whether patients are able to use a device well, we plotted correlation matrices to explore the interdependence between the rater-based independence score, the SUS, and baseline parameters. In addition to the independence scores and SUS scores, the following variables were used in the correlation analysis: age, sex, Hoehn and Yahr stage, UPDRS III, FOG-Q, MOCA, and EHEALS. The network graph of correlations shows that the rater-based independence scores for the wearable sensors (yellow), camera system (green), and tablet app (red) do not cluster together (Figure 3), indicating that the prerequisites for successful use differ between the 3 systems. The independent use of the wearable sensors did not correlate significantly with any clinical characteristic (age: P=.07; sex: P=.38; Hoehn and Yahr stage: P=.44; UPDRS III: P=.59; FOG-Q: P=.94; MOCA: P=.40; EHEALS: P=.68), but only 3 (25%) of the 12 patients were not able to use the system fully independently. This finding implies that the sensors were usable for most patients regardless of their clinical characteristics. The independent use of the camera system correlated strongly with age and motor scores (FOG-Q, UPDRS III, and Hoehn and Yahr stage), and the independent use of the tablet app showed strong correlations with cognition (MOCA) and eHealth literacy (EHEALS). Table 2 shows the strongest correlations with the rater-based independence score for each system.

In contrast to the rater-based independence scores, we found no significant correlations of the subjective and more variable SUS scores with the clinical measures for the tablet app (age: P=.79; sex: P=.89; Hoehn and Yahr stage: P=.85; UPDRS III: P=.92; FOG-Q: P=.14; MOCA: P=.28; EHEALS: P=.07), the wearable sensors (age: P=.78; sex: P=.15; Hoehn and Yahr stage: P=.52; UPDRS III: P=.99; FOG-Q: P=.12; MOCA: P=.96; EHEALS: P=.19), or the camera system (age: P=.45; sex: P=.70; Hoehn and Yahr stage: P=.62; UPDRS III: P=.16; FOG-Q: P=.49; MOCA: P=.99; EHEALS: P=.26).
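To illustrate how the edge list for such a correlation network can be assembled for Gephi, a sketch with synthetic data and hypothetical column names (not the authors' code):

```python
# Sketch: pairwise Spearman correlations, keeping only significant edges
# (P<.05, uncorrected) as a Gephi-readable edge list. Data and column
# names are synthetic/hypothetical.
from itertools import combinations
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
moca = rng.normal(25, 2.7, size=18)
data = pd.DataFrame({
    "age": rng.normal(69, 10, size=18),
    "MOCA": moca,
    "independence_app": 3 * moca + rng.normal(0, 5, size=18),  # correlated
})

edges = []
for a, b in combinations(data.columns, 2):
    rho, p = spearmanr(data[a], data[b])
    if p < 0.05:
        edges.append({"Source": a, "Target": b, "Weight": abs(rho)})

pd.DataFrame(edges).to_csv("edges.csv", index=False)  # import into Gephi
```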

Figure 3. Network graph of correlations (Spearman ρ) between baseline variables, SUS scores, and independence scores from the second usability battery. Only significant correlations (P<.05, uncorrected) are included in the network. The relative size of each node indicates its absolute number of connections, and the thickness of a connection indicates the magnitude of the correlation (thicker lines indicate stronger correlations). BDI-II: Beck Depression Inventory-II; EHEALS: eHealth Literacy Scale; FOG-Q: Freezing of Gait Questionnaire; MOCA: Montreal Cognitive Assessment; UPDRS III: Unified Parkinson's Disease Rating Scale III; SUS: System Usability Scale.
Table 2. Correlations of clinical characteristics with independent use. The 3 strongest correlations (Spearman ρ) with P values between clinical characteristics and independence scores for the 3 systems are shown.
Device, clinical characteristic: Spearman ρ, P value

Tablet app
    EHEALSa: ρ=0.90, P<.001
    MOCAb: ρ=0.89, P<.001
    Age: ρ=–0.63, P=.03
Camera system
    FOG-Qc: ρ=–0.80, P=.002
    Hoehn and Yahr: ρ=–0.72, P=.009
    Age: ρ=–0.71, P=.009
Wearable sensors
    —d

aEHEALS: eHealth Literacy Scale.

bMOCA: Montreal Cognitive Assessment.

cFOG-Q: Freezing of Gait Questionnaire.

dFor the wearable sensors, no significant correlations were found.

Temporal Change in Usability Outcomes

To assess the system-specific learnability and the stability of the usability outcomes, we compared usability outcomes between the first and second round of the usability battery on days 1 and 4, respectively. For the tablet app, independence and confidence scores did not differ significantly between the 2 time points (independence: mean 79.5%, SD 25.6% vs mean 75.5%, SD 25.8%; P=.34; confidence: mean 75.4%, SD 24.3% vs mean 69.6%, SD 38.8%; P=.29). The camera system, in contrast, had a significantly higher confidence score in the second usability battery (mean 63.3%, SD 32.5% vs mean 84.7%, SD 19.8%; P=.008); independence scores were high at both time points (mean 89.3%, SD 14.8% vs mean 93.8%, SD 7.9%; P=.12). The wearable sensors showed a significantly higher independence score in the second usability battery (mean 91.4%, SD 9.2% vs mean 97.9%, SD 4.1%; P=.03); confidence ratings did not change (mean 79.5%, SD 25.6% vs mean 75.5%, SD 25.8%; P=.67). SUS scores did not change significantly between the 2 time points for any of the tested systems (tablet app: P=.18; camera system: P=.20; wearable sensors: P=.88). These system-specific changes in usability outcomes indicate a different learnability for each individual system and underscore the importance of longitudinal usability assessments. Furthermore, they suggest that performance and confidence may diverge. The lack of difference in the SUS scores between the 2 time points is consistent with the lack of difference in the SUS scores between the 3 systems (Figure 2), suggesting that the SUS can miss important aspects of usability.
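The day 1 versus day 4 comparisons correspond to a paired, nonparametric test. A sketch with invented scores (not the study data):

```python
# Sketch of a first- vs second-battery comparison (paired per patient,
# Wilcoxon signed-rank test); the 12 confidence scores are invented.
from scipy.stats import wilcoxon

day1 = [55, 40, 80, 70, 30, 65, 75, 60, 85, 50, 72, 78]
day4 = [85, 70, 95, 88, 60, 90, 82, 80, 100, 75, 86, 92]
stat, p = wilcoxon(day1, day4)
print(f"W={stat}, P={p:.3f}")
```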

Influences on Qualitative Interviews

In the qualitative section of the first and second usability batteries, patients were asked for improvement suggestions for the eHealth solutions. To determine predictors of qualitative feedback, we counted the total number of unique improvement suggestions per patient and correlated it with usability outcomes and clinical characteristics. We found moderate-to-strong and highly significant correlations with independence scores, confidence scores, eHealth literacy, motor phenotype, and age (Figure 4). These correlations suggest that patients who were able to use the devices well gave more valid improvement suggestions than patients who were not. Patients giving more feedback were also younger, had less motor disability, and had higher eHealth literacy. SUS scores did not correlate with the number of improvement suggestions, suggesting that the subjective rating of an eHealth solution does not affect the amount of feedback given.

Figure 4. Linear regression plots of valid improvement suggestions with (A) independence scores, (B) confidence scores, (C) SUS scores, and (D-F) clinical characteristics. The strength of the correlation (Spearman ρ) is indicated in the plot. All P values for the Spearman correlations were <.001 except for the SUS score (P=.11). Individual values for each system are plotted. Improvement suggestions are aggregated from both usability measurements for each system. EHEALS: eHealth Literacy Scale; UPDRS III: Unified Parkinson's Disease Rating Scale III; SUS: System Usability Scale.

Discussion

Principal Findings

In this study, we performed a comprehensive usability battery on 3 eHealth solutions, using subjective and objective assessments. The objective rater-based evaluation of tasks (independence score) discriminated better between the different eHealth solutions than the subjective quantitative usability scale (SUS). Moreover, the successful use of each eHealth solution was associated with specific clinical characteristics: cognitive ability and eHealth literacy for the tablet app, and motor ability and age for the camera system. Finally, most improvement suggestions were provided by patients who were able to use the eHealth solutions well.

Comparison Between Usability Measures

There is a paucity of data on the sensitivity of usability testing methods [16], and optimal methods for specific eHealth solutions or cohorts have not been identified [11]. We therefore compared usability as reported by the quantitative and easy-to-use SUS with patient-rated confidence and investigator-rated independence in prespecified settings. Given that the 3 eHealth solutions investigated here (tablet app, camera system, and wearable sensors) differed strongly in complexity and development stage, we expected to find differences with all methods. However, only the rater-based independence scores showed a significant difference between the 3 technologies. The ceiling effect of the independence scores could indicate that the systems were indeed easy to use for many patients; alternatively, the prespecified tasks may not have been difficult enough. As the tasks were developed to cover all relevant functions of each system, we interpreted this ceiling effect as successful use. The SUS score did not show a ceiling effect, but it did not differentiate between the fully developed systems and the less developed system in our study. Furthermore, the SUS did not reflect the increase in confidence and independence between the first and second time point of testing. Collectively, these findings are in line with similar studies in which successful use was not associated with higher SUS scores [36,37]. They suggest that this well-established scale could miss important information in the population of patients with PD as well as in other populations of older or cognitively impaired people. The recent development of a simplified SUS for older adults is in line with this interpretation [38].

The improvements in confidence or independence scores for the camera system and the wearable sensors indicate that, even within a short period of 4 days, older adults (1) are able to change their perspective toward eHealth solutions and (2) can learn to handle such systems. The lack of improvement for the tablet app shows that learnability depends on the eHealth solution, which is in line with previous results from other studies [39]. These results should caution researchers against relying on a single test to predict successful use. Based on our results, we recommend a short rater-based test, a subjective patient rating validated in older adults (eg, a questionnaire), and a trial period for each patient and device before applying eHealth solutions in trials or clinical practice.

Predictors of Successful Use

Predictors of successful use differed strongly between the individual eHealth solutions (Figure 3). For the app, the strong associations with cognitive function and eHealth literacy indicate that both constructs need to be considered in the design of mobile health systems with a largely software-based interface [40,41]. Hence, eHealth solutions should be developed with a specific range of cognitive function and eHealth literacy in mind and then be tested and marketed for this group of patients. For the camera system, older patients with more severe motor symptoms had more problems, which is not surprising given the mainly physical nature of performing guided tasks in front of the camera. In contrast, the wearable sensors were usable by most patients regardless of their clinical characteristics. Collectively, our findings align well with the MOLD-US framework, in which the usability prerequisites for the app system fall into the domains of cognition and motivation, and the prerequisites for the camera system are associated with physical ability [42].

The subjective usability reported by the SUS is nevertheless relevant: patients must be willing to start using an eHealth solution and to continue using it without attrition [10], and indeed, SUS scores varied considerably between participants (Figure 2). However, we were not able to determine predictors of the subjective usability scores reported by the SUS, likely owing to the small sample size (n=12 per system); only for the app did we observe an association between the SUS and independence scores in the graph analysis (Figure 3). This question therefore needs to be addressed in subsequent studies with more participants. Moreover, attrition could not be assessed in this short and highly standardized paradigm.

Improvement Suggestions

We found a strong positive correlation between successful use and a patient’s ability to advise on possible improvements to the tested systems during the qualitative interview (Figure 4A). In other words, suggestions came mainly from individuals who had no problems using the system. Established methods such as think-alouds or focus groups could therefore suffer from an overrepresentation of opinions voiced by well-performing, mildly affected patients, and it is not clear whether following these suggestions would improve or worsen usability for those who have trouble using a system successfully. Counting the total number of improvement suggestions does not take the quality of the suggestions into account; thus, the presented results should be reinvestigated more thoroughly in future studies. Furthermore, the reported correlation could also be mediated or moderated by age, disease severity, cognitive status, or eHealth literacy (Figure 4D-F). However, inferring causal connections between highly interconnected variables was beyond the scope of this study, and to our knowledge, no articles have comprehensively assessed the effects of these variables in the context of qualitative usability research. With an aging population in Western countries and a predicted rise in the number of patients living with neurodegenerative diseases [43,44], a critical assessment of qualitative usability methods in the context of the target group is warranted.

Limitations

Limitations of our study include the small sample size, only a single recruitment site, and the controlled inpatient setting. Furthermore, the comparison of different methods is based on a subset of the existing tools that does not include techniques such as think-alouds, focus groups, or alternative measures of efficiency (eg, time to complete a task). This limitation reduces the generalizability of our findings and warrants further investigation with different systems, settings, and patient cohorts. Moreover, the high correlations between eHealth literacy, motor symptoms, cognitive impairment, and age limit the causal interpretability of the obtained results.

Conclusions

The successful use of eHealth solutions in patients with PD depends strongly on both system-specific and patient-specific characteristics. Considering the growing field of digital health and the existing abundance of solutions for patients with PD [4,45,46], researchers and industrial partners need to consider the heterogeneity of patients and design eHealth solutions for a specific constellation of age, cognitive and motor function, and eHealth literacy. These criteria can also help physicians select the best solution for each individual patient.

Acknowledgments

This research was funded by the European Regional Development Fund. Medical devices from the companies PD Neurotechnology Ltd (wearable sensors) and Motognosis GmbH (camera system) were provided free of charge. Motognosis GmbH and PD Neurotechnology Ltd agreed upon cooperation with the Technical University of Dresden for the TelePark study. These agreements include the mutual use of pseudonymized clinical data and the data from each respective system for the development and improvement of algorithms. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Authors' Contributions

For the research project, JB, KFL, MS, and BHF contributed to conceptualization; AS, JL, JB, and AF contributed to project administration; and AS, JL, and JB contributed to the investigation. For the statistical analysis, JB, AF, KFL, and BHF contributed to the methodology; AS and JB conducted the formal analysis; and BHF and HR contributed to supervision. For manuscript preparation, JB, AF, and BHF wrote the original draft, and all authors contributed to writing (review and editing). JB and BHF take responsibility for the integrity of the data and the accuracy of the data analysis.

Conflicts of Interest

None declared.

  1. Obeso JA, Stamelou M, Goetz CG, Poewe W, Lang AE, Weintraub D, et al. Past, present, and future of Parkinson's disease: a special essay on the 200th anniversary of the shaking palsy. Mov Disord 2017 Sep;32(9):1264-1310 [FREE Full text] [CrossRef] [Medline]
  2. Fox SH, Katzenschlager R, Lim S, Barton B, de Bie RMA, Seppi K, Movement Disorder Society Evidence-Based Medicine Committee. International Parkinson and movement disorder society evidence-based medicine review: update on treatments for the motor symptoms of Parkinson's disease. Mov Disord 2018 Aug;33(8):1248-1266. [CrossRef] [Medline]
  3. Sica M, Tedesco S, Crowe C, Kenny L, Moore K, Timmons S, et al. Continuous home monitoring of Parkinson's disease using inertial sensors: a systematic review. PLoS One 2021 Apr 04;16(2):e0246528 [FREE Full text] [CrossRef] [Medline]
  4. Sibley KG, Girges C, Hoque E, Foltynie T. Video-based analyses of Parkinson's disease severity: a brief review. J Parkinsons Dis 2021 Jul 16;11(s1):S83-S93 [FREE Full text] [CrossRef] [Medline]
  5. Little MA. Smartphones for remote symptom monitoring of Parkinson's disease. J Parkinsons Dis 2021 Jul 16;11(s1):S49-S53 [FREE Full text] [CrossRef] [Medline]
  6. Espay AJ, Bonato P, Nahab FB, Maetzler W, Dean JM, Klucken J, Movement Disorders Society Task Force on Technology. Technology in Parkinson's disease: challenges and opportunities. Mov Disord 2016 Sep;31(9):1272-1282 [FREE Full text] [CrossRef] [Medline]
  7. Morgan C, Rolinski M, McNaney R, Jones B, Rochester L, Maetzler W, et al. Systematic review looking at the use of technology to measure free-living symptom and activity outcomes in Parkinson's disease in the home or a home-like environment. J Parkinsons Dis 2020 Apr 03;10(2):429-454 [FREE Full text] [CrossRef] [Medline]
  8. Greenland JC, Williams-Gray CH, Barker RA. The clinical heterogeneity of Parkinson's disease and its therapeutic implications. Eur J Neurosci 2019 Feb;49(3):328-338. [CrossRef] [Medline]
  9. Cancela J, Pastorino M, Tzallas AT, Tsipouras MG, Rigas G, Arredondo MT, et al. Wearability assessment of a wearable system for Parkinson's disease remote monitoring based on a body area network of sensors. Sensors (Basel) 2014 Sep 16;14(9):17235-17255 [FREE Full text] [CrossRef] [Medline]
  10. Hornbæk K. Current practice in measuring usability: challenges to usability studies and research. Int J Hum Comput Stud 2006 Feb;64(2):79-102. [CrossRef]
  11. Maramba I, Chatterjee A, Newman C. Methods of usability testing in the development of eHealth applications: a scoping review. Int J Med Inform 2019 Jun;126:95-104. [CrossRef] [Medline]
  12. Bouça-Machado R, Pona-Ferreira F, Leitão M, Clemente A, Vila-Viçosa D, Kauppila LA, et al. Feasibility of a mobile-based system for unsupervised monitoring in Parkinson's disease. Sensors (Basel) 2021 Jul 21;21(15):4972 [FREE Full text] [CrossRef] [Medline]
  13. Gatsios D, Antonini A, Gentile G, Marcante A, Pellicano C, Macchiusi L, et al. Feasibility and utility of mHealth for the remote monitoring of Parkinson disease: ancillary study of the PD_manager randomized controlled trial. JMIR mHealth uHealth 2020 Jun 29;8(6):e16414 [FREE Full text] [CrossRef] [Medline]
  14. Mascheroni A, Choe EK, Luo Y, Marazza M, Ferlito C, Caverzasio S, et al. The SleepFit tablet application for home-based clinical data collection in Parkinson disease: user-centric development and usability study. JMIR mHealth uHealth 2021 Jun 08;9(6):e16304 [FREE Full text] [CrossRef] [Medline]
  15. Keogh A, Argent R, Anderson A, Caulfield B, Johnston W. Assessing the usability of wearable devices to measure gait and physical activity in chronic conditions: a systematic review. J Neuroeng Rehabil 2021 Sep 15;18(1):138 [FREE Full text] [CrossRef] [Medline]
  16. Silva AG, Caravau H, Martins A, Almeida AMP, Silva T, Ribeiro O, et al. Procedures of user-centered usability assessment for digital solutions: scoping review of reviews reporting on digital solutions relevant for older adults. JMIR Hum Factors 2021 Jan 13;8(1):e22774 [FREE Full text] [CrossRef] [Medline]
  17. Postuma RB, Berg D, Stern M, Poewe W, Olanow CW, Oertel W, et al. MDS clinical diagnostic criteria for Parkinson's disease. Mov Disord 2015 Oct;30(12):1591-1601. [CrossRef] [Medline]
  18. Bendig J, Wolf A, Mark T, Frank A, Mathiebe J, Scheibe M, et al. Feasibility of a multimodal telemedical intervention for patients with Parkinson's disease-a pilot study. J Clin Med 2022 Feb 18;11(4):1074 [FREE Full text] [CrossRef] [Medline]
  19. Kostikis N, Rigas G, Konitsiotis S, Fotiadis D. Motor fluctuations and dyskinesia. Mov Disord Clin Pract 2020 Mar 17;7(S2):S6-S44. [CrossRef]
  20. Hauser RA, Deckers F, Lehert P. Parkinson's disease home diary: further validation and implications for clinical trials. Mov Disord 2004 Dec;19(12):1409-1413. [CrossRef] [Medline]
  21. Nilsson MH, Hariz G, Wictorin K, Miller M, Forsgren L, Hagell P. Development and testing of a self administered version of the Freezing of Gait Questionnaire. BMC Neurol 2010 Sep 23;10:85 [FREE Full text] [CrossRef] [Medline]
  22. Hoehn MM, Yahr MD. Parkinsonism: onset, progression and mortality. Neurology 1967 May;17(5):427-442. [CrossRef] [Medline]
  23. Movement Disorder Society Task Force on Rating Scales for Parkinson's Disease. The Unified Parkinson's Disease Rating Scale (UPDRS): status and recommendations. Mov Disord 2003 Jul;18(7):738-750. [CrossRef] [Medline]
  24. Beck AT, Steer RA, Ball R, Ranieri W. Comparison of Beck Depression Inventories -IA and -II in psychiatric outpatients. J Pers Assess 1996 Dec;67(3):588-597. [CrossRef] [Medline]
  25. Nasreddine ZS, Phillips NA, Bédirian V, Charbonneau S, Whitehead V, Collin I, et al. The Montreal Cognitive Assessment, MoCA: a brief screening tool for mild cognitive impairment. J Am Geriatr Soc 2005 Apr;53(4):695-699. [CrossRef] [Medline]
  26. Norman CD, Skinner HA. eHEALS: the eHealth Literacy Scale. J Med Internet Res 2006 Nov 14;8(4):e27 [FREE Full text] [CrossRef] [Medline]
  27. Podsiadlo D, Richardson S. The timed "Up & Go": a test of basic functional mobility for frail elderly persons. J Am Geriatr Soc 1991 Feb;39(2):142-148. [CrossRef] [Medline]
  28. Ziegler K, Schroeteler F, Ceballos-Baumann AO, Fietzek UM. A new rating instrument to assess festination and freezing gait in Parkinsonian patients. Mov Disord 2010 Jun 15;25(8):1012-1018. [CrossRef] [Medline]
  29. Franchignoni F, Horak F, Godi M, Nardone A, Giordano A. Using psychometric techniques to improve the Balance Evaluation Systems Test: the mini-BESTest. J Rehabil Med 2010 Apr;42(4):323-331 [FREE Full text] [CrossRef] [Medline]
  30. Brooke J. SUS: a 'quick and dirty' usability scale. In: Jordan PW, Thomas B, McClelland IL, Weerdmeester B, editors. Usability Evaluation in Industry. London, UK: Taylor & Francis, CRC Press; 1996:6.
  31. Lewis JR. The System Usability Scale: past, present, and future. Int J Hum Comput Interact 2018 Mar 30;34(7):577-590. [CrossRef]
  32. Tullis TS, Stetson JN. A comparison of questionnaires for assessing website usability. 2004 Presented at: UPA 2004: 13th Annual Usability Professionals' Association Conference; June 7-11, 2004; Minneapolis, MN p. 1-12   URL: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.396.3677&rep=rep1&type=pdf
  33. Laugwitz B, Held T, Schrepp M. Construction and evaluation of a user experience questionnaire. 2008 Presented at: USAB 2008: HCI and Usability for Education and Work; November 20-21, 2008; Graz, Austria p. 63-76. [CrossRef]
  34. Jacomy M, Venturini T, Heymann S, Bastian M. ForceAtlas2, a continuous graph layout algorithm for handy network visualization designed for the Gephi software. PLoS One 2014 Jun 10;9(6):e98679 [FREE Full text] [CrossRef] [Medline]
  35. Guest G, Bunce A, Johnson L. How many interviews are enough? Field Methods 2016 Jul 21;18(1):59-82. [CrossRef]
  36. Broekhuis M, van Velsen L, Hermens H. Assessing usability of eHealth technology: a comparison of usability benchmarking instruments. Int J Med Inform 2019 Aug;128:24-31. [CrossRef] [Medline]
  37. Gibson A, Ryan A, Bunting B, McCauley C, Laird L, Ferry F, et al. Assessing usability testing for people living with dementia. 2016 Oct 13 Presented at: REHAB '16: 4th Workshop on ICTs for improving Patients Rehabilitation Research Techniques; October 13-14, 2016; Lisbon, Portugal p. 25-31. [CrossRef]
  38. Holden RJ. A Simplified System Usability Scale (SUS) for cognitively impaired and older adults. Proc Int Symp Hum Factors Ergon Heal Care 2020 Sep 16;9(1):180-182. [CrossRef]
  39. Barnard Y, Bradley MD, Hodgson F, Lloyd AD. Learning to use new technologies by older adults: perceived difficulties, experimentation behaviour and usability. Comput Human Behav 2013 Jul;29(4):1715-1724. [CrossRef]
  40. Kreps GL. The relevance of health literacy to mHealth. Stud Health Technol Inform 2017;240:347-355. [Medline]
  41. El Benny M, Kabakian-Khasholian T, El-Jardali F, Bardus M. Application of the eHealth literacy model in digital health interventions: scoping review. J Med Internet Res 2021 Jun 03;23(6):e23473 [FREE Full text] [CrossRef] [Medline]
  42. Wildenbos GA, Peute L, Jaspers M. Aging barriers influencing mobile health usability for older adults: a literature based framework (MOLD-US). Int J Med Inform 2018 Jun;114:66-75. [CrossRef] [Medline]
  43. GBD 2019 Dementia Forecasting Collaborators. Estimation of the global prevalence of dementia in 2019 and forecasted prevalence in 2050: an analysis for the Global Burden of Disease Study 2019. Lancet Public Health 2022 Feb 01;7(2):e105-e125 [FREE Full text] [CrossRef] [Medline]
  44. Bach J, Ziegler U, Deuschl G, Dodel R, Doblhammer-Reiter G. Projected numbers of people with movement disorders in the years 2030 and 2050. Mov Disord 2011 Oct;26(12):2286-2290. [CrossRef] [Medline]
  45. Del Din S, Kirk C, Yarnall AJ, Rochester L, Hausdorff JM. Body-worn sensors for remote monitoring of Parkinson's disease motor symptoms: vision, state of the art, and challenges ahead. J Parkinsons Dis 2021 Jul 16;11(s1):S35-S47 [FREE Full text] [CrossRef] [Medline]
  46. Linares-del Rey M, Vela-Desojo L, Cano-de la Cuerda R. Mobile phone applications in Parkinson's disease: a systematic review. Neurologia (Engl Ed) 2019 Jan;34(1):38-54 [FREE Full text] [CrossRef] [Medline]


EHEALS: eHealth Literacy Scale
FOG-Q: Freezing of Gait Questionnaire
MOCA: Montreal Cognitive Assessment
PD: Parkinson disease
UPDRS III: Unified Parkinson’s Disease Rating Scale III
SUS: System Usability Scale


Edited by M Focsa; submitted 29.05.22; peer-reviewed by W Maetzler, J Brooke; comments to author 03.07.22; revised version received 28.08.22; accepted 03.09.22; published 25.10.22

Copyright

©Jonas Bendig, Anja Spanz, Jana Leidig, Anika Frank, Marcus Stahr, Heinz Reichmann, Kai F Loewenbrück, Björn H Falkenburger. Originally published in JMIR Formative Research (https://formative.jmir.org), 25.10.2022.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.