This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.
Digital registries have been shown to provide an efficient way of gaining a better understanding of the clinical complexity and long-term progression of diseases. The paperless method of electronic data capture (EDC) during a patient interview saves both time and resources. In the prospective multicenter project “Digital Dementia Registry Bavaria (digiDEM Bayern),” interviews are also performed on site in rural areas with unreliable internet connectivity. It must be ensured that EDC can still be performed in such a context and that there is no need to fall back on paper-based questionnaires. In addition to a web-based data collection solution, the EDC system REDCap (Research Electronic Data Capture) offers the option to collect data offline via an app and to synchronize it afterward.
The aim of this study was to evaluate the usability of the REDCap app as an offline EDC option for a lay user group and to examine the necessary technology acceptance of using mobile devices for data collection. The feasibility of the app-based offline data collection in the digiDEM Bayern dementia registry project was then evaluated before going live.
An exploratory mixed methods design was employed, combining an on-site usability test using the “Thinking Aloud” method with an online questionnaire that included the System Usability Scale (SUS). The acceptance of mobile devices for data collection was surveyed based on five categories of the technology acceptance model (TAM).
Using the “Thinking Aloud” method, usability issues were identified and solutions were derived accordingly. Evaluation of the REDCap app resulted in a SUS score of 74, which represents “good” usability. The evaluation of the technology acceptance questionnaire indicates that the lay user group is open to mobile devices as interview tools.
The usability evaluation results show that the data collection partners in the digiDEM project, as a lay user group, can generally handle the REDCap app well. The usability evaluation identified positive aspects and also uncovered usability issues relating to the REDCap app. In addition, the current technology acceptance in the sample showed that heterogeneous groups of different ages with diverse experience in handling mobile devices are ready to use app-based EDC systems. Based on these results, it can be assumed that the offline use of an app-based EDC system on mobile devices is a viable solution for collecting data in a decentralized registry-based research project.
Patient registries have proven to be valuable tools for public health surveillance and research studies [
To successfully integrate an EDC system into a registry-based research project, it is essential to coordinate the system requirements and usability with future users in advance [
The project “Digital Dementia Registry Bavaria (digiDEM Bayern)” [
Thus, the objectives of this study were to evaluate the usability of the REDCap app as an option for offline electronic data collection and to examine whether the target user group has the necessary technology acceptance for data collection using mobile devices. This study should help to identify potential barriers and evaluate the feasibility of the REDCap app in a registry study with a large number of distributed nonexpert data collection partners and the need for offline on-site data collection.
To foster dementia research, the registry project digiDEM Bayern [
In the digiDEM project, data collection for the registry is carried out by approximately 300 so-called “digiDEM partners,” who are employees (such as nursing assistants and home health aides) from, for example, community counseling bodies, memory clinics, daycare facilities, or outpatient care organizations distributed across Bavaria that counsel or care for people with dementia and family caregivers. During face-to-face surveys involving interviews of people with dementia and their caregivers, the digiDEM partners enter various types of information [
In digiDEM, we use REDCap as our EDC system [
In our usability evaluation, the app was used “out of the box” as offered by REDCap. The app is provided in English by default, but a language file can be activated to translate the user interface. Apart from activating the current German language pack “German (DE),” no further adjustments were made to the REDCap app; the rendering style was retained at the default setting “New render form.” Unfortunately, the German language files do not yet cover a full user interface translation (for some screenshots, see
Participants were provided with a tablet (Apple iPad Air 2 with iOS 13.7) to carry out the predefined tasks. The REDCap app (version 5.9.6) was preinstalled on the tablet and a dummy registry project with the test questionnaire for the usability study was set up. A user account was set up in advance for the participants. The test questionnaire contains a subset of questions from the original digiDEM questionnaire [
We performed a mixed methods study in our usability evaluation with an exploratory sequential design [
Using a mixed methods approach, we were able to triangulate several data sources and hence consider the research question from different perspectives [
The test procedure consisted of two sequential independent parts, as illustrated in
We pretested the test procedure with four scientific project members to determine its suitability for obtaining rich data to address the proposed research objectives. Furthermore, technical and operational problems were addressed so that they could be excluded during the test.
Before starting the test procedure, the participants were informed about the app’s purpose and the test procedure. A demonstration video was produced in advance and shown before the test to familiarize participants with the Thinking Aloud method. In addition, a test manual was created, including all questions (for the participant) and answers (for the simulated patient) for the tasks in the test survey (
Overview of the systems and methods used during the test procedure. SUS: System Usability Scale; TAM: technology acceptance model; REDCap: Research Electronic Data Capture.
Quantitative usability questionnaires cannot provide precise information about why a participant has rated usability in a particular way, and no direct usability problems can be derived from these responses [
For the Thinking Aloud test, a digiDEM on-site interview situation was simulated. The participant had to enter the data into the app while interviewing people with dementia. The interviewee was simulated by a research assistant and was the same person for all participants. In the test survey, the participant had to complete three predefined tasks in the REDCap app. The tasks increased in complexity and represented realistic examples of tasks in the digiDEM data collection pathway. The first task required offline data collection in the form of a baseline interview, which included questions such as “What is your marital status?” or “Is there a medically confirmed diagnosis of dementia?” (
After completing the three tasks, the participant had to fill out an online questionnaire by means of the SoSci Survey tool [
The sociodemographic part included three closed questions on age, gender, and experience with technical devices. The SUS is a standardized questionnaire that provides a valid and reliable measurement of usability [
To rule out the possibility that a dismissive attitude toward mobile devices for data collection leads to poor usability, we evaluated technology acceptance based on the TAM. According to Davis [
We took care that the sociodemographic characteristics of the sample were as broadly distributed as possible to guarantee validity and trustworthiness of the qualitative data [
We estimated requiring a minimum sample size of 12 participants based on our test procedure for a proper usability evaluation. Because the Thinking Aloud method provides a rich source of qualitative data [
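One common way to reason about usability-test sample sizes (offered here as general context, not necessarily the estimate the project used) is Nielsen and Landauer’s problem-discovery model, in which the expected proportion of problems found grows as 1 − (1 − p)^n for n participants:

```python
def proportion_found(n, p=0.31):
    """Expected proportion of usability problems uncovered by n test
    participants under the Nielsen-Landauer problem-discovery model,
    1 - (1 - p)**n, where p is the probability that a single
    participant reveals a given problem (p ~ 0.31 is the often-cited
    average from Nielsen's data)."""
    return 1 - (1 - p) ** n

for n in (5, 8, 12):
    print(f"{n:2d} participants: ~{proportion_found(n):.0%} of problems found")
```

Under the often-cited p ≈ 0.31, five participants would be expected to reveal roughly 84% of the problems and twelve participants close to 99%, which is consistent with treating 12 as a sufficient minimum for problem discovery.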
Participants in the study were selected from digiDEM partners who will eventually carry out the data collection as part of the digiDEM project. There were no limitations based on age, profession, or experience with information technology. Because digiDEM partners who have already gained experience with the REDCap app or another EDC system had to be excluded, we did not include memory clinic facilities that had participated in an earlier study.
The qualitative evaluation of the Thinking Aloud results was based on content analysis. Therefore, the participants were filmed while performing the tasks and a screen capture video of the tablet was recorded. All recordings of the Thinking Aloud test were transcribed verbatim and analyzed according to the structured content analysis method developed by Mayring [
To ensure the trustworthiness of the qualitative data, we followed the checklist drawn up by Elo et al [
Subcategories for the main categories were defined during the analysis. This category system served as the basis for coding the remaining transcripts. Given the descriptive nature of the data, additional subcategories emerged during coding. This process continued until saturation of the category system was achieved [
Because issues in the category “usability problems” mainly influence software usability, these statements were weighted by two independent researchers (MR, MH) according to the severity rating formulated by Nielsen [
The SUS was evaluated using Brooke’s evaluation scheme [
Before this study, approval was obtained from the institutional review board, the Committee on Research Ethics of the Friedrich-Alexander-University Erlangen-Nürnberg (Germany), following all applicable regulations (346_20 Bc). Informed consent was obtained in writing from all participants beforehand. Participation in the study was voluntary and no incentives were offered for participating.
In total, 12 participants took part in our usability study (6 men and 6 women). The participants were distributed over different types of institutions: community counseling (2), support groups (1), flat-sharing communities (1), daycare facilities (2), outpatient care organizations (2), geriatric rehabilitation facilities (2), and research institutes (2). The age of the participants covered all five age groups: 18-24 years, 25-34 years, 35-44 years, 45-59 years, and >60 years (
When asked about experience with mobile devices (personal or professional), 10 out of 12 participants mentioned experience with a smartphone, half of the sample had experience using tablets, and one participant had no experience with any of the listed devices (
None of the participants had any experience with EDC systems or registry-based research studies. They were medical assistants, nursing assistants, home health aides, and volunteer assistants whose primary role is caring for or counseling people with dementia and their family caregivers.
Characteristics of the participants: age group, gender, and device experience.
Characteristics | Women (n=6), n | Men (n=6), n | Total (N=12), n
Age group (years)
18-24 | 1 | 0 | 1
25-34 | 2 | 2 | 4
35-44 | 0 | 1 | 1
45-59 | 2 | 3 | 5
>60 | 1 | 0 | 1
Device experience
Smartphone | 6 | 4 | 10
Tablet | 2 | 4 | 6
Desktop-PC | 4 | 4 | 8
Laptop | 6 | 3 | 9
None | 0 | 1 | 1
The time it took participants to complete the three tasks in the REDCap app varied from 19 to 27 minutes. The coding of the Thinking Aloud method transcripts resulted in a total of 160 statements coded, including 44 positive aspects, 57 suggestions for improvement, 50 usability problems, and 9 functionality problems (all statements were counted individually). The coded transcript can be found in
A total of 44 positive statements could be identified. As shown in
Seven out of 12 participants were optimistic about the app’s learnability and navigation: “Once you’ve done it a couple of times, you’re good with it” or “At the beginning, I was a bit confused...but now I understand it better already.” One research partner described the first steps from logging in to collecting data as “so even for me so far, foolproof.” Another positive aspect was the app’s feedback function for the project’s own built-in plausibility checks and warning messages: “Yes, that is also very helpful in any case, that you get the error message right away.” Feedback from the system such as receiving green symbols after successful saving of the questionnaires gave the participants a sense of security: “Yes and now it’s green. So, I assume that everything is saved.” In addition, some participants described the app as “clearly designed” and “well structured.”
Distribution of the subcategories for “positive aspects.”
Subcategory (positive aspects) | Participants, n |
Learnability | 7 |
Navigation | 7 |
Feedback | 6 |
Instructions | 5 |
Design | 4 |
Structure | 3 |
Two functionality problems were identified that were seen as most important. The first occurred while synchronizing the data collected offline. Two participants were unable to transfer the data. One participant commented: “‘Stop sending modified records to server’ [warning message in REDCap]—okay, let’s see what’s wrong. This should not happen, right?” Because the detailed description of the problem was in English, the participants could not solve it themselves. The problem arose because no data synchronization had occurred before the data collection. Therefore, the project in the app must be synchronized before the offline survey is carried out to prevent an interruption. The second problem became evident in the third task. The participant was supposed to conduct a follow-up interview in the record of the baseline interview (task 1). However, the participant created a new record and collected the follow-up interview data for a new study participant. The REDCap app should not offer this option as it requires further action to link the data to the existing data set from the baseline interview.
Special attention was paid to the category “usability problems,” mainly influencing the app’s usability. A total of 50 statements were categorized into 6 subcategories.
The REDCap app offers the option of translating the app’s interface using a language file. Nevertheless, not all terms have yet been translated into German. Some error and warning messages from the system are still in English, such as the message that appears when a user leaves a questionnaire without first saving it (
Categories of usability problems and the derived importance discovered in the test.
Usability problem | Severity rating | Number of mentions | Severity score |
Language | 3 | 6 | 18 |
Feedback | 3 | 4 | 12 |
Perceived offer character | 2 | 6 | 12 |
Inconsistent interaction design | 2 | 4 | 8 |
Navigation | 1 | 7 | 7 |
Knowledge error | 1 | 5 | 5 |
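The severity scores in the table are simply the product of Nielsen’s severity rating and the number of participants who mentioned the problem; this weighting can be reproduced in a few lines (a sketch using the values from the table above):

```python
# Severity score = Nielsen severity rating x number of mentions, the
# weighting used to prioritize the usability problems in the table.
problems = {
    "Language": (3, 6),
    "Feedback": (3, 4),
    "Perceived offer character": (2, 6),
    "Inconsistent interaction design": (2, 4),
    "Navigation": (1, 7),
    "Knowledge error": (1, 5),
}

severity_scores = {
    name: rating * mentions for name, (rating, mentions) in problems.items()
}
for name, score in sorted(severity_scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score}")
```

With this weighting, “Language” tops the list (3 × 6 = 18) even though “Navigation” was mentioned by more participants, because its individual impact on task completion was rated higher.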
Screenshot of the REDCap app with a mix of German and English.
One participant was critical of the increased time required to interpret the English text: “I must ensure that I manage my time well and therefore I can’t constantly think about what that means.” The mixing of German and English bothered two users and caused some confusion: “It’s still a bit confusing. A few times it’s in German, then again in English. You have to switch quickly in your mind.” Notably, it was participants from the two youngest age groups, who also achieved a lower SUS score, who complained about the language barrier.
The lack of information given to users about what is happening in the app (“feedback”) led to confusion during the survey and delayed task processing: “So here I’m not sure how to proceed.” This also led to uncertainty such as with regard to whether or not the synchronization task was successful: “For me, it’s unfortunately not apparent whether the data has been uploaded or not.” It is noteworthy that this usability problem only affected the participants who had no experience using tablets.
Participants were sometimes unaware of a function behind an interactive element such as a button (“perceived offer character,” ie, its perceived affordance). For example, after logging into the app, the user must actively select a project to collect data, even if the user had been authorized only for one project. One participant described this situation as follows: “Next I go to ‘My Projects’ [4 second pause] Okay, the button’s missing, or I’m really having a blackout.” Especially in the older age group (45-59 years), buttons not being recognized as functions was a recurring problem.
In some situations, the participants expected a different function based on the design (“inconsistent interaction design”). The field type “date” was especially challenging for four participants. As shown in
Screenshot (in English and German) of a question with the field type "date".
Although many participants appreciated the navigation in the app, they found some navigation procedures too complicated (“navigation”). In particular, the pathway to logging out of the app was felt to be too cumbersome to find: “So how do I get out of here now?” Some usability problems were also caused by knowledge gaps (“knowledge error”): “How do I see what survey date I’ve chosen?”
Some participants immediately provided suggestions for improvement after pointing out a problem. A total of 57 suggestions for improvement were identified, which were categorized into 5 subcategories (grouped by participants). Eight out of 12 participants suggested more explicit feedback, especially when synchronizing data during the second task: “Just a confirmation, for example, a pop-up message ‘Data successfully transferred,’ that I know I can go back.” Seven participants requested a “notes” field to collect additional data: “I would find it helpful to have the option to make notes quickly.” REDCap offers the function “field notes,” but it was not intuitive for the participants to find. Furthermore, 4 of the 8 participants would have found it more intuitive if they could have gone directly to the next question using the enter key: “For me, it would be helpful that it then jumps to the next question.” In addition, 5 out of 12 participants suggested an easier way to log out: “Simple ‘logout’ would have been clearer.” Four participants proposed a more flexible option for language selection, either to switch entirely to German or to have more language options for colleagues whose native language is neither German nor English: “I also have colleagues who speak German but come from another country. So maybe another translation is necessary.” Moreover, 3 out of the 12 participants suggested more color tones and graphic highlighting in the design. Another participant would have preferred an indication of the progress of data collection: “for example, ‘You have completed 18 of 20 survey forms’.”
Based on the usability problems found and suggestions for improvement, we identified implications and recommendations (
There are short-term solutions that the project team can provide, such as targeted user training for the identified issues or providing a test environment for users to familiarize themselves with the system. There is also long-term optimization potential that should be addressed by REDCap’s developers, such as including a user expertise-based help option within the app.
Identified problems of the usability test, and short- or long-term solutions.
Problem | Short-term solution (by the project) | Long-term optimization (by the developers)
Usability problems
Language | User training; provide a document with translations and explanations; check and adjust existing REDCap language file | Simplified language selection; complete translation, including system messages and synchronization report
Feedback | User training; provide a test system | User expertise-based help
Perceived offer character | User training; provide a test system | Optimization of user interface
Inconsistent interaction design | User training; provide a test system | Enable input by typing or hide input field; provide keyboard type based on the field type (eg, numeric keyboard)
Navigation | User training; create a short paper-based how-to guide | Simplification of navigation (eg, log out)
Knowledge error | User training; provide filling out instructions directly in the questionnaire | User expertise-based help
Functionality problems
Data synchronization | User training; include a note at the end of the questionnaire to regularly synchronize the records | Sending notifications on the device in case of nonsynchronized records
Follow-up interview | User training; create a short paper-based how-to guide for follow-up interviews | Optimized visit overview
The overall SUS score of the REDCap app was 74. According to the Bangor classification, this represents “good usability” [
The majority of participants indicated that they would use the REDCap app frequently (8 of 12 participants). Furthermore, 8 participants found the app easy to use. Ten out of 12 participants could imagine that most people can quickly learn to use REDCap. Concerning the need for technical assistance in using the app, the results indicated good usability, as 10 of the 12 respondents disagreed or strongly disagreed that help was needed. Nevertheless, half of the participants felt the system was unnecessarily complex and 5 participants disagreed with the statement about feeling confident using the app. None of the negatively worded questions received a “strongly agree” rating.
Detailed System Usability Scale (SUS) scores for each participant (N=12).
SUS item | P1 (25-34) | P2 (18-24) | P3 (45-59) | P4 (45-59) | P5 (45-59) | P6 (25-34) | P7 (25-34) | P8 (35-44) | P9 (25-34) | P10 (45-59) | P11 (>60) | P12 (45-59) | Mean (SD) | Total | SUS score
I think that I would like to use this system frequently | 2 | 2 | 3 | 2 | 3 | 3 | 4 | 2 | 3 | 3 | 4 | 4 | 2.91 (0.79) | 35.0 | 87.5
I found the system unnecessarily complex | 2 | 3 | 2 | 1 | 2 | 3 | 4 | 4 | 4 | 1 | 4 | 3 | 2.75 (1.13) | 33.0 | 82.5
I thought the system was easy to use | 3 | 2 | 4 | 2 | 1 | 4 | 3 | 2 | 3 | 3 | 3 | 3 | 2.75 (0.86) | 33.0 | 82.5
I think that I would need the support of a technical person to be able to use this system | 3 | 3 | 4 | 3 | 1 | 4 | 4 | 3 | 3 | 1 | 4 | 3 | 3.00 (1.04) | 36.0 | 90.0
I found the various functions in this system were well integrated | 3 | 2 | 4 | 2 | 4 | 3 | 3 | 2 | 4 | 3 | 3 | 4 | 3.08 (0.79) | 37.0 | 92.5
I thought there was too much inconsistency in this system | 2 | 4 | 2 | 4 | 2 | 3 | 3 | 3 | 4 | 1 | 3 | 3 | 2.83 (0.75) | 34.0 | 85.0
I would imagine that most people would learn to use this system very quickly | 3 | 3 | 4 | 4 | 3 | 4 | 3 | 2 | 2 | 4 | 3 | 4 | 3.25 (0.75) | 39.0 | 97.5
I found the system very cumbersome to use | 2 | 3 | 4 | 3 | 2 | 4 | 3 | 3 | 4 | 4 | 4 | 4 | 3.33 (0.77) | 40.0 | 100
I felt very confident using the system | 2 | 1 | 3 | 1 | 2 | 4 | 3 | 2 | 3 | 4 | 4 | 3 | 2.66 (1.07) | 32.0 | 80.0
I needed to learn a lot of things before I could get going with this system | 2 | 3 | 2 | 4 | 2 | 4 | 3 | 3 | 3 | 4 | 4 | 4 | 3.16 (0.83) | 38.0 | 95.0
Total | 24 | 26 | 32 | 26 | 22 | 36 | 33 | 26 | 33 | 28 | 36 | 35 | N/Aa | N/A | N/A
SUS score | 60 | 65 | 80 | 65 | 55 | 90 | 82.5 | 65 | 82.5 | 70 | 90 | 87.5 | N/A | N/A | N/A
aN/A: not applicable.
An extended TAM was used to evaluate how the participants accept mobile devices for data collection. Of a maximum of 75 points (15 questions multiplied by the highest response value of 5), the participants scored an average of 60.5 points (SD 7.3). The questionnaire items were grouped into the following underlying factors, each scored as the mean of its items: perceived ease of use, perceived usefulness, social influence, facilitating conditions, and anxiety. The boxplot in
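The scoring described here, a 15-item total (maximum 75 points) plus per-factor average scale values, can be sketched as follows; note that the item-to-factor grouping and the response values below are hypothetical, since the paper does not reproduce the exact assignment:

```python
from statistics import mean

# Hypothetical responses (1-5) for one participant; the exact
# assignment of the 15 items to the 5 TAM factors is assumed here.
factors = {
    "perceived ease of use": [4, 5, 4],
    "perceived usefulness": [5, 4, 4],
    "social influence": [3, 4, 3],
    "facilitating conditions": [4, 4, 5],
    "anxiety": [2, 1, 2],
}

total_score = sum(sum(items) for items in factors.values())  # out of 75
factor_means = {name: mean(items) for name, items in factors.items()}
print(total_score)   # overall acceptance score for this participant
print(factor_means)  # per-factor average scale values
```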
Distribution of the Technology Acceptance Model categories.
The success of a registry depends on the quality of the data collected. Most registry studies use pilot testing to evaluate the correct implementation of the questionnaires used in the EDC system. A usability evaluation of whether users can cope with the EDC system in the intended environment is often ignored. This is even more important because usability problems can affect whether an app is ultimately adopted or abandoned [
Most of the digiDEM partners had not yet had any experience in registry research. They are therefore considered to be “lay users,” as they are neither familiar with registry research studies nor with using an app on a mobile device for data collection. Conventional EDC systems are intended to be used by professional registry research staff at a clinic site [
Our usability evaluation helped us to identify issues that could affect the usability of offline data collection with the REDCap app (
The language by which a system communicates with a user can have a major influence on usability [
In their study about the impact of usability on software design, Juristo et al [
All field types used in the questionnaire should be tested, especially when using nontext-based items such as date selection fields or visual scales [
The data synchronization process was identified as a major functionality usability problem. Uploading the data collected offline was also noted as problematic in a study of implementation strategies for the REDCap app by McIntosh et al [
Among the positive aspects of the qualitative usability evaluation, learnability was particularly highlighted. The participants who were not experienced in handling mobile devices became more familiar with the app and the tablet from task to task. The results of the quantitative SUS questionnaire confirmed the positive statements. For example, learnability was also one of the highest-rated items in the SUS. Given the widespread use of the SUS, a comparison with existing SUS study results is possible [
Good usability cannot always predict the likelihood of future use, as other factors may also play a role. For example, anxiety can lead to a system being perceived as not easy to use, even if it has been designed to be user-friendly [
By evaluating the usability and acceptance of app-based offline data collection at an early stage in our project, we were able to identify usability problems that need to be considered when introducing such a data collection method. As Qiu and Yu [
The advantages of a system with good usability, such as enhanced efficiency and user acceptance, less training effort, or higher data quality, are indisputable [
Only a few studies such as those by Ndlovu et al [
Walden et al [
Three limitations of the study should be acknowledged. First, the sample size (N=12) could be seen as too small for meaningful assessments to be extrapolated. Even though larger samples are usually recommended for quantitative studies, Tullis and Stetson [
Second, participation in the study was voluntary. It can therefore be presumed that mainly participants who had experience with mobile devices or who were technically interested took part in the study. As shown by the participants’ characteristics (
Third, the usability testing was laboratory-based. We replicated a real scenario [
Offline registry data collection can be made more efficient through EDC systems, but attention must be paid to the usability of these systems. Despite the widespread use of usability tests in the health care and app environment, usability evaluations in the field of electronic data collection in registry-based research have so far remained scarce. Our study shows that it is worthwhile to conduct a usability evaluation of the EDC system considering future users and the project environment. Using a mixed methods approach, we identified positive and negative aspects regarding the usability of an EDC app for offline data collection. By addressing these aspects, the registry project digiDEM Bayern can avoid pitfalls and realize the benefits of EDC systems, even in areas where using web-based EDC systems is not viable due to unreliable internet connectivity. The out-of-the-box use of the REDCap app resulted in a good usability rating, which can be further improved by addressing the identified issues by means of user training of digiDEM partners and improvements on the part of REDCap’s developers. The technology acceptance in the sample showed that heterogeneous groups of different ages with varying experience in handling mobile devices are open to the use of app-based EDC systems. Based on these results, it can be assumed that the offline use of an app-based EDC system on mobile devices is a viable solution for collecting data in a registry-based research project.
Test survey in the REDCap mobile app.
Test manual with tasks and answers for the test survey.
Introductory training to the REDCap mobile app.
Online questionnaire with sociodemographic data, System Usability Scale, and technology acceptance model.
Thinking Aloud results (encoded segments).
digiDEM Bayern: Digital Dementia Registry Bavaria
EDC: electronic data capture
REDCap: Research Electronic Data Capture
SUS: System Usability Scale
TAM: technology acceptance model
This study was performed in (partial) fulfillment of the requirements for obtaining the degree “Dr. rer. biol. hum.” from the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) (MR). The project digiDEM Bayern is funded by the Bavarian State Ministry of Health and Care as part of the funding initiative “BAYERN DIGITAL II” (funding code G42d-G8300-2017/1606-83).
MR and MH planned, designed, and performed the usability evaluation. MR assisted during the design and evaluation phases. MR advised in the preparation phase of the study and assisted in collecting the evaluation data. MR coordinated input from the coauthors. MR participated in writing the paper. EG, PR, HP, and MR revised the first draft and provided valuable input and comments.
None declared.