Published in Vol 5, No 11 (2021): November

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/26181.
Usability-In-Place—Remote Usability Testing Methods for Homebound Older Adults: Rapid Literature Review

Review

1Department of Pharmacy Practice, College of Pharmacy, Purdue University, Indianapolis, IN, United States

2Indiana University Center for Aging Research, Regenstrief Institute, Indianapolis, IN, United States

3School of Medicine, Indiana University, Indianapolis, IN, United States

4Center for Health Innovation and Implementation Science, Indiana University, Indianapolis, IN, United States

5Department of Health and Wellness Design, School of Public Health-Bloomington, Indiana University, Bloomington, IN, United States

Corresponding Author:

Jordan R Hill, PhD

Department of Pharmacy Practice

College of Pharmacy

Purdue University

640 Eskenazi Ave

Indianapolis, IN, 46202

United States

Phone: 1 7655438559

Email: hill265@purdue.edu


Background: Technology can benefit older adults in many ways, including by facilitating remote access to services, communication, and socialization for convenience or out of necessity when individuals are homebound. As people, especially older adults, self-quarantined and sheltered in place during the COVID-19 pandemic, the importance of usability-in-place became clear. To understand the remote use of technology in an ecologically valid manner, researchers and others must be able to test usability remotely.

Objective: Our objective was to review practical approaches for and findings about remote usability testing, particularly remote usability testing with older adults.

Methods: We performed a rapid review of the literature and reported on available methods, their advantages and disadvantages, and practical recommendations. This review also reported recommendations for usability testing with older adults from the literature.

Results: Critically, we identified a gap in the literature—a lack of remote usability testing methods, tools, and strategies for older adults, despite this population’s increased remote technology use and needs (eg, due to disability or technology experience). We summarized existing remote usability methods that were found in the literature as well as guidelines that are available for conducting in-person usability testing with older adults.

Conclusions: We call on the human factors research and practice community to address this gap to better support older adults and other homebound or mobility-restricted individuals.

JMIR Form Res 2021;5(11):e26181

doi:10.2196/26181




Introduction

The Need for Remote Operations

Technology can support access to services, communication, and socialization for older adults and others whose mobility is restricted due to health-related risks such as susceptibility to disease (eg, COVID-19), disability, and a lack of resources (eg, transportation). However, the delivery, support, and evaluation of technologies that are used by homebound or mobility-restricted individuals require remote operations, including remote usability testing. Herein, we review the unique needs and technology opportunities of homebound older adults and the literature on remote usability testing methods. Based on our findings, we identified a gap in guidance for remote usability testing with older adults. Therefore, we call on relevant research and practice communities to address this gap.

Supporting Homebound Individuals With Technology

Over 2 million Americans are homebound due to an array of social, functional, and health-related causes, and this number is projected to grow as the size of the older population increases [1]. Situational factors such as inclement weather and, on a larger scale, pandemics or national disasters can also temporarily render individuals homebound. For example, in March 2020, the US Centers for Disease Control and Prevention [2] warned older adults to remain at home due to the disproportionate COVID-19–related health risks that they face. Prior to the pandemic, an estimated 1 in 4 older US adults were already socially isolated, and this rate has likely increased [3].

People who shelter in place or stay home for other reasons may turn to technology to access remote services, including remote banking, grocery shopping, and medical care services. The prevalence of these physically distant interactions is reportedly on the rise [4], especially for certain services. A prominent example is the increased frequency of patients’ telemedicine visits with health care professionals—a form of telehealth that has been long available but whose usage has increased dramatically in the United States, as the COVID-19 pandemic resulted in changes to federal reimbursement policies in March 2020 [5].

Testing Technology With Homebound Technology Users

When technology users are homebound, researchers and care practitioners who intend to test a technology’s usability in an ecologically valid manner must either travel to the user’s home or conduct remote testing. Travel is not always an option. Safety, health, or personal reasons may prevent researchers from entering a home or community. Travel may be too costly or otherwise impractical, or participants may live in an area that is inaccessible to the project team. During the COVID-19 pandemic, for example, academic and practice-based project teams anecdotally reported barriers to in-person visits, including the need to distance from infected and at-risk individuals, team members working from home, and pressure to reduce travel expenses. Even when in-person visits are possible, remote testing can be more convenient and cost-efficient for all parties involved.


Methods

We performed a rapid review of studies involving remote usability testing methods for all users and those specifically for older adults and summarized their findings. Rapid reviews are an accepted knowledge synthesis approach that has become popular for understanding the most salient points on emerging or timely topics [6]. Rapid reviews typically do not include an exhaustive set of studies, do not involve formal analyses of study quality, and report findings from prior studies via narrative synthesis [7]. The primary goal of this review was to identify methods for performing remote usability assessments with older adults (if any existed). Secondarily, we wished to summarize the literature on existing remote usability methodologies for any population and existing guidelines on performing in-person usability testing with older adults. Sources for the second goal were largely retrieved while searching for sources to support the primary goal and via a secondary search within Google Scholar.

Our rapid review began with a keyword search on the Google Scholar and Science Direct scholarly databases. This was followed by a supplementary keyword search in top human factors journals and proceedings. Both searches are summarized in Table 1.

Table 1. Keywords that were searched for the rapid review.

Primary search
  • Google Scholar (database): elderly remote usability, senior remote usability, and older adult remote usability
  • Science Direct (database): elderly remote usability, senior remote usability, and older adult remote usability

Secondary search
  • Ergonomics: elderly remote usability, senior remote usability, and older adult remote usability
  • Human Factors: elderly remote usability, senior remote usability, and older adult remote usability
  • Applied Ergonomics: elderly remote usability, senior remote usability, and older adult remote usability
  • Human Factors and Ergonomics Society Conference Proceedings: elderly remote usability, senior remote usability, and older adult remote usability
  • International Journal of Human-Computer Interaction: elderly remote usability, senior remote usability, and older adult remote usability
  • International Journal of Human-Computer Studies: elderly remote usability, senior remote usability, and older adult remote usability
  • Gerontechnology: elderly remote usability, senior remote usability, and older adult remote usability
  • Google Scholar (database): usability older adults, elderly usability, senior usability, and remote usability

We began with Google Scholar to take advantage of its relevance-based sorting feature and broader inclusion of diverse disciplines, academic and practice-based publications, and grey literature [8]. However, we conducted further searches because of the known limitations of Google Scholar, such as its lack of transparency and lack of specialization [9].

In the interest of establishing a starting point for understanding remote usability testing with older adults, we had broad inclusion criteria and did not restrict studies based on their date of publication or an analysis of their quality or peer-review status. We also defined remote usability broadly as usability assessments of participants (users) who were in separate locations from the researchers or practitioners. Duplicate studies, as well as studies in which usability was assessed by an expert (eg, heuristic analysis on a website) on behalf of older adults instead of through direct participant feedback, were excluded.

Two authors (JRH and JCB) performed the search in Google Scholar while one author (JCB) performed the search in Science Direct and the human factors sources. Both authors took notes in a shared cloud-based document. We chose a stopping rule based on the assumption that a narrative synthesis of literature is a form of qualitative content analysis [10]. Therefore, we concluded our search when we reached theoretical saturation [11]—a qualitative analysis stopping rule that means that the search continues until results begin to repeat and negligible new categories of information are produced through additional searching.


Results and Discussion

Summary of the Search Results

Of all of the sources found, 33 were screened in-depth (18 on remote usability methods and 15 on usability testing with older adults), and 21 were included in this review (16 on various remote usability methods and 5 on usability testing with older adults).

Importantly, sources that provided guidance or information on remote usability testing with older adults (the primary goal of this review) were not found. Therefore, we organized the results according to our secondary goals—summarizing existing methods for remote usability testing and outlining existing guidelines for in-person usability testing with older adults. In this Results and Discussion section, we combined the results with our interpretations and discussion to adhere to conventions for narrative reviews. We also present our overall conclusions in the Conclusions section.

Usability-In-Place: The State of the Practice of Remote Usability Testing

Studies on remote usability testing date back to the 1990s [12,13]. Since then, most traditional in-person usability evaluation methods have been attempted remotely. Remote moderated testing has been supported by advances in internet-based software, such as WebEx and NetMeeting, which permit simultaneous video and audio transmissions, screen sharing, and remote control [14,15]. Studies have also used novel methods, such as using virtual reality to simulate laboratory usability testing environments [16] and remotely capturing eye-tracking data [17]. Technologies for unmoderated testing have also evolved, as described elsewhere [18].

Asynchronous methods have long been used to overcome the barriers of time and space. Such methods include conducting self-administered survey questionnaires, using user diaries and incident reports, and obtaining voluntary feedback [19]. Studies have also used activity logging to passively collect use data for analyzing usability [20].
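
To make the activity logging approach concrete, the following is a minimal sketch (in TypeScript, for a web interface) of how user actions might be captured passively for later usability analysis. The event schema, element identifiers, and collection endpoint are illustrative assumptions, not instruments from the reviewed studies.

```typescript
// Minimal sketch of passive activity logging for remote usability analysis.
// The event schema and endpoint below are illustrative assumptions.

interface UsageEvent {
  timestamp: string; // ISO 8601 time of the interaction
  type: "click";
  target: string;    // a stable identifier for the UI element
}

const eventLog: UsageEvent[] = [];

// Record every click, tagging it with the element's id (or tag name as a fallback).
document.addEventListener("click", (e) => {
  const el = e.target as HTMLElement;
  eventLog.push({
    timestamp: new Date().toISOString(),
    type: "click",
    target: el.id || el.tagName,
  });
});

// Periodically ship accumulated events to a collection endpoint
// (the URL is a placeholder) and clear the buffer.
setInterval(() => {
  if (eventLog.length === 0) return;
  navigator.sendBeacon("/usability/events", JSON.stringify(eventLog));
  eventLog.length = 0;
}, 30_000);
```

As the reviewed sources note, such logs scale to many users and are unobtrusive but cannot capture user intentions or surrounding context [20].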

The major remote usability testing methods are described in Table 2, along with key findings from the literature. An important replicated finding was that results from remote and in-person usability testing were generally similar, although significant differences can emerge under certain conditions, such as poor product usability or cognitively demanding testing tasks [21].

Table 2. Remote usability testing methods and key findings.

Synchronous remote testing [14,15,20-23]
Description: In-person testing is simulated by using video and audio transmissions and remote desktop access.
Key findings:
  • Nearly identical to conventional in-person testing (with comparable results) [14,21-23]
  • Indirect cues and context can be missed [20]
  • Participants can prefer remote testing to in-person testing [22]
  • Management challenges (eg, network issues, remote troubleshooting, and setup) [15,20,22]
  • Users take longer to complete tasks than during in-person testing [15]
  • Users make more errors than during in-person testing [15]

Web-based questionnaires or surveys [14,20,21]
Description: Users fill out web-based questionnaires as they complete tasks or after the completion of tasks.
Key findings:
  • More time-consuming for usersᵃ [14]
  • Less time-consuming for users than lab-based usability testing when usability is poorᵃ [21]
  • Overall usability rated lower when compared to lab-based usability testing [21]
  • Identifies fewer specific usability problems [14]
  • Enables the collection of data from many participants [20]
  • Validity problems with the self-report approach [20]

Postuse interview [24]
Description: Users are interviewed over the phone about the usability of a design (qualitative and quantitative data are collected) after they have completed tasks.
Key findings:
  • Beneficial for those with disabilities
  • Quantitative data collected are comparable to in-person testing data
  • Qualitative data are less rich compared to in-person testing data
  • In-person testing is better for formative testing; remote testing is better for summative evaluation

User-reported critical incidents or diaries [12,13,19,20]
Description: Users fill out a diary and take notes during a period of use or fill out an incident form when they identify a critical problem with an interface.
Key findings:
  • Able to capture most high- and moderate-severity incidentsᵃ [12,13]
  • Users report fewer low-severity incidents than experts [12,13]
  • Validity problems with self-reports [20]
  • Issues may be underreported compared to those reported via traditional methodsᵃ [19]

User-provided feedback [25]
Description: While completing timed tasks, users provide comments or feedback in a separate browser window. Once a task is complete, the user rates the difficulty of the completed task.
Key findings:
  • The percentage of participants who completed remote testing tasks was the same as the percentage who completed in-person testing tasks
  • No difference in the time taken to complete tasks
  • Able to capture rich qualitative information through typed comments
  • Less observation data captured compared to in-person testing
  • Captured fewer usability issues in some cases compared to in-person testing

Log analysis [20]
Description: The actions taken by the user (eg, clicks) are captured for future analysis.
Key findings:
  • Less intrusive to the user
  • Can collect data from many users
  • Unable to capture user intentions or additional context

ᵃConflicting evidence has been found to support both the statement and its opposite in the literature.

The following general benefits of remote usability testing methods were identified:

  • Does not require a facility, thereby reducing the time requirements of participants and evaluators and lowering costs [20]
  • Can recruit participants from a broader geographic vicinity, thereby allowing evaluators to collect results from a larger and more diverse group of people (including those living in other countries or rural areas or those who are otherwise isolated) [14,23]
  • Allows participants to test technologies in a more realistic environment. For example, Petrie et al [24] had people with disabilities perform remote usability testing from the comfort of their own homes. The benefits thereof include the use of a home-based environment that is almost impossible to perfectly replicate in a lab.

Several drawbacks were also described, as follows:

  • General agreement that remotely collecting data results in a loss of some of the contextual information and nonverbal cues from participants that are collected during in-person evaluations [15,20,22,24,25]
  • Remote usability methods (especially asynchronous methods) appear to result in the identification of fewer usability problems, cause users to make more errors during testing, and are more time-consuming for users [14,15]. Test participants have, however, identified about as many usability issues as evaluators, although the participants’ categorization of the identified problems was deemed not useful. In contrast, Tullis et al [25] did not observe these drawbacks when comparing lab-based and remote usability testing.
  • Dray and Siegel [20] also listed validity problems with self-report methodologies, the inability of log files to distinguish the cause of navigation errors, and management challenges related to troubleshooting network issues and ensuring system compatibility as other drawbacks of remote usability testing. 
  • Many of the factors that may affect the validity, reliability, or efficiency of remote usability testing have not been scientifically studied [26]. These include factors such as the characteristics of users (eg, age and literacy), the effect of slow or unstable internet, the type of devices being used, and testing tactics (eg, verbal, printed, or on-screen instructions).

No matter the method, remote usability testing also involves challenges in implementing the methods in natural contexts, namely home and community settings [27,28]. These challenges include recruiting a representative sample, especially among populations that may be less comfortable with certain technology, have lower literacy, or are mistrustful of research [26]. McLaughlin and colleagues [26] proposed strategies such as providing access to phone support prior to the start of any web-based testing.

Remote Usability Testing With Older Adults

Prior work on remote usability testing has been performed with convenience samples of college students [13,14] or healthier and younger adults recruited from workplaces [22,23,25]. We found no published instance of fully remote usability testing with older adults. Diamantidis et al [29] conducted a test of a mobile health system with older individuals with chronic kidney disease. Participants received an in-person tutorial of the system; they used the system at home, received physical materials by mail, and completed a paper diary. Afterward, they returned to complete an in-person satisfaction survey. Petrie et al [24] reported 2 case studies of remote usability testing—one with blind younger adults (n=8) and another with a more heterogeneous group of individuals with disabilities (n=51). They demonstrated the feasibility of remote testing and showed comparable results between in-person and remote testing, although in-person participants in the second study reported more usability problems with the tested website.

Others have described ways to improve in-person usability testing with older adults that may be transferable to remote methods. For example, touch screen devices and hardware that is selected for simplicity may produce better usability testing results with older adults [30-32] and can therefore reduce barriers to remote usability testing. Additionally, the use of large closed captions during a remote testing session has been recommended for older users with visual or hearing impairments. Holden [33] published a Simplified System Usability Scale that was modified for and tested with older adults and those with cognitive disabilities but did not demonstrate its use in remote testing.
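
For readers unfamiliar with the instrument mentioned above, the sketch below shows the conventional scoring rule for the standard 10-item System Usability Scale: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is multiplied by 2.5 to yield a 0-100 score. This illustrates the standard scale only; the scoring of Holden's Simplified System Usability Scale [33] is not reproduced here.

```typescript
// Scoring for the standard 10-item System Usability Scale (SUS).
// Shown for illustration only; Holden's Simplified SUS [33] modifies the
// items and response format, and its scoring is not reproduced here.

function scoreSus(responses: number[]): number {
  if (responses.length !== 10) {
    throw new Error("SUS requires exactly 10 item responses");
  }
  const contributions = responses.map((r, i) => {
    if (r < 1 || r > 5) throw new Error("Responses must be on a 1-5 scale");
    // Odd-numbered items (indices 0, 2, ...) contribute (response - 1);
    // even-numbered items contribute (5 - response).
    return i % 2 === 0 ? r - 1 : 5 - r;
  });
  // The sum of contributions (0-40) is scaled to a 0-100 score.
  return contributions.reduce((a, b) => a + b, 0) * 2.5;
}

// Example: an all-neutral response pattern (all 3s) yields a score of 50.
console.log(scoreSus([3, 3, 3, 3, 3, 3, 3, 3, 3, 3])); // 50
```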

Older adults in remote usability tests may also benefit from non–age-specific strategies for optimizing remote usability testing [34]. These recommendations, which are summarized in Figure 1, include mailing a written copy of instructions, conducting web-based training prior to testing sessions, and sending reminders.

Figure 1. General guidelines for conducting moderated remote usability testing (adapted from the Nielsen Norman Group [34]).

Conclusions

Our rapid review and synthesis of the literature revealed that remote usability testing still appears to be an emerging field [26] whose great potential is accentuated during major events such as the COVID-19 pandemic. Given its apparent advantages, validity, and feasibility, and the clear need for the method, the decision to pursue further development of and research on remote usability testing is straightforward.

The method, however, must be adapted to and tested with older adults. The use of technology for remote services among older adults in the United States has been increasing [35,36], as has older adults’ proficiency with internet-based technology [37]. A Pew Research Center national survey reported increases in internet use (from 12% to 67%) and the adoption of home broadband (from 0% to 51%) from 2000 to 2016, as well as increases in smartphone (from 11% to 42%) and tablet (from 1% to 32%) ownership from 2011 to 2016 [4]. However, the older adult population is diverse and has different needs from other groups when it comes to technology and its usability testing. US adults aged 65 years and older are more likely than their younger counterparts to experience difficulties with physical or cognitive function, including reduced memory capacity, stiff joints or arthritis, and vision or hearing disability [38,39]. These factors, along with discomfort with or reduced motivation to use technology, elevate the importance of usability testing [40] but, ironically, may also increase the difficulty of conducting it remotely. Additional recommendations and best practices will thus be needed to ensure effective and efficient remote usability testing with older adults.

We call on human factors, human-computer interaction, and digital health communities to further develop, describe, and test remote usability testing approaches that will be suitable across diverse populations, including older adults, those with lower literacy or health literacy, and individuals with cognitive or physical disabilities. Progress toward this goal will not only better support homebound or mobility-restricted individuals but may also improve the efficiency, ecological validity, and effectiveness of usability testing in general.

Acknowledgments

We thank Dr Anne McLaughlin and the members of the Brain Safety Lab for their input. The authors of this paper were supported by grant R01 AG056926 from the National Institute on Aging of the National Institutes of Health. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. We thank the three anonymous reviewers.

Authors' Contributions

JRH and RJH conceived this paper. All authors wrote and edited this paper and approved the final version.

Conflicts of Interest

None declared.

  1. Ornstein KA, Leff B, Covinsky KE, Ritchie CS, Federman AD, Roberts L, et al. Epidemiology of the homebound population in the United States. JAMA Intern Med 2015 Jul;175(7):1180-1186 [FREE Full text] [CrossRef] [Medline]
  2. COVID-19 risks and vaccine information for older adults. Centers for Disease Control and Prevention.   URL: https://www.cdc.gov/coronavirus/2019-ncov/need-extra-precautions/older-adults.html# [accessed 2020-08-27]
  3. Cudjoe TKM, Kotwal AA. "Social Distancing" amid a crisis in social isolation and loneliness. J Am Geriatr Soc 2020 Jun;68(6):E27-E29 [FREE Full text] [CrossRef] [Medline]
  4. Anderson M, Perrin A. Technology use among seniors. Pew Research Center. 2017 May 17.   URL: https://www.pewresearch.org/internet/2017/05/17/technology-use-among-seniors/ [accessed 2020-11-24]
  5. Wosik J, Fudim M, Cameron B, Gellad ZF, Cho A, Phinney D, et al. Telehealth transformation: COVID-19 and the rise of virtual care. J Am Med Inform Assoc 2020 Jun 01;27(6):957-962 [FREE Full text] [CrossRef] [Medline]
  6. Tricco AC, Antony J, Zarin W, Strifler L, Ghassemi M, Ivory J, et al. A scoping review of rapid review methods. BMC Med 2015 Sep 16;13:224 [FREE Full text] [CrossRef] [Medline]
  7. Grant M, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies. Health Info Libr J 2009 Jun;26(2):91-108 [FREE Full text] [CrossRef] [Medline]
  8. Yasin A, Fatima R, Wen L, Afzal W, Azhar M, Torkar R. On using grey literature and Google Scholar in systematic literature reviews in software engineering. IEEE Access 2020;8:36226-36243 [FREE Full text] [CrossRef]
  9. Shultz M. Comparing test searches in PubMed and Google Scholar. J Med Libr Assoc 2007 Oct;95(4):442-445 [FREE Full text] [CrossRef] [Medline]
  10. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O, Peacock R. Storylines of research in diffusion of innovation: a meta-narrative approach to systematic review. Soc Sci Med 2005 Jul;61(2):417-430. [CrossRef] [Medline]
  11. Strauss A, Corbin J. Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory, 2nd ed. Thousand Oaks, California: Sage; 1998.
  12. Castillo JC, Hartson HR, Hix D. Remote usability evaluation: Can users report their own critical incidents? 1998 Apr Presented at: CHI98: ACM Conference on Human Factors and Computing Systems; April 18-23, 1998; Los Angeles, California, USA p. 253-254. [CrossRef]
  13. Hartson HR, Castillo JC. Remote evaluation for post-deployment usability improvement. 1998 May Presented at: AVI98: Advanced Visual Interface '98; May 24-27, 1998; L'Aquila, Italy p. 22-29. [CrossRef]
  14. Andreasen M, Nielsen H, Schrøder S, Stage J. What happened to remote usability testing? An empirical study of three methods. 2007 Presented at: CHI '07: SIGCHI Conference on Human Factors in Computing Systems; April 28 to May 3, 2007; San Jose, California, USA p. 1405-1414. [CrossRef]
  15. Thompson KE, Rozanski EP, Haake AR. Here, there, anywhere: Remote usability testing that works. 2004 Oct Presented at: SIGITE04: ACM Special Interest Group for Information Technology Education Conference 2004; October 28-30, 2004; Salt Lake City, Utah, USA p. 132-137. [CrossRef]
  16. Madathil KC, Greenstein JS. Synchronous remote usability testing: a new approach facilitated by virtual worlds. 2011 Presented at: CHI '11: SIGCHI Conference on Human Factors in Computing Systems; May 7-12, 2011; Vancouver, British Columbia, Canada. [CrossRef]
  17. Chynał P, Szymański JM. Remote usability testing using eyetracking. 2011 Presented at: IFIP Conference on Human-Computer Interaction – INTERACT 2011; September 5-9, 2011; Lisbon, Portugal p. 356-361   URL: https://doi.org/10.1007/978-3-642-23774-4_29 [CrossRef]
  18. Whitenton K. Tools for unmoderated usability testing. Nielsen Norman Group. 2019 Sep 22.   URL: https://www.nngroup.com/articles/unmoderated-user-testing-tools/ [accessed 2020-11-24]
  19. Bruun A, Gull P, Hofmeister L, Stage J. Let your users do the testing: a comparison of three remote asynchronous usability testing methods. 2009 Apr Presented at: CHI '09: CHI Conference on Human Factors in Computing Systems; April 4-9, 2009; Boston, Massachusetts, USA p. 1619-1628. [CrossRef]
  20. Dray S, Siegel D. Remote possibilities?: international usability testing at a distance. Interactions (NY) 2004;11(2):10-17. [CrossRef]
  21. Sauer J, Sonderegger A, Heyden K, Biller J, Klotz J, Uebelbacher A. Extra-laboratorial usability tests: An empirical comparison of remote and classical field testing with lab testing. Appl Ergon 2019 Jan;74:85-96. [CrossRef] [Medline]
  22. Bernheim Brush AJ, Ames M, Davis J. A comparison of synchronous remote and local usability studies for an expert interface. 2004 Presented at: CHI04: CHI 2004 Conference on Human Factors in Computing Systems; April 24-29, 2004; Vienna, Austria. [CrossRef]
  23. Hammontree M, Weiler P, Nayak N. Remote usability testing. Interactions (NY) 1994 Jul;1(3):21-25. [CrossRef]
  24. Petrie H, Hamilton F, King N, Pavan P. Remote usability evaluations with disabled people. 2006 Presented at: CHI '06: SIGCHI Conference on Human Factors in Computing Systems; April 22-27, 2006; Montréal, Québec, Canada p. 1133-1141. [CrossRef]
  25. Tullis T, Fleischman S, Mcnulty M, Cianchette C, Bergel M. An empirical comparison of lab and remote usability testing of Web sites. 2002 Presented at: Usability Professionals’ Association 2002 Conference Proceedings; 2002; Bloomingdale, Illinois, USA.
  26. McLaughlin AC, DeLucia PR, Drews FA, Vaughn-Cooke M, Kumar A, Nesbitt RR, et al. Evaluating medical devices remotely: Current methods and potential innovations. Hum Factors 2020 Nov;62(7):1041-1060 [FREE Full text] [CrossRef] [Medline]
  27. Holden RJ, Scott AMM, Hoonakker PLT, Hundt AS, Carayon P. Data collection challenges in community settings: insights from two field studies of patients with chronic disease. Qual Life Res 2015 May;24(5):1043-1055 [FREE Full text] [CrossRef] [Medline]
  28. Valdez RS, Holden RJ. Health care human factors/ergonomics fieldwork in home and community settings. Ergon Des 2016 Oct;24(4):4-9 [FREE Full text] [CrossRef] [Medline]
  29. Diamantidis CJ, Ginsberg JS, Yoffe M, Lucas L, Prakash D, Aggarwal S, et al. Remote usability testing and satisfaction with a mobile health medication inquiry system in CKD. Clin J Am Soc Nephrol 2015 Aug 07;10(8):1364-1370 [FREE Full text] [CrossRef] [Medline]
  30. Murata A, Iwase H. Usability of touch-panel interfaces for older adults. Hum Factors 2005;47(4):767-776. [CrossRef] [Medline]
  31. Page T. Touchscreen mobile devices and older adults: a usability study. Int J Hum Factors Ergon 2014;3(1):65. [CrossRef]
  32. Ziefle M, Bay S. How older adults meet complexity: Aging effects on the usability of different mobile phones. Behav Inf Technol 2005 Sep;24(5):375-389. [CrossRef]
  33. Holden RJ. A Simplified System Usability Scale (SUS) for Cognitively Impaired and Older Adults. In: Proceedings of the International Symposium on Human Factors and Ergonomics in Health Care. 2020 Sep 16 Presented at: 2020 International Symposium on Human Factors and Ergonomics in Health Care; March 8-11, 2020; Toronto, Ontario p. 180-182. [CrossRef]
  34. Moran K, Pernice K. Remote moderated usability tests: How to do them. Nielsen Norman Group. 2020 Apr 26.   URL: https://www.nngroup.com/articles/moderated-remote-usability-test/ [accessed 2020-11-24]
  35. Baker S, Warburton J, Waycott J, Batchelor F, Hoang T, Dow B, et al. Combatting social isolation and increasing social participation of older adults through the use of technology: A systematic review of existing evidence. Australas J Ageing 2018 Sep;37(3):184-193. [CrossRef] [Medline]
  36. Lai HJ. Investigating older adults’ decisions to use mobile devices for learning, based on the unified theory of acceptance and use of technology. Interactive Learning Environments 2018 Nov 21;28(7):890-901. [CrossRef]
  37. Morrow-Howell N, Galucia N, Swinford E. Recovering from the COVID-19 pandemic: A focus on older adults. J Aging Soc Policy 2020;32(4-5):526-535. [CrossRef] [Medline]
  38. Barnard Y, Bradley MD, Hodgson F, Lloyd AD. Learning to use new technologies by older adults: Perceived difficulties, experimentation behaviour and usability. Comput Human Behav 2013 Jul;29(4):1715-1724. [CrossRef]
  39. Wildenbos GA, Peute L, Jaspers M. Aging barriers influencing mobile health usability for older adults: A literature based framework (MOLD-US). Int J Med Inform 2018 Jun;114:66-75. [CrossRef] [Medline]
  40. Brown J, Kim HN. Validating the usability of an Alzheimer’s caregiver mobile app prototype. 2020 Presented at: 2020 IISE Annual Conference; May 30 to June 2, 2020; New Orleans, Louisiana, USA.

Edited by A Kushniruk; submitted 01.12.20; peer-reviewed by A Federman, K Yin, L Perrier; comments to author 23.12.20; revised version received 25.01.21; accepted 02.09.21; published 02.11.21

Copyright

©Jordan R Hill, Janetta C Brown, Noll L Campbell, Richard J Holden. Originally published in JMIR Formative Research (https://formative.jmir.org), 02.11.2021.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.