Published in Vol 6, No 9 (2022): September

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/34212.
The Effect of Persuasive Design on the Adoption of Exposure Notification Apps: Quantitative Study Based on COVID Alert

Original Paper

1School of Public Health Sciences, Faculty of Health, University of Waterloo, Waterloo, ON, Canada

2Department of Electrical Engineering and Computer Science, York University, Toronto, ON, Canada

3Institute of Health Policy, Management, and Evaluation, University of Toronto, Toronto, ON, Canada

4Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada

5eHealth Innovation, Techna Institute, University Health Network, Toronto, ON, Canada

Corresponding Author:

Plinio Pelegrini Morita, PEng, MSc, PhD

School of Public Health Sciences

Faculty of Health

University of Waterloo

200 University Avenue West

Waterloo, ON, N2L 3G1

Canada

Phone: 1 5198884567 ext 41372

Email: plinio.morita@uwaterloo.ca


Background: The adoption of contact tracing apps worldwide has been low. Although considerable research has been conducted on technology acceptance, little has been done to show the benefit of incorporating persuasive principles.

Objective: This research aimed to investigate the effect of persuasive features in the COVID Alert app, created by Health Canada, by focusing on the no-exposure status, exposure status, and diagnosis report interfaces.

Methods: We conducted a study among 181 Canadian residents, including 65 adopters and 116 nonadopters. This study was based on screenshots of the 3 interfaces, each of which comprised a persuasive design and a control design. The persuasive versions of the first 2 interfaces supported self-monitoring (of exposure levels), and that of the third interface supported social learning (about how many other users had reported their diagnosis). The 6 screenshots were randomly assigned to 6 groups of participants, who provided feedback on perceived persuasiveness and adoption willingness.

Results: A multivariate repeated-measure ANOVA showed an interaction among interface, app design, and adoption status regarding the perceived persuasiveness of the interfaces. This prompted a follow-up 2-way ANOVA for each interface. For the no-exposure interface, there was an interaction between adoption status and app design. Among adopters, there was no significant difference (P=.31) between the persuasive design (mean 5.36, SD 1.63) and the control design (mean 5.87, SD 1.20). However, among nonadopters, there was an effect of app design (P<.001), with participants being more motivated by the persuasive design (mean 5.37, SD 1.30) than by the control design (mean 4.57, SD 1.19). For the exposure interface, adoption status had a main effect (P<.001), with adopters (mean 5.91, SD 1.01) being more motivated by the designs than nonadopters (mean 4.96, SD 1.43). For the diagnosis report interface, there was an interaction between adoption status and app design. Among nonadopters, there was no significant difference (P=.99) between the persuasive design (mean 4.61, SD 1.84) and the control design (mean 4.77, SD 1.21). However, among adopters, there was an effect of app design (P=.006), with participants being more likely to report their diagnosis using the persuasive design (mean 6.00, SD 0.97) than using the control design (mean 5.03, SD 1.22). Finally, with regard to willingness to download the app, pairwise comparisons showed that nonadopters were more likely to adopt the app after viewing the persuasive version of the no-exposure interface (13/21, 62% said yes) and the diagnosis report interface (12/17, 71% said yes) than after viewing the control versions (3/17, 18% and 7/16, 44%, respectively, said yes).

Conclusions: Exposure notification apps are more likely to be effective if equipped with persuasive features. Incorporating self-monitoring into the no-exposure status interface and social learning into the diagnosis report interface can increase adoption by >30%.

JMIR Form Res 2022;6(9):e34212

doi:10.2196/34212


Introduction

Background

The COVID-19 pandemic resulted in the imposition of public health restrictions and the shutdown of several economies by most national governments worldwide. This necessitated the rollout of digital contact tracing apps to curb the spread of the coronavirus. Digital contact tracing apps notify users who may have come in contact with someone with COVID-19 so that appropriate safety measures, such as self-isolation and testing for COVID-19, can be taken [1]. They were mostly rolled out in high-income countries to support manual methods of contact tracing, which are often labor-intensive, time-consuming, and less likely to be accurate because of the limitations of human memory in recalling contacts [2]. They have the potential to reach a critical mass of adopters and are hence more likely to be effective than traditional means of contact tracing. The emergence of new variants of COVID-19, such as the Delta variant [3], which may be resistant to vaccines [4], and the endemic potential of the virus indicate that contact tracing apps may continue to be relevant in the fight against COVID-19 in the long term [5,6]. However, their adoption has been very low and slow owing to several factors [7].

Apart from trust- and privacy-related concerns, the minimalist design of the contact tracing apps currently on the Google and Apple app stores tends to limit their perceived usefulness [8]. As noted by Kukuk [9], "[a]part from providing notifications about possible infections, current contact tracing apps appear to not provide a clear benefit to the user." Digital health experts have identified the lack of persuasive design and motivational affordances as being partly responsible for the low acceptance of contact tracing apps worldwide [7,10]. Research has shown that 56% of the population (eg, in a given country) may have to use contact tracing apps to considerably slow the spread of the virus [11]. Hence, there is a need for researchers to investigate ways to improve the design of contact tracing apps and increase their effectiveness. The minimalist design of contact tracing apps [8,12] (eg, users not being able to track the number of contacts and exposure time) might have been occasioned by the need to minimize collected user data to reduce privacy concerns [13,14] and eliminate fear of government surveillance [15]. Although this can be seen as an advantage, it has also reduced the usefulness of contact tracing apps [9]. Research has shown that some users may be willing to provide more of their data to contact tracing apps (eg, location data) to receive additional benefits, such as the ability to track their number of daily contacts and COVID-19 hot spots [16,17]. The willingness of some users to provide more data than others in exchange for more useful features indicates the need for contact tracing apps tailored to different target groups [10,18].

Persuasive Design

We argued that the incorporation of persuasive features such as self-monitoring, social learning, tailoring, personalization, expertise, praise, and reward has the potential to improve the perceived persuasiveness of contact tracing apps and the reporting of COVID-19 diagnoses [18]. However, there is limited research on the effectiveness of the persuasive design of contact tracing apps in motivating behavior change. Most prior studies [19-21] did not focus on incorporating persuasive features in contact tracing apps. Rather, they focused on the Technology Acceptance Model (TAM), which does not consider persuasive design attributes. From the viewpoint of the TAM, we argue that improving the perceived usefulness of existing contact tracing and exposure notification apps through persuasive design has been relegated to the background [9,10]. One plausible explanation for this oversight was the need to roll out contact tracing apps as soon as possible to help flatten the curve.

To bridge the gaps in the extant literature, we proposed design guidelines for incorporating persuasive features in exposure notification apps (see our conceptual paper [18]). The guidelines were drawn from the persuasive system design (PSD) model by Oinas-Kukkonen and Harjumaa [22], which is commonly used in designing, implementing, and evaluating persuasive systems [23,24]. In this study, we implemented and evaluated the perceived persuasiveness of 2 of the proposed persuasive features (self-monitoring and social learning) from our conceptual paper [18], using the Government of Canada’s COVID Alert app as proof of concept [25]. The app was created by Health Canada in collaboration with BlackBerry, which provided privacy and security guidance [26]. We chose only 2 persuasive strategies because we could not implement and evaluate all the persuasive strategies in the PSD model at the same time and had to start somewhere. In particular, we chose self-monitoring because prior work, such as that by Cruz et al [17], reported that contact tracing app users would like to know the number of persons they have come in contact with. Second, we chose social learning because we believed that learning about the number of other users in one’s community who have reported their COVID-19 diagnosis holds the potential to motivate users to report theirs when they test positive. Moreover, prior research on persuasive technology has demonstrated that social learning has the capacity to motivate people to engage in beneficial behaviors regardless of culture, gender, or age [27,28]. The rationale for choosing self-monitoring and social learning is discussed in further detail in our prior conceptual paper, which focused on designing exposure notification apps as persuasive technologies [18].

Study Description

We conducted a survey on Amazon Mechanical Turk among 204 participants residing in Canada to investigate the effect of persuasive design on the adoption and perceived persuasiveness of COVID Alert. This study was based on 2 sets of app designs (persuasive and control), 3 types of use cases (no-exposure status interface, exposure status interface, and diagnosis report interface), and 2 types of participants (COVID Alert adopters and nonadopters). The persuasive design supports persuasive features, such as self-monitoring and social learning, whereas the control design does not support any persuasive features. Self-monitoring, which is incorporated into the no-exposure and exposure status interfaces of the COVID Alert app, is one of the most commonly used and effective persuasive strategies in behavior change [29-31]. It provides users with opportunities for self-reflection and self-regulation, which result in increased focus and commitment to achieving a target behavior such as social distancing. Moreover, social learning, which is integrated into the diagnosis report interface, is an effective persuasive strategy for motivating behavior change through social influence and pressure [32]. To evaluate the effectiveness of the persuasive design, we carried out a 4-factor multivariate repeated-measure ANOVA (RM-ANOVA) [33] based on interface, app design, adoption status, and perceived persuasiveness. Our overall hypothesis is that the persuasive design of exposure notification apps, regardless of the use case (interface), is more likely than the control design to be perceived as persuasive and to be adopted by potential users. Moreover, we hypothesize that adopters are more likely than nonadopters to find exposure notification apps persuasive, regardless of app design and use case.

Related Work

Overview

Before conducting this research, we searched 6 databases (Scopus, CINAHL, PubMed [MEDLINE], IEEE Xplore Digital Library, ACM Digital Library, and Web of Science) between October 30, 2020, and November 20, 2020, using the following terms: (contact tracing OR contact-tracing OR exposure notification OR exposure-notification OR contact notification OR contact-notification OR GAEN) AND (app OR apps OR application* OR technolog* OR system OR systems) AND (percept* OR adopt* OR accept* OR uptake OR use OR usage) AND (covid* OR coronavirus OR SARS-CoV-2). In addition, we searched Google Scholar between November 21, 2020, and January 31, 2021, using terms such as COVID-19 contact tracing app and COVID-19 exposure notification app. The systematic review, which uncovered the key factors that drive the acceptance of contact tracing apps, was published in Frontiers in Digital Health [34]. The protocol for this review was published in the Journal of Medical Internet Research [35]. In this section, we review the key related articles retrieved from the database search, focusing on privacy, trust, and persuasive design.

Privacy and Trust

Privacy and trust are among the top-ranking ethical issues that COVID-19 stakeholders such as researchers, designers, and the public are concerned with when it comes to digital contact tracing [36-38]. In the context of web-based systems, privacy refers to the level of protection and security of user data and interaction while using an electronic system connected to the internet. It entails the collection, storage, use, and sharing of a user’s personal information [39]. In contrast, trust (despite not having a universally accepted scholarly definition [40]), in the context of web-based activities, is regarded as a cognitive mechanism adopted by users when interacting with internet-connected systems. Usually connected to the perceived quality, usability, and expertise of a web-based system such as a website, trust “operates to reduce the amount of [perceived] risk by reducing perceptions of anxiety and uncertainty” [40]. Preliminary research shows that there is a significant relationship between privacy concerns and trust, with each having the potential to impact the adoption of web-based systems, such as social networking sites [41,42] and e-commerce sites [43,44]. For example, Zlatolas et al [41] found that the higher the perceived privacy risk of using Facebook, the lower the perceived trust of users, and the lower the perceived trust in a social media site, the higher the privacy concerns of users. Trust is often associated with the success or failure of an e-commerce website, as web-based shoppers are concerned with unsafe products, insecure payment methods, loss of privacy, identity theft, and misuse of personal information [45].

In the contact tracing domain, research has also shown that privacy concerns and trust can impact the adoption of contact tracing apps [38]. For example, Sharma et al [19], Altmann et al [21], Kaspar [46], and Velicia-Martin et al [47] found in their work on technology acceptance that the higher people’s concern about privacy is, the less likely they are to download, install, or use contact tracing apps. Moreover, Sharma et al [19], Altmann et al [21], and Kaspar [46] found that the higher the users’ perceived trust in contact tracing apps and their stakeholders, such as the government, the higher their likelihood of adopting them. In contrast, Jonker et al [48] and Thomas et al [49] found that the higher the distrust of users (eg, in governments and tech companies [50]), the less likely they are to adopt contact tracing apps. Hence, as a way of enacting privacy protection, Jonker et al [48] recommended that governments implement contact tracing apps with adequate, realistic privacy-preserving features; for example, users should be given control over their data, including deciding what data they want to share, whom they want to share it with, how and when they want to share it, and what it will be used for. Similarly, Walrave et al [20] recommended that contact tracing app sponsors inform potential users about the data to be collected, minimize data collection, and reduce the time required to read and evaluate privacy terms by using visual presentation to improve comprehension. Finally, to further and foster public trust, Altmann et al [21] recommended that national governments around the world consider delegating the mandate of digital contact tracing to reputable and transparent public health institutions over which they have little to no control.

Persuasive Design

Although a substantial amount of work has been done with regard to the impact of privacy and trust on contact tracing app adoption (as shown in the previous subsection), little has been done with regard to the impact of persuasive design. At the time of writing this paper, we had found only 2 studies [17,48] that investigated the benefit of incorporating persuasive features in contact tracing apps. One of the studies (Cruz et al [17]) found that more than half of the participants wanted to know how many infected people they had come in contact with (including the location and time) by way of self-monitoring. The study also found that most participants were more willing to share their locations when they were offered tangible rewards [17]. Similarly, another study (Jonker et al [48]) found that participants preferred contact tracing apps that offer tangible rewards, such as money and free COVID-19 testing. However, these studies were primarily based on contact tracing app descriptions and not implementations. Moreover, these studies were not based on a comparative analysis of intervention designs (equipped with persuasive strategies) and control designs (not equipped with persuasive strategies). Most importantly, the studies were carried out in the first half of 2020, when many people were less familiar with or had not used contact tracing and exposure notification apps. Hence, there is a need for this study to bridge the gap in the extant literature regarding the effect of persuasive design on contact tracing and exposure notification app design.


Methods

In this section, we focus on app design, measurement instruments, recruitment of participants, experimental design and data analysis, sample size calculation, and the research model and hypotheses.

App Design

COVID Alert is the Government of Canada’s official app for contact tracing and exposure notification. Released on July 31, 2020, it uses the Google/Apple Exposure Notification application programming interfaces to enforce strong privacy measures. Hence, it does not track the user’s location or collect personally identifiable information such as name, contacts, address, or health information. Similar to many exposure notification apps on the market, the COVID Alert app (persuasive or control design) comprises 3 key use cases: the no-exposure status interface, the exposure status interface, and the diagnosis report interface (Figures 1 and 2). In the persuasive design, we implemented 2 types of persuasive strategies (self-monitoring and social learning) drawn from the PSD model [22]. The PSD model is a framework for the design, implementation, and evaluation of persuasive systems; it comprises 28 persuasive strategies. In our conceptual paper on exposure notification app design [18], we discuss the persuasive strategies from the PSD model that can be incorporated into exposure notification apps to make them more effective and appealing. These include self-monitoring, tailoring, social learning, normative influence, trustworthiness, and authority. The rationale for implementing these strategies is described in the conceptual paper. In this study, we implemented 2 of the aforementioned strategies: self-monitoring (incorporated into the no-exposure and exposure status interfaces) and social learning (incorporated into the diagnosis report interface).

As shown in Figure 1, the no-exposure status interface informs the user that they have not been exposed to COVID-19 by being close to someone with COVID-19 in the last 14 days. The exposure status interface notifies the user that they may have been exposed to COVID-19 by being in close contact with someone with COVID-19 and provides information on what to do next (eg, self-isolate or get tested for COVID-19 in the event of symptoms). Finally, the diagnosis report interface enables a user who has tested positive to enter a one-time key given to them by the public health authority. We regard these 3 key original interfaces of the COVID Alert app, which are not equipped with persuasive features, as the control designs (Figure 1).

Figure 2 shows the corresponding persuasive designs equipped with persuasive features. The no-exposure and exposure status interfaces are equipped with self-monitoring, and the diagnosis report interface is equipped with social learning. Self-monitoring is a persuasive feature that allows users to track their COVID-19 exposure levels over time. Figure 3 [34,51,52] illustrates the operational mechanism of self-monitoring. A person observes their own behavior and reflects on it, as though they were looking at themselves in a mirror. If they are not impressed with what they see (in the mirror), they regulate themselves by improving on the target behavior [29,53,54]. In the no-exposure status interface, users can track the total and average numbers of daily contacts and minutes of exposure. In the exposure status interface, users can view the cumulative sum of contacts and exposure minutes in the last 14 days, the window within which they must have been exposed. It is hoped that by seeing these summary statistics, users will be motivated to regulate their social distancing behavior. In contrast, social learning is a persuasive feature that makes users aware of other people’s behavior in the hope that they will be socially pressured and motivated to adopt the observed behavior. Figure 3 also illustrates the operational mechanism of social learning [53,55,56]. Social learning is based on the premise that observational learning cannot occur unless the cognitive processes that mediate the learning process take place [52]. Figure 3 demonstrates that by observing others' behavior, one is motivated through social pressure to imitate the observed behavior for the common good. In the diagnosis report interface, the app informs the user about the number of users who have reported their COVID-19 diagnosis on a given day, in the hope that they will be socially pressured to report their own diagnosis if they test positive, to promote public health safety.

Figure 1. Control designs of the 3 key interfaces of the COVID Alert app.

Figure 2. Persuasive designs of the 3 key interfaces of the COVID Alert app.

Figure 3. The operational mechanism of self-monitoring and social learning [34,51,52].

Measurement Instruments

To investigate the effectiveness of the persuasive design, we measured 2 key constructs of interest: the perceived persuasiveness of each of the interfaces (shown in Figures 1 and 2) and participants’ willingness to download the COVID Alert app from the app store. Table 1 shows the measures for both constructs. Perceived persuasiveness measures the ability of the visual and informational design of an app to motivate users to adopt it. In this study, perceived persuasiveness is a reflective measure that captures how well the visual design of the COVID Alert app convinces and influences the user to start or continue using the app.

Table 1. Measurement instruments.

Perceived persuasiveness (“strongly disagree: 1” to “strongly agree: 7”) [57]: The app design (name of interface)...
  1. …influences me to start or continue using the COVID Alert app.
  2. …is convincing for me to start or continue using the COVID Alert app.
  3. …is relevant to my using or continued use of the COVID Alert app.

Willingness to download app from store (yes or no): Now that I know about the COVID Alert app as the Government of Canada’s official exposure notification app, I will download it from the Apple or Google store to slow down the spread of the coronavirus.

Adoption status: Which of the following best describes you?
  1. I am currently using the COVID Alert app.
  2. I am currently using a COVID-19 contact tracing or exposure notification app other than COVID Alert.
  3. I am not currently using any COVID-19 contact tracing or exposure notification app.

In the context of this study, perceived persuasiveness can be viewed as a proxy for the TAM or Theory of Planned Behavior constructs such as perceived usefulness [56,58], perceived compatibility with existing experiences, values, and tasks [59,60], and peer or superior influence [61], which have the potential to impact the adoption of new technologies. For example, the more a new technology is perceived as useful and compatible with the user’s past experiences, values, and tasks, the more relevant they will deem it and the more likely they will be to adopt it [61]. However, although perceived persuasiveness may be associated with constructs such as perceived ease of use and perceived usefulness [57,58], perceived compatibility with tasks [59], and social influence [62], it is not synonymous with any of these constructs. For example, the fact that a user perceives an app to be persuasive (motivating) may not mean that they find it easy to use, useful, or compatible with prior experiences, values, and tasks or vice versa. One plausible explanation is that some users may perceive an app (eg, a game) to be persuasive based on hedonic characteristics (such as perceived aesthetics [63] and perceived enjoyment [64]), without considering the utilitarian (eg, perceived usefulness) or compatibility features. In contrast, other users may perceive an app (eg, an exposure notification app) to be persuasive based on utilitarian or compatibility features without paying much attention to hedonic features. In the context of the PSD model, perceived persuasiveness can be viewed as a proxy for the four main categories of persuasive strategies. They include primary task support, dialog support, social support, and credibility support, which have direct and indirect relationships with perceived persuasiveness and adoption intention, respectively [65]. In particular, primary task support (defined as persuasive features that enable users to realize the main goal of a persuasive system) can be compared to perceived usefulness in the TAM. For example, in the work by Lehto et al [65], based on a web-based persuasive health system, primary task support was operationalized using utility-oriented items including (1) the system provides me with means to lose weight, (2) the system helps me lose weight, and (3) the system helps me change my eating habits, which reflect perceived usefulness.

For this study, the perceived persuasiveness measure was adapted from the work by Lehto et al [65] to suit the context of exposure notification apps. It is a 7-point scale ranging from strongly disagree (1) to strongly agree (7). Moreover, willingness to download measures participants’ intention to adopt the app to curb the spread of the coronavirus after seeing or learning about its functionality. It was based on a yes-or-no measure. Finally, we measured adoption status by asking participants to choose 1 of the 3 options shown in Table 1. Those who chose the first and third options were regarded as COVID Alert adopters and nonadopters, respectively. Those who chose the second option were filtered out of the data analysis, as we were interested in analyzing and comparing participants who had installed and interacted with the COVID Alert app and those who had never done so.

Participants

The criterion for inclusion in the study was that participants must be residents of Canada, regardless of sex, gender, age, education, country of origin, and contact tracing app adoption status. We did not place any demographic restrictions on who could participate in the study because everyone, regardless of the enumerated demographic variables, is liable to be exposed to COVID-19 and is thus expected to use exposure notification apps such as COVID Alert. On Amazon Mechanical Turk, we recruited participants residing in Canada with at least 1 year of smartphone use experience to evaluate the persuasive and control designs of the COVID Alert app. Amazon Mechanical Turk is an inexpensive, web-based commercial crowdsourcing platform for recruiting a nonconvenience sample of participants worldwide. Research has shown that owing to its quality-assurance mechanism, the platform has the potential to yield high-quality data [66]. The recruitment of study participants took place between December 25, 2020, and January 25, 2021. With the aid of our laboratory-wide account, the first author used the requester interface to post details of the study on the Amazon Mechanical Turk platform. The requester interface allows the researcher to specify the number of participants, the duration of the study, and the types of participants using filtering terms such as country and location [67]. We tweaked the default JavaScript code in the requester interface to randomly assign 1 of the 6 exposure notification app interfaces to each potential anonymous participant. Hence, each participant only viewed the interface assigned to them, as described in Multimedia Appendix 1, without interacting with it. Before completing the web-based questionnaire, each participant was requested to read the information and consent forms and provide informed consent. Upon consent, participants were allowed to complete the survey; otherwise, they were directed to the end of the survey. Each participant was remunerated with US $2 in appreciation of their time.
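The assignment mechanism can be illustrated with a short sketch. The study implemented it in the JavaScript of the Mechanical Turk requester interface; the R version below (R being the language of the analyses reported later) is only a minimal, hypothetical equivalent, and the condition labels are those defined in Table 2.

```r
# Minimal sketch of per-participant random assignment to 1 of the 6
# conditions (3 interfaces x 2 app designs); labels follow Table 2.
conditions <- c("C1", "P1", "C2", "P2", "C3", "P3")

# Each arriving participant is independently shown a single screenshot.
assigned <- sample(conditions, size = 1)
print(assigned)
```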

A total of 204 participants took part in the study. Of these, 65 (32%) had already used the COVID Alert app, 17 (8%) were using other contact tracing apps, 116 (57%) did not use the COVID Alert app or any other contact tracing app at the time of taking the survey, and 6 (3%) did not specify their adoption status. The first and third subgroups were regarded as the COVID Alert adopter group (n=65) and the nonadopter group (n=116), respectively. The second and fourth subgroups (n=23) were filtered out during data analysis. Table 2 shows the demographics of the COVID Alert adopters and nonadopters (n=181) assigned to the 6 user interfaces, comprising 3 control designs (C1, C2, and C3) and 3 persuasive designs (P1, P2, and P3).

Table 2. Participants’ demographics based on the 6 user interfaces (N=181).

| Criterion and subgroup | Overall, n | C1^a | P1^b | C2 | P2 | C3 | P3 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Gender | | | | | | | |
| Male | 106 | 16 | 20 | 18 | 21 | 19 | 12 |
| Female | 73 | 10 | 12 | 12 | 16 | 9 | 14 |
| Others | 2 | 1 | 0 | 0 | 0 | 1 | 0 |
| Age (years) | | | | | | | |
| <18 | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
| 18 to 24 | 36 | 1 | 6 | 7 | 10 | 5 | 7 |
| 25 to 34 | 64 | 8 | 10 | 12 | 12 | 11 | 11 |
| 35 to 44 | 48 | 9 | 9 | 6 | 9 | 10 | 5 |
| 45 to 54 | 19 | 6 | 2 | 3 | 4 | 2 | 2 |
| >55 | 10 | 2 | 3 | 1 | 2 | 1 | 1 |
| Unspecified | 3 | 1 | 1 | 1 | 0 | 0 | 0 |
| Education | | | | | | | |
| Technical or trade | 5 | 0 | 0 | 1 | 2 | 1 | 1 |
| High school | 39 | 2 | 11 | 4 | 9 | 3 | 10 |
| Bachelor’s | 99 | 20 | 14 | 18 | 19 | 18 | 10 |
| Master’s | 29 | 3 | 4 | 6 | 6 | 5 | 5 |
| Doctorate | 3 | 1 | 1 | 0 | 0 | 1 | 0 |
| Other | 6 | 1 | 2 | 1 | 1 | 1 | 0 |
| Using smartphone (years) | | | | | | | |
| 1 to 5 | 27 | 3 | 6 | 2 | 4 | 5 | 7 |
| 6 to 10 | 86 | 14 | 16 | 18 | 19 | 9 | 10 |
| 11 to 20 | 59 | 8 | 9 | 10 | 12 | 13 | 7 |
| >20 | 8 | 2 | 1 | 0 | 2 | 2 | 1 |
| Unspecified | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
| Country of origin | | | | | | | |
| Canada | 143 | 24 | 21 | 24 | 31 | 21 | 22 |
| Other | 38 | 3 | 11 | 6 | 6 | 8 | 4 |
| Adoption status | | | | | | | |
| Adopters | 65 | 10 | 11 | 11 | 11 | 13 | 9 |
| Nonadopters | 116 | 17 | 21 | 19 | 26 | 16 | 17 |

^aC: control design.

^bP: persuasive design.

C1 and P1 correspond to the no-exposure interface; C2 and P2 to the exposure interface; and C3 and P3 to the diagnosis report interface.

Experimental Design and Data Analysis

This study was based on a web-based questionnaire in which each participant was randomly assigned to 1 of the 6 user interfaces shown in Figures 1 and 2. Before the participants were asked any questions, the functionality of the COVID Alert app was described to them (see Multimedia Appendix 1 for details on the experimental design and the accompanying information presented to participants). Two types of data analysis were carried out: path modeling and multivariate RM-ANOVA [33]. First, the path modeling set out to uncover the strength of the relationship between the perceived persuasiveness of each of the 3 interfaces (no-exposure status, exposure status, and diagnosis report) and nonadopters’ willingness to download the app. This analysis helped us establish that there is a significant relationship between the perceived persuasiveness of an exposure notification app and nonadopters’ willingness to adopt it.

Second, the experimental design, based on a 4-way multivariate RM-ANOVA factorial design, aimed to uncover the main effects of app design, interface, and adoption status on the perceived persuasiveness of each user interface, as well as their interactions. The app design has 2 conditions (persuasive and control), the interface has 3 levels (no-exposure status, exposure status, and diagnosis report), and the adoption status has 2 levels (adopters and nonadopters). Moreover, perceived persuasiveness was measured repeatedly using 3 indicators, as shown in Table 1. Finally, among the nonadopter group, we investigated the effect of app design on participants’ willingness to download the COVID Alert app from the app store. Using 2×2 chi-square tests [68], we compared, for each user interface, the percentage of participants who viewed the persuasive design and said “yes” with the percentage of participants who viewed the control design and said “yes”. This pairwise comparison helped to uncover any significant difference between the persuasive and control design groups.
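As a concrete illustration of this analysis plan, the following is a minimal sketch in R (the language of the packages cited in the Results). The data frame, file name, and column names are our own illustrative assumptions, not the authors’ code.

```r
# Long-format data: one row per participant x persuasiveness item (Table 1).
# Between-subject factors: interface (3 levels), design (2), status (2);
# the 3 items (indicator) form the repeated measure.
df <- read.csv("responses_long.csv")  # hypothetical file

# Repeated-measure ANOVA, approximated with base R aov();
# Error() declares the within-subject (indicator) stratum.
fit <- aov(score ~ interface * design * status +
             Error(participant / indicator), data = df)
summary(fit)
# Partial eta-squared values (as reported in the Results) could then be
# obtained with, eg, effectsize::eta_squared(fit, partial = TRUE).

# 2x2 chi-square test for willingness to download (nonadopters only),
# shown here for the no-exposure interface using the yes/no counts
# reported in the Results (P1: 13/21 yes; C1: 3/17 yes).
tab <- matrix(c(13, 8,
                3, 14),
              nrow = 2, byrow = TRUE,
              dimnames = list(design = c("P1", "C1"),
                              response = c("yes", "no")))
chisq.test(tab)
```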

Sample Size Calculation

Before conducting this study, we computed the sample size using the University of British Columbia’s web-based power and sample size calculator developed by Brant [69]. We chose the default significance level of .05 and a power level of 0.80. Moreover, we chose an SD value of 1.0 and a mean difference between the 2 groups of 0.8 on a 7-point Likert scale (ie, >10% difference). The SD was derived from a similar study of the principles of persuasion by Cialdini, conducted among individualist participants from North America [70]. In particular, the SD for the liking principle, which is highly related to the perceived persuasiveness construct in this study, was 1.09. Hence, we used an SD of approximately 1.0 to calculate the sample size for each group. The calculation (based on a 2-sided test) resulted in a sample size of 25 for each group. As shown in Table 2, all 6 groups met this sample size requirement.
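For reference, the reported figure of 25 per group is reproduced by the standard normal-approximation formula for comparing 2 group means; we assume here (without having inspected it) that the UBC calculator implements this textbook formula:

$$ n = \frac{2\sigma^{2}\,(z_{1-\alpha/2} + z_{1-\beta})^{2}}{\delta^{2}} = \frac{2(1.0)^{2}(1.96 + 0.84)^{2}}{(0.8)^{2}} \approx 24.5 \;\Rightarrow\; 25 \text{ participants per group} $$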

Research Model and Hypotheses

We based our data analysis on path modeling and multivariate RM-ANOVA. Figure 4 shows the hypothesized model. This model was based on prior research, which showed that there is a significantly strong relationship between the perceived persuasiveness of an app (such as a fitness app) and adoption intentions [57]. On the basis of this finding and the fact that screenshots of the key interfaces of an app are often included in its description in the app store, we hypothesized as follows (H1): the higher the perceived persuasiveness of an exposure notification app in the app store, the more likely users will download it. This hypothesis is based on the premise that potential users will be able to view the key interfaces of the app (in addition to reading its description) in the app store before deciding to download it. It is broken down for each of the 3 key user interfaces as follows:

  1. H1a: the higher the perceived persuasiveness of the no-exposure status interface in the app store, the more likely users will download the COVID Alert app.
  2. H1b: the higher the perceived persuasiveness of the exposure status interface in the app store, the more likely users will download the COVID Alert app.
  3. H1c: the higher the perceived persuasiveness of the diagnosis report interface in the app store, the more likely users will download the COVID Alert app.

In addition, using an exploratory approach, we investigated which of the 3 interfaces (ie, their perceived persuasiveness) has the strongest effect on users’ willingness to download the COVID Alert app. It is noteworthy that we do not imply a cause-and-effect relationship in H1 or each time we use the word effect in characterizing the relationship between perceived persuasiveness and willingness to download the app. As the mantra goes, correlation does not mean causation. Moreover, we hypothesized that the perceived persuasiveness of each interface will be influenced by the app design. In other words, given that the persuasive designs support persuasive features such as self-monitoring and social learning, we hypothesized as follows:

  1. H2a: the perceived persuasiveness of the persuasive design of the no-exposure status interface will be higher than that of the control design.
  2. H2b: the perceived persuasiveness of the persuasive design of the exposure status interface will be higher than that of the control design.
  3. H2c: the perceived persuasiveness of the persuasive design of the diagnosis report interface will be higher than that of the control design.

Third, research shows that adopters perceive and rate new technologies more favorably than nonadopters [71-73]. For example, Dickerson and Gentry [73] found that prior experience with other computer-related products and services played a significant role in the movement of people toward the purchase of a home computer. Hence, we hypothesized that the perceived persuasiveness of each interface will be influenced by app adoption status. In other words, given that users of COVID Alert (adopters) are familiar with and are currently using it to track their exposure, they are more likely to evaluate it favorably. Hence, we hypothesized as follows:

  1. H3a: adopters are more likely to perceive the no-exposure status interface to be persuasive than nonadopters.
  2. H3b: adopters are more likely to perceive the exposure status interface to be persuasive than nonadopters.
  3. H3c: adopters are more likely to perceive the diagnosis report interface to be persuasive than nonadopters.

Fourth, given the hypothesized relationship between perceived persuasiveness and willingness to download the app (H1), we hypothesized that the persuasive versions are more likely to be downloaded by nonadopters than the control versions (H4). Some nonadopters might have declined to download the control version of the COVID Alert app in the past for various reasons. However, with the integration of persuasive features such as self-monitoring and social learning, which provide a utilitarian benefit (monitoring of exposure levels) and a socially motivational message, we hypothesized as follows:

  1. H4a: nonadopters who viewed the persuasive design of the no-exposure status interface are more likely to adopt the COVID Alert app than those who viewed the control design.
  2. H4b: nonadopters who viewed the persuasive design of the exposure status interface are more likely to adopt the COVID Alert app than those who viewed the control design.
  3. H4c: nonadopters who viewed the persuasive design of the diagnosis report interface are more likely to adopt the COVID Alert app than those who viewed the control design.
Figure 4. Research model for the relationship between perceived persuasiveness and willingness to download the app by nonadopters. H: hypothesis.

Ethics Approval

This study was approved by the University of Waterloo Research Ethics Committee (ORE 42638).


Results

In this section, we present the results based on our hypotheses. The results include the data-driven model, the mean values of perceived persuasiveness for each of the 3 interfaces, the ANOVAs uncovering the main effects and interactions of the factors, and the percentages of nonadopters who were willing to download the COVID Alert app from the Apple or Google store after becoming aware of it through the survey.

Data-Driven Path Model

Figure 5 shows the data-driven models for the 3 key user interfaces. The models aim to answer the first set of hypotheses (H1a to H1c). They were built using the partial least-squares path modeling package in RStudio [74]. The no-exposure status interface model was built using the subset of the C1 and P1 participants (n=38) who were nonadopters, as shown in Table 2. The other 21 participants did not respond to the question on willingness to download the app. Similarly, the exposure status interface model was built using only the C2 and P2 nonadopters (n=45). Finally, the diagnosis report interface model was built using only the C3 and P3 participants (n=33). As shown in Table 1, one item was used to measure willingness to download the app, and 3 items were used to measure perceived persuasiveness. In constructing the models, the responses yes and no to willingness to download the app were coded as 1 and 0, respectively. All the construct items were treated as reflective indicators in the measurement models. Unlike formative indicators, which are considered the causes or drivers of the construct (ie, latent variable) that they measure, reflective indicators are considered to be caused by the construct that they measure [75]. Before analyzing the structural models, we evaluated the measurement models to ensure that the required preconditions, such as indicator reliability, internal consistency reliability, convergent validity, and discriminant validity of the multi-item construct, were satisfied. The outer loading metric was used to measure indicator reliability, which was >0.7 for most of the indicators that measured perceived persuasiveness in the 3 models. However, in the second model, the third indicator (The app design is relevant to my using or continued use of the COVID Alert app) had an outer loading value of 0.64. In the third model, this indicator was removed because its outer loading value was <0.40. The Dillon-Goldstein metric was used to assess the internal consistency reliability of perceived persuasiveness, which was also >0.7. The average variance extracted metric was used to assess the convergent validity of perceived persuasiveness, which was >0.5. Finally, the cross-loading metric was used to assess the discriminant validity of perceived persuasiveness: its indicators loaded higher on it than on willingness to download the app [74].
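The modeling workflow described above can be sketched with the plspm package cited as [74]; this is a hedged illustration in which the data frame (survey_data) and its column layout (items 1 to 3 measuring perceived persuasiveness, column 4 the 0/1 download response) are our assumptions, not the authors’ scripts.

```r
library(plspm)

# Inner (structural) model: perceived persuasiveness (PP) -> willingness
# to download (WD); lower-triangular matrix of directed paths.
path_matrix <- rbind(PP = c(0, 0),
                     WD = c(1, 0))
colnames(path_matrix) <- rownames(path_matrix)

# Outer (measurement) model: PP is reflected by the 3 items in Table 1,
# WD by the single yes/no item coded 1/0; mode "A" = reflective.
blocks <- list(PP = 1:3, WD = 4)
fit <- plspm(survey_data, path_matrix, blocks, modes = c("A", "A"),
             boot.val = TRUE)

fit$outer_model    # outer loadings (indicator reliability, >0.7 expected)
fit$unidim         # Dillon-Goldstein rho (internal consistency)
fit$inner_summary  # AVE (convergent validity) and R-squared
fit$crossloadings  # discriminant validity check
fit$path_coefs     # structural path coefficient (beta)
# plspm.groups() can then compare path coefficients across submodels,
# as in the multigroup analysis reported below.
```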

Overall, regardless of the interface, the relationship between perceived persuasiveness and willingness to download the app was statistically significant, with β>.40. We also conducted a multigroup analysis to determine whether each pair of path coefficients in the 3 submodels differed significantly. The results showed no significant difference between any pair, although the path coefficients for the no-exposure status interface (β=.68; P<.001) and the exposure status interface (β=.67; P<.001) were numerically higher than that of the diagnosis report interface (β=.47; P=.04).

Figure 5. Data-driven model based on each of the 3 key user interfaces. GOF: goodness of fit. *P<.05; ***P<.001.

Mean Values of Perceived Persuasiveness and RM-ANOVA

Overview

In this section, we address the second and third sets of hypotheses (ie, H2 and H3) by conducting a 4-factor multivariate RM-ANOVA based on interface, app design, adoption status, and perceived persuasiveness. The results of the analysis (Table 3) show a main effect of adoption status (F507,1=28.94; P<.001) and an interaction among interface, adoption status, and app design (F507,2=5.90; P=.002). Owing to the interaction, we carried out follow-up 2-way ANOVAs, taking each interface, each app design, and each adoption status at a time.

Table 3. Repeated-measure ANOVA based on interface, adoption status, app design, and perceived persuasiveness.

| | Adoption status | Interface × adoption status × app design |
| --- | --- | --- |
| Df Res^a | 507 | 507 |
| F (df) | 28.94 (1) | 5.90 (2) |
| P value | <.001 | .002 |

^aDf Res: residual degrees of freedom.

Two-Way ANOVA for Each Interface

In this section, owing to the 3-way interaction shown in Table 3, we conducted a 2-way ANOVA based on the adoption status and app design for each of the 3 interfaces.

No-Exposure Status Interface

Figure 6 shows the mean ratings of perceived persuasiveness of the no-exposure status interface for adopters and nonadopters. Overall, adopters rated the interface higher than nonadopters. As shown in Table 4, the 2-way ANOVA showed that there was a main effect of adoption status (F173,1=10.82; P=.001) and an interaction between adoption status and app design (F173,1=6.93; P=.009).

Owing to the interaction between adoption status and app design, we carried out a further 1-way ANOVA at each level of adoption status and app design as shown in Table 5. The results showed that there was a main effect of app design within the nonadopter group, with a medium effect size (F112,1=12.34; P<.001; ηp2=0.10). The persuasive design (mean 5.37, SD 1.30) had a significantly higher mean value than the control design (mean 4.57, SD 1.19) did. Moreover, adoption status had a main effect regarding the control design, with a large effect size (F79,1=20.41; P<.001; ηp2=0.21). The adopter group (mean 5.87, SD 1.20) rated the control design significantly higher than the nonadopter group (mean 4.57, SD 1.19).

Figure 6. Mean ratings of perceived persuasiveness of the no-exposure interface for the COVID Alert adopters and nonadopters. Horizontal bar represents overall mean value of perceived persuasiveness. Vertical bars represent 95% CIs. C: control design; P: persuasive design.
Table 4. Two-way ANOVA based on adoption status and app design for the no-exposure status interface.

| | Adoption status | Adoption status × app design |
| --- | --- | --- |
| Df Res^a | 173 | 173 |
| F (df) | 10.82 (1) | 6.93 (1) |
| P value | .001 | .009 |

^aDf Res: residual degrees of freedom.

Table 5. Further 1-way ANOVA for the perceived persuasiveness of the no-exposure status interface at each level of adoption status and app design (small effect size: ηp2=0.01; medium effect size: ηp2=0.06; large effect size: ηp2=0.14) [76].

| | C1^a | P1^b | App design effect |
| --- | --- | --- | --- |
| Adopter | 5.87 | 5.36 | F61,1=1.05; P=.31 |
| Nonadopter | 4.57 | 5.37 | F112,1=12.34; P<.001; ηp2=0.10 |
| Adoption effect | F79,1=20.41; P<.001; ηp2=0.21 | F94,1=0.04; P=.84 | N/A^c |

^aC: control design.

^bP: persuasive design.

^cN/A: not applicable.

Exposure Status Interface

Figure 7 shows the mean rating of the perceived persuasiveness of the exposure status interface for adopters and nonadopters. The 2-way ANOVA based on adoption status and app design (Table 6) showed only a main effect of adoption status (F197,1=19.03; P<.001), with a medium effect size (ηp2=0.09). In other words, adopters rated the perceived persuasiveness of the interface (mean 5.91, SD 1.01) significantly higher than nonadopters did (mean 4.96, SD 1.43).

Figure 7. Mean scores of perceived persuasiveness of the exposure status interface for COVID Alert adopters and nonadopters. Horizontal bar represents the overall mean value of the construct for each user group. Vertical bars represent 95% CIs. C: control design; P: persuasive design.
Table 6. Two-way ANOVA based on app design and adoption status for the exposure status interface (small effect size: ηp2=0.01; medium effect size: ηp2=0.06; large effect size: ηp2=0.14) [76].

| | Adoption status | Adoption status × app design | App design |
| --- | --- | --- | --- |
| Df Res^a | 197 | 197 | 197 |
| F (df) | 19.03 (1) | 1.81 (1) | 0.40 (1) |
| P value | <.001 | .18 | .53 |

^aDf Res: residual degrees of freedom.

Diagnosis Report Interface

Figure 8 shows the mean rating of the perceived persuasiveness of the diagnosis report interface for the adopter and nonadopter groups. The 2-way ANOVA based on app design and adoption status (Table 7) showed that there is a main effect of adoption status (F161,1=9.51; P=.002) and an interaction between app design and adoption status (F161,1=4.03; P=.046).

Figure 8. Mean ratings of perceived persuasiveness of the diagnosis report interface for COVID Alert adopters and nonadopters. Horizontal bar represents the overall mean value of the construct for each user group. Vertical bars represent 95% CIs. C: control design; P: persuasive design.

Owing to the interaction between adoption status and app design, we carried out a further 1-way ANOVA at each level of each factor, as shown in Table 8. The results showed that there was a main effect of app design within the adopter group (F64,1=8.00; P=.006), with a medium effect size (ηp2=0.11). In other words, the persuasive design (mean 6.00, SD 0.97) had a significantly higher mean value of perceived persuasiveness than the control design (mean 5.03, SD 1.22). Moreover, adoption status had a main effect regarding the persuasive design, with a near-large effect size (F76,1=11.10; P=.001; ηp2=0.13). In other words, adopters (mean 6.00, SD 0.97) rated the perceived persuasiveness of the persuasive design significantly higher than nonadopters did (mean 4.61, SD 1.84).

Table 7. Repeated-measure ANOVA based on app design, adoption status, and perceived persuasiveness indicator for the diagnosis report interface.

| | Adoption status | App design × adoption status |
| --- | --- | --- |
| Df Res^a | 161 | 161 |
| F (df) | 9.51 (1) | 4.03 (1) |
| P value | .002 | .046 |

^aDf Res: residual degrees of freedom.

Table 8. Further 1-way ANOVA for the perceived persuasiveness of the diagnosis report interface at each level of adoption status and app design (small effect size: ηp2=0.01; medium effect size: ηp2=0.06; large effect size: ηp2=0.14) [76].

| | C3^a | P3^b | App design effect |
| --- | --- | --- | --- |
| Adopter | 5.03 | 6.00 | F64,1=8.00; P=.006; ηp2=0.11 |
| Nonadopter | 4.77 | 4.61 | F97,1=0.00; P=.99 |
| Adoption effect | F85,1=0.56; P=.46 | F76,1=11.10; P<.001; ηp2=0.13 | N/A^c |

^aC: control design.

^bP: persuasive design.

^cN/A: not applicable.

Two-Way ANOVA for Each App Design

In this section, owing to the 3-way interaction shown in Table 3, we conducted a 2-way ANOVA based on adoption status and interface for each of the 2 app designs.

Control Design

Table 9 presents the 2-way ANOVA based on the adoption status and interface for the control design. The results show a main effect of adoption status (F252,1=20.00; P<.001) and an interaction between adoption status and interface (F252,2=3.45; P=.03).

Owing to the interaction between interface and adoption status, we carried out a further 1-way ANOVA at each level of each factor, as shown in Table 10. The results show a main effect of adoption status with regard to the no-exposure status interface (F79,1=20.41; P<.001; ηp2=0.21) and the exposure status interface (F88,1=21.85; P<.001; ηp2=0.20), with large effect sizes. In both interfaces, the mean value of perceived persuasiveness was significantly higher for the adopter group than for the nonadopter group. Moreover, there was a main effect of interface within adopters (F99,2=6.33; P=.003), with a near-large effect size (ηp2=0.13), so we carried out further pairwise comparisons. The results showed that the mean values of perceived persuasiveness for the no-exposure status interface (mean 5.87, SD 1.20) and the exposure status interface (mean 6.12, SD 1.01) were significantly higher than that of the diagnosis report interface (mean 5.03, SD 1.22) at P=.04 and P=.003, respectively.

Table 9. Two-way ANOVA based on adoption status and interface for the control design.

| | Adoption status | Interface × adoption status |
| --- | --- | --- |
| Df Res^a | 252 | 252 |
| F (df) | 20.00 (1) | 3.45 (2) |
| P value | <.001 | .03 |

^aDf Res: residual degrees of freedom.

Table 10. Further 1-way ANOVA for the perceived persuasiveness of the control design at each level of interface and adoption status (small effect size: ηp2=0.01; medium effect size: ηp2=0.06; large effect size: ηp2=0.14) [76].

| | No-exposure status | Exposure status | Diagnosis report | Interface effect |
| --- | --- | --- | --- | --- |
| Adopter | 5.87 | 6.12 | 5.03 | F99,2=6.33; P=.003; ηp2=0.13 |
| Nonadopter | 4.57 | 4.82 | 4.78 | F153,2=0.92; P=.40 |
| Adoption effect | F79,1=20.41; P<.001; ηp2=0.21 | F88,1=21.85; P<.001; ηp2=0.20 | F85,1=0.56; P=.46 | N/A^a |

^aN/A: not applicable.

Persuasive Design

Table 11 shows the 2-way ANOVA based on adoption status and interface for the persuasive design. The results show a main effect of adoption status (F279,1=4.96; P=.03; ηp2=0.03), with the mean value of perceived persuasiveness of the persuasive design being significantly higher for the adopter group (mean 5.69, SD 1.24) than for the nonadopter group (mean 5.01, SD 1.54). There was no interaction between adoption status and interface.

Table 11. Two-way ANOVA based on adoption status and interface for the persuasive design (small effect size: ηp2=0.01; medium effect size: ηp2=0.06; large effect size: ηp2=0.14) [76].

| | No-exposure status | Exposure status | Diagnosis report | Overall |
| --- | --- | --- | --- | --- |
| Adopter | 5.36 | 5.70 | 6.00 | 5.69 |
| Nonadopter | 5.37 | 5.05 | 4.61 | 5.01 |
| Adoption effect | N/A^a | N/A | N/A | F279,1=4.96; P=.03; ηp2=0.03 |

^aN/A: not applicable.

Two-Way ANOVA for Each Adoption Status

In this section, owing to the 3-way interaction in Table 3, we conducted a 2-way ANOVA based on app design and interface for each adoption status.

Adopter Group

We performed a 2-way ANOVA based on app design and interface for the adopter group. The results showed an interaction between app design and interface (F189,2=6.73; P=.001). Owing to the interaction, we carried out a further 1-way ANOVA at each level of app design and interface, as shown in Table 12. The results showed a main effect of interface with regard to the control design (F99,2=6.33; P=.003; ηp2=0.13), a main effect of app design with regard to the diagnosis report interface (F64,1=8.00; P=.006; ηp2=0.11), and a main effect of app design with regard to the exposure status interface (F64,1=4.31; P=.04; ηp2=0.06). Regarding the diagnosis report interface, the mean perceived persuasiveness was significantly higher for the persuasive design than for the control design. However, the reverse was true for the exposure status interface.

Table 12. Further 1-way ANOVA for adopters’ perceived persuasiveness at each level of app design and interface (small effect size: ηp2=0.01; medium effect size: ηp2=0.06; large effect size: ηp2=0.14) [76].

| | No-exposure status | Exposure status | Diagnosis report | Interface effect |
| --- | --- | --- | --- | --- |
| Control design | 5.87 | 6.12 | 5.03 | F99,2=6.33; P=.002; ηp2=0.13 |
| Persuasive design | 5.36 | 5.70 | 6.00 | F90,2=0.98; P=.38 |
| App design effect | F61,1=1.05; P=.31 | F64,1=4.3; P=.04; ηp2=0.06 | F64,1=8.00; P=.006; ηp2=0.11 | N/A^a |

^aN/A: not applicable.

Nonadopter Group

Table 13 shows a 2-way ANOVA based on app design and interface for the nonadopter group. The results showed a main effect of app design (F342,1=5.62; P=.02; ηp2=0.02) with a small effect size; the persuasive design (mean 5.01, SD 1.54) had a significantly higher mean value of perceived persuasiveness than the control design (mean 4.72, SD 1.25).

Table 13. Two-way ANOVA based on app design and interface for the nonadopter group (small effect size: ηp2=0.01; medium effect size: ηp2=0.06; large effect size: ηp2=0.14) [76].

| | No-exposure status | Exposure status | Diagnosis report | Overall |
| --- | --- | --- | --- | --- |
| Control design | 4.56 | 4.82 | 4.77 | 4.72 |
| Persuasive design | 5.37 | 5.05 | 4.61 | 5.01 |
| App design effect | N/A^a | N/A | N/A | F342,1=5.62; P=.02; ηp2=0.02 |

^aN/A: not applicable.

Nonadopters’ Willingness to Download the COVID Alert App

This section addresses the fourth set of hypotheses (H4). Figure 9 shows the percentages of nonadopters in each of the 6 groups who were willing to download the COVID Alert app from the Apple or Google store after completing the survey. The question they responded to was “Now that I know about the COVID Alert app as the Government of Canada’s official exposure notification app, I will download it from the Apple/Google store to slow down the spread of the coronavirus.” This question was targeted only at nonadopters in the survey. Overall, the percentage of nonadopters willing to download the app from the app store was higher for the persuasive design (37/64, 58%) than for the control design (24/52, 46%).

For the no-exposure status interface, the percentage of yes responses was higher for P1 (13/21, 62%) than for C1 (3/17, 18%). Similarly, for the diagnosis report interface, the percentage of yes responses was higher for P3 (12/17, 71%) than for C3 (7/16, 44%). However, for the exposure status interface, the percentage of yes responses was higher for C2 (14/19, 74%) than for P2 (12/26, 46%). To investigate the statistically significant differences between the pairs of interface designs (C1 vs P1, C2 vs P2, and C3 vs P3), we carried out a chi-square test, as shown in Table 14. Overall, the test showed a significant difference between at least one of the pairs (χ2=88.01, df=5; P<.001). Next, for the 6 user interfaces, we carried out post hoc pairwise chi-square tests using the pairwiseNominalIndependence function from the rcompanion package in R and the Benjamini-Hochberg false discovery rate method of correction for multiple comparison errors [77]. The tests showed that the persuasive and control designs for each of the 3 pairs of interfaces were significantly different (P<.001). We also computed the effect size (φ) based on a 2×2 contingency table for each type of interface, as shown in Table 14. We used the chisq_to_phi function from the effectsize package [78] to compute the size of the effect of persuasive design on each interface. The computation showed that the effect size of persuasive design for the 3 interfaces is large (φ≥0.50), with that of the no-exposure status interface being the highest (φ=1.01).
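A hedged sketch of these post hoc tests, using the cited rcompanion and effectsize packages, is shown below; the count matrix is rebuilt from the yes/no frequencies reported above and in Figure 9, so it is illustrative rather than the authors’ script.

```r
library(rcompanion)   # pairwiseNominalIndependence [77]
library(effectsize)   # chisq_to_phi [78]

# Yes/no counts per interface version, reconstructed from the Results.
counts <- matrix(c( 3, 14,   # C1: 3/17 yes
                   13,  8,   # P1: 13/21 yes
                   14,  5,   # C2: 14/19 yes
                   12, 14,   # P2: 12/26 yes
                    7,  9,   # C3: 7/16 yes
                   12,  5),  # P3: 12/17 yes
                 nrow = 6, byrow = TRUE,
                 dimnames = list(c("C1", "P1", "C2", "P2", "C3", "P3"),
                                 c("yes", "no")))

chisq.test(counts)  # overall 6x2 test: chi-square with df=5

# Post hoc pairwise chi-square tests with Benjamini-Hochberg (fdr)
# correction; this compares all row pairs, of which C1 vs P1, C2 vs P2,
# and C3 vs P3 are the comparisons of interest.
pairwiseNominalIndependence(counts, fisher = FALSE, gtest = FALSE,
                            chisq = TRUE, method = "fdr")

# Effect size (phi) for one 2x2 comparison, eg, C1 vs P1.
x2 <- chisq.test(counts[c("C1", "P1"), ])$statistic
chisq_to_phi(x2, n = sum(counts[c("C1", "P1"), ]), nrow = 2, ncol = 2)
```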

Notably, C2 accrued more yes responses (14/19, 74%) than P2 (12/26, 46%); coupled with the nonsignificant difference between the perceived persuasiveness of the two interfaces (P=.53, Table 6), this indicates that nonadopters prefer the control design of the exposure status interface to the persuasive design. Altogether, P1, C2, and P3 are preferred over C1, P2, and C3. Figure 10 shows the overall percentage of yes responses for each set of interfaces, with the former (39/57, 68%) exceeding the latter (22/59, 37%) by >30 percentage points.

Figure 9. Percentages of nonadopters willing to download the COVID Alert app. Horizontal bar represents the overall percentage of nonadopters in each app design who were willing to download the app. C: control design; P: persuasive design.
Table 14. Chi-square and pairwise comparison tests for nonadopters willing to download the COVID Alert app, based on the Benjamini–Hochberg false discovery rate method of correction for multiple comparison errors (small effect size: φ=0.1; medium effect size: φ=0.3; large effect size: φ=0.5) [68,78,79]. Omnibus test across the 6 interface versions: χ2=88.01, df=5; P<.001.

                      No-exposure status interface   Exposure status interface   Diagnosis report interface
                      C1a       P1b                  C2        P2                C3        P3
Yes (%)               17.65     61.90                73.68     46.15             43.75     70.59
No (%)                82.35     38.10                26.32     53.85             56.25     29.41
Difference (%)        -64.70    +23.80               +47.36    -7.70             -12.50    +41.18
Pairwise comparison   P<.001; φ=1.01                 P<.001; φ=0.57              P<.001; φ=0.64

aC: control design.

bP: persuasive design.

Figure 10. Percentages of nonadopters willing to download the COVID Alert app, with C2 and P2 switched to realize the preferred set of interfaces on the right. Horizontal bar represents the overall percentage of nonadopters in each app design who were willing to download the app. C: control design; P: persuasive design.

Principal Findings

In this section, we discuss our findings in the context of our hypotheses. For ease of reference, we summarize the key findings in Table 15. Overall, 83% (10/12) of the hypotheses were fully or partially supported by the empirical data and analysis. By partial support, we mean that the hypothesis in question is supported only with regard to one of the adoption groups (adopters or nonadopters) or app designs (persuasive or control). First, the study reveals that adopters found the COVID Alert app, regardless of app design and use case, more persuasive than nonadopters did (H3a, H3b, and H3c). Second, it reveals that the persuasive design is more likely than the control design to motivate nonadopters to adopt exposure notification apps (H2a, H4a, and H4c) and adopters to report their COVID-19 diagnoses (H2c). In other words, our findings suggest that contact tracing apps are more likely to be effective if they are designed as persuasive technologies, particularly by incorporating self-monitoring, which helps users track the number of daily contacts and the duration of exposure, and social learning, which motivates users to report their COVID-19 diagnosis through social pressure.

Table 15. Summary of the validation of hypotheses.

Hypothesis number | Hypothesis | Remark
H1a | The higher the perceived persuasiveness of the no-exposure status interface in the app store, the more likely users will download the COVID Alert app. | Supported
H1b | The higher the perceived persuasiveness of the exposure status interface in the app store, the more likely users will download the COVID Alert app. | Supported
H1c | The higher the perceived persuasiveness of the diagnosis report interface in the app store, the more likely users will download the COVID Alert app. | Supported
H2a | The perceived persuasiveness of the persuasive design of the no-exposure status interface will be higher than that of the control design. | Supported among nonadopters only
H2b | The perceived persuasiveness of the persuasive design of the exposure status interface will be higher than that of the control design. | Not supported
H2c | The perceived persuasiveness of the persuasive design of the diagnosis report interface will be higher than that of the control design. | Supported among adopters only
H3a | Adopters are more likely to perceive the no-exposure status interface to be persuasive than nonadopters. | Supported overall, particularly regarding the control design
H3b | Adopters are more likely to perceive the exposure status interface to be persuasive than nonadopters. | Supported overall
H3c | Adopters are more likely to perceive the diagnosis report interface to be persuasive than nonadopters. | Supported overall, particularly regarding the persuasive design
H4a | Nonadopters who viewed the persuasive design of the no-exposure status interface are more likely to adopt the COVID Alert app than those who viewed the control design. | Supported
H4b | Nonadopters who viewed the persuasive design of the exposure status interface are more likely to adopt the COVID Alert app than those who viewed the control design. | Not supported: the reverse was the case
H4c | Nonadopters who viewed the persuasive design of the diagnosis report interface are more likely to adopt the COVID Alert app than those who viewed the control design. | Supported

Relationship Between Perceived Persuasiveness and Willingness to Download the COVID Alert App

Our path models supported the first 3 hypotheses. Regarding each user interface, we found that the relationship between perceived persuasiveness and willingness to download the app is significant. The relationship was strongest for the no-exposure status interface (β=.68; P<.001), followed by the exposure status interface (β=.67; P<.001) and the diagnosis report interface (β=.47; P=.04). On the basis of the multigroup analysis, there was no statistically significant difference between each pair of path coefficients. Hence, the first set of hypotheses, the higher the perceived persuasiveness of each interface, the more likely users will download the COVID Alert app (H1a, H1b, and H1c), is supported. This finding is consistent with the finding by Oyibo and Vassileva [57] in the physical activity domain. The authors found that the higher users perceive a fitness app to be persuasive, the higher their intention to use the app to motivate behavior change.

Moreover, the 3 models have an acceptably large goodness of fit (GOF), which indicates how well each model fits the data. The GOF for the no-exposure and exposure status interfaces was >60%, and that of the diagnosis report interface was 38%. As stated by Hussain et al [80], a GOF of 36% is regarded as large. Moreover, perceived persuasiveness in the models for the no-exposure and exposure status interfaces explains at least 40% of the variance in respondents' willingness to download the app. However, in the model for the diagnosis report interface, only 20% of the variance in the target construct was explained by perceived persuasiveness. Explaining more than 60% of the variance of the target construct is regarded as high, and explaining <30% is regarded as low [74]. Therefore, the variance in willingness to download the app explained for the no-exposure and exposure status interfaces is medium, and that for the diagnosis report interface is small. These findings, which correlate with the magnitude and significance of the relationships between perceived persuasiveness and willingness to download the app (Figure 5), indicate that self-monitoring, which the no-exposure and exposure status interfaces support, is more likely to motivate nonadopters to download the app than the diagnosis reporting feature of the app. This finding may not be surprising given that notification of COVID-19 exposure and monitoring of exposure levels tend to benefit the user personally, whereas diagnosis reporting tends to benefit the community. This plausible explanation is reflected in the mean ratings of the perceived persuasiveness of the interfaces by the 2 groups. For the nonadopters, the overall perceived persuasiveness of the user interfaces (Figures 5-7) is numerically higher for the no-exposure status interface (mean 5.01, SD 1.54) and the exposure status interface (mean 4.96, SD 1.43) than for the diagnosis report interface (mean 4.69, SD 1.54). Similarly, for the adopters, the perceived persuasiveness of the control interfaces (Table 10) was significantly higher for the no-exposure status interface (mean 5.87, SD 1.20) and the exposure status interface (mean 6.12, SD 1.01) than for the diagnosis report interface (mean 5.03, SD 1.22).
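
As an illustration of how such models can be estimated, the following R sketch specifies a simple PLS path model linking perceived persuasiveness to willingness to download, using the plspm package documented by Sanchez [74]. The data frame survey and its item names are hypothetical placeholders; this is a sketch of the general approach under those assumptions, not the authors' original model specification.

    library(plspm)

    # Inner (structural) model: persuasiveness -> willingness
    PERS <- c(0, 0)
    WILL <- c(1, 0)
    path_matrix <- rbind(PERS, WILL)

    # Outer (measurement) model: hypothetical survey columns for each construct
    blocks <- list(c("pers1", "pers2", "pers3"),  # perceived persuasiveness items
                   c("willing"))                  # willingness-to-download item
    modes  <- c("A", "A")                         # reflective indicators

    fit <- plspm(survey, path_matrix, blocks, modes = modes,
                 boot.val = TRUE, br = 500)       # bootstrap the path estimate

    fit$path_coefs   # path coefficient (beta) between the constructs
    fit$inner_model  # significance test for the structural path
    fit$gof          # goodness-of-fit (GOF) index discussed above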

App Design Effect on Perceived Persuasiveness

In this section, we discuss the effect of app design (persuasive vs control) on the perceived persuasiveness of each of the 3 user interfaces.

No-Exposure Status Interface

Regarding the perceived persuasiveness of the no-exposure status interface, we found an interaction between app design and adoption status (Table 4). Among the adopters, the perceived persuasiveness of the control design and that of the persuasive design did not differ significantly (P=.31, Table 5). However, among nonadopters, the perceived persuasiveness of the persuasive design (mean 5.37, SD 1.30) was significantly higher than that of the control design (mean 4.57, SD 1.19). The effect size of the mean difference between the 2 app designs was medium (ηp2=0.10). Therefore, the fourth hypothesis (H2a), the perceived persuasiveness of the persuasive design of the no-exposure status interface will be higher than that of the control design, is validated for nonadopters. This finding is an indication that although the app design does not matter among adopters, it does matter among nonadopters. This implies that nonadopters are more likely to adopt the persuasive version of the no-exposure status interface (with self-monitoring features) than the control version (without self-monitoring features).

It is noteworthy that, among nonadopters, although demographic variables may confound the validation of H2a, gender is less likely to do so. This is because the gender-based distributions of the nonadopter group that evaluated the control design (C1) and the nonadopter group that evaluated the persuasive design (P1) were very similar. As shown in Multimedia Appendix 2, a total of 75% (12/16) of the C1 nonadopter group were men, and 25% (4/16) were women. Similarly, 71% (15/21) of the P1 nonadopter group were men, and 29% (6/21) were women. However, the percentage distributions based on age and education for the C1 and P1 nonadopters were different. For example, 24% (5/21) of the P1 nonadopter group were aged <25 years, whereas 0% (0/15) of the C1 nonadopter group were aged <25 years. Moreover, in the P1 nonadopter group, 25% (5/20) had a high school qualification, compared with only 6% (1/17) in the C1 nonadopter group. One plausible explanation for the higher percentage of participants with lower education in the P1 nonadopter group than in the C1 nonadopter group is that the former group had a higher percentage of younger participants aged <25 years. Hence, in future analyses, we hope to investigate the effect of age and education on the significant difference between the P1 and C1 nonadopter groups, which may partly account for the perception of P1 as more persuasive than C1.

Exposure Status Interface

Regarding the perceived persuasiveness of the exposure status interface, we did not find an effect of app design on perceived persuasiveness (Table 6). Hence, the fifth hypothesis (H2b), the perceived persuasiveness of the persuasive design of the exposure status interface will be higher than that of the control design, was not validated. One plausible reason why the persuasive design was not perceived as more persuasive than the control design by either adopters or nonadopters is that the information displayed on the exposure status interface is historical. In other words, the displayed information is the total sum of exposure levels over a 14-day period. This cumulative information is less transparent than that of the no-exposure status interface, where the displayed exposure level is for each day. Hence, the persuasive version of the no-exposure status interface, which displays daily exposure levels, was perceived as more persuasive than the control version by the nonadopter group, as shown in Table 5.

Diagnosis Report Interface

Regarding the perceived persuasiveness of the diagnosis report interface, we found an interaction between app design and adoption status (Table 7). Among nonadopters, the perceived persuasiveness of the persuasive design and that of the control design did not differ significantly (P=.99, Table 8). However, among adopters, they differed significantly (P=.006). Specifically, adopters perceived the persuasive design (mean 6.00, SD 0.97) to be more persuasive than the control design (mean 5.03, SD 1.22). The effect size of the mean difference between the 2 app designs was medium (ηp2=0.11). Therefore, the sixth hypothesis (H2c), the perceived persuasiveness of the persuasive design of the diagnosis report interface will be higher than that of the control design, is validated for adopters. A plausible explanation for this finding is that having used the control design of the COVID Alert app, the adopters are likely to find the persuasive design, which incorporates social learning, more persuasive. The additional message puts the user under social pressure to follow suit, ie, join other concerned individuals who have reported their diagnosis so that exposed contacts can be notified and take the necessary safety measures to reduce the spread of the virus. The feeling of social pressure to report their COVID-19 diagnosis, fostered by the persuasive design, can be likened to the obligation and social pressure that the adopters must have felt upon the clarion call from the government and public health authorities for mass adoption to flatten the curve. However, for the nonadopters, the socially pressuring message in the persuasive design makes no significant difference compared with the control design (P=.99). One plausible explanation for the nonsignificant difference between both app designs among the nonadopter group is that, compared with adopters, they are less responsive to socially oriented messages, be it from the government, public health authorities, or the app. Hence, we see that the adopters in real life adopted COVID Alert owing to the clarion call from the government and public health authorities, whereas the nonadopters did not.

It is noteworthy that, among adopters, although demographic variables may confound the validation of H2c, gender and education are less likely to do so. This is because the percentage distributions of the adopter group that evaluated the persuasive design (P3) and the adopter group that evaluated the control design (C3), based on gender and education, are similar (Multimedia Appendix 2). For example, regarding gender, 67% (6/9) of the adopter participants who evaluated C3 were men, and 33% (3/9) were women. The same percentage distribution applies to the adopter participants who evaluated P3: 67% (8/12) were men, and 33% (4/12) were women. Similarly, regarding education, 23% (3/13) of the C3 adopter participants vs 22% (2/9) of the P3 adopter participants had a high school qualification, 62% (8/13) vs 56% (5/9) had a bachelor's degree, and 15% (2/13) vs 22% (2/9) had a master's degree. However, the percentage distributions based on age and smartphone use experience for the C3 and P3 adopter groups were different. For example, 100% (13/13) of the participants in the C3 adopter group were aged <45 years, compared with 78% (7/9) in the P3 adopter group. Moreover, 85% (11/13) of the C3 adopter group had >5 years of experience, compared with 100% (8/8) of the P3 adopter group. One plausible explanation for the higher percentage of participants with more years of smartphone use experience in the P3 adopter group than in the C3 adopter group is that the former group had a higher percentage of older participants. Hence, in future analyses, we hope to uncover the effect of age and smartphone use experience on the significant difference between the P3 and C3 adopter groups, which may partly account for the perception of P3 as more persuasive than C3.

Adoption Effect on Perceived Persuasiveness

In this section, we discuss the effect of adoption status (adopter vs nonadopter) on the perceived persuasiveness of each of the 3 user interfaces.

No-Exposure Status Interface

Regarding the perceived persuasiveness of the no-exposure status interface, we found an interaction between adoption status and app design (Table 4). Regarding the persuasive design (Table 5), there was no significant difference between adopters and nonadopters (P=.99). However, regarding the control design, there was an adoption status effect, with adopters (mean 5.87, SD 1.20) perceiving the user interface to be more persuasive than nonadopters (mean 4.57, SD 1.19). The effect size of the mean difference between the adoption statuses was large (ηp2=0.21). Therefore, the seventh hypothesis (H3a), adopters are more likely to perceive the no-exposure status interface to be persuasive than nonadopters, is validated for the control design. A plausible explanation for this finding is that, overall, the COVID Alert adopters are more concerned with the social benefit of using contact tracing apps to curb the spread of the coronavirus than nonadopters. This explains why they were among the early adopters of the app compared with the nonadopters. Hence, it stands to reason that the adopters are more likely to perceive the COVID Alert app that they are currently using to be persuasive than the nonadopters, who are yet to adopt the app.

It is noteworthy that demographic variables such as gender and smartphone use experience may confound the validation of H3a. The reason is that the distributions of the adopter and nonadopter groups that evaluated the control design (C1) differ in one way or another on 3 demographic factors. As shown in Multimedia Appendix 2, a total of 40% (4/10) of the C1 adopter participants were men, compared with 75% (12/16) of the C1 nonadopter group. Moreover, based on smartphone use experience, we had a higher percentage of participants with lower and higher experience in the C1 nonadopter group than in the C1 adopter group. As shown in Multimedia Appendix 2, a total of 18% (3/17) of the C1 nonadopter group had <6 years of experience and 12% (2/17) had >20 years of experience, compared with 0% (0/10) at both experience levels in the C1 adopter group. Hence, in future analyses, we hope to investigate the effect of gender and smartphone use experience on the significant difference between the C1 adopter and nonadopter groups, which may partly account for the former group perceiving C1 as more persuasive than the latter group did.

Exposure Status Interface

Regarding the exposure status interface, our ANOVA showed that adoption status had a main effect (Table 6), with adopters perceiving the interface to be more persuasive (mean 5.91, SD 1.01) than nonadopters (mean 4.96, SD 1.43). The effect size of the mean difference between the adoption statuses was medium (ηp2=0.09). Hence, the eighth hypothesis (H3b), adopters are more likely to perceive the exposure status interface to be persuasive than nonadopters, is validated regardless of the app design. A plausible explanation for this finding is that, compared with the nonadopters, the adopters are more likely to be committed to the social cause of curbing the spread of the coronavirus and thus are more likely to be persuaded to use the COVID Alert app. This explains why they installed the COVID Alert app in the first place and were using it to track their exposure status (at the time of the study).

Diagnosis Report Interface

Regarding the diagnosis report interface (Table 7), we found an interaction between app design and adoption status regarding the perceived persuasiveness of the interface. Regarding the control design (Table 8), there was no significant difference between adopters and nonadopters (P=.46). However, regarding the persuasive design, there was an adoption effect, with adopters (mean 6.00, SD 0.97) perceiving the user interface to be more persuasive than nonadopters (mean 4.61, SD 1.84). The effect size of the mean difference between the 2 groups was close to large (ηp2=0.13). Therefore, the ninth hypothesis (H3c), adopters are more likely to perceive the diagnosis report interface to be persuasive than nonadopters, is validated with regard to the persuasive design. A plausible explanation for this finding is that adopters, overall, are more motivated and concerned about the social obligation to curb the spread of the coronavirus using contact tracing apps than nonadopters, as discussed earlier in the Diagnosis Report Interface subsection. In fact, not only did adopters find the persuasive design significantly more persuasive (mean 6.00, SD 0.97) than nonadopters did (mean 4.61, SD 1.84), they also found it more persuasive than the control design (mean 5.03, SD 1.22). However, this was not the case for nonadopters, who did not perceive the persuasiveness of the persuasive design (mean 4.61, SD 1.84) as significantly different from that of the control design (mean 4.77, SD 1.21).

It is noteworthy that, apart from adoption status, demographic variables such as gender, age, education, and smartphone use experience may partly account for the significant difference between the adopter group and the nonadopter group that evaluated P3 (H3c). For example, as shown in Multimedia Appendix 2, two-thirds of the P3 adopter group were men (6/9, 67%), whereas about one-third of the P3 nonadopter group were men (6/17, 35%). Moreover, 41% (7/17) of the P3 nonadopter group had 1 to 5 years of smartphone use experience, whereas 100% (8/8) of the participants in the P3 adopter group had >5 years of experience. Hence, in future analyses, we hope to investigate the effect of gender, smartphone use experience, and other demographic factors on the significant difference between the P3 adopter and nonadopter groups; these factors may partly account for the adopter group perceiving P3 as more persuasive than the nonadopter group did. Research questions to be addressed include the following: (1) are people more likely to perceive the persuasive interfaces (eg, P3) as persuasive with increasing smartphone use experience (as the percentage distribution in Multimedia Appendix 2 seems to suggest)? and (2) are males more likely than females to perceive the persuasive interfaces (eg, P3) as persuasive (as the percentage distribution in Multimedia Appendix 2 seems to suggest)?

Adoption Effect on Willingness to Download the COVID Alert App

Among the nonadopters, the chi-square tests regarding willingness to download the COVID Alert app showed that there was an effect of user interface. This led us to carry out post hoc pairwise comparisons to uncover the effect of app design. Regarding the no-exposure status interface, the pairwise comparison shows that the size of the effect of the persuasive design is large (Table 14). This indicates that the group that viewed the persuasive design (13/21, 62%) was more willing to download the app than the group that viewed the control design (3/17, 18%). Hence, the 10th hypothesis (H4a), nonadopters who viewed the persuasive design of the no-exposure status interface are more likely to adopt the COVID Alert app than those who viewed the control design, is validated. This finding was replicated with regard to the diagnosis report interface: those who viewed the persuasive design (12/17, 71%) were more willing to download the app than those who viewed the control design (7/16, 44%). Thus, the 12th hypothesis (H4c), nonadopters who viewed the persuasive design of the diagnosis report interface are more likely to adopt the COVID Alert app than those who viewed the control design, is validated. The validation of H4a and H4c corroborates the findings in Table 13: among the nonadopter group, the overall perceived persuasiveness of the persuasive designs (mean 5.01, SD 1.54) is significantly higher than that of the control designs (mean 4.72, SD 1.25).

However, although the effect size tests for P1 and P3 showed that the persuasive designs were more likely than the control designs (C1 and C3) to lead participants to download the app, the reverse was true for C2 and P2. The effect size test for the exposure status interface indicated that the 11th hypothesis (H4b), nonadopters who viewed the persuasive design of the exposure status interface (P2) are more likely to adopt the COVID Alert app than those who viewed the control design (C2), was not validated. Specifically, only 46% (12/26) of those who viewed the persuasive design were willing to download the app, compared with 74% (14/19) of those who viewed the control design. This finding is counterintuitive, given that the nonadopters who viewed the other 2 persuasive designs (P1 and P3) were more willing to download the app than those who viewed the control designs (C1 and C3). However, it aligns with the finding that, among adopters (Table 12), the perceived persuasiveness of the control exposure status interface (mean 6.12, SD 1.01) is significantly higher than that of its persuasive version (mean 5.70, SD 1.02). One plausible explanation for this counterintuitive finding is that the app keeps a record of the user's total number of contacts and exposure minutes within the last 14 days (Figure 2), which, in the context of privacy, users may not like. The historical record displayed by the app may be perceived as individual surveillance [81]. Second, it has the potential to reveal the individual from whom the user contracted the virus if the total number of contacts over the 14-day rolling period was small. This may partly explain the poor performance of the persuasive version of the exposure status interface among adopters and nonadopters. Another plausible explanation for the counterintuitive finding is the relatively high hypothetical statistics presented in the P2 interface, which may be far from reality. In other words, viewing a relatively high number of contacts and exposure minutes within the last 14 days (75 persons and 212 minutes) might have made some of the participants feel uncomfortable and even doubtful. The reason for this assertion is that one would have expected the percentage of the P2 group willing to download the app to be much higher given that (1) they could view the cumulative sum of their contacts and exposure minutes, which is an added value, and (2) the P1 and P3 groups, who viewed the persuasive designs, were more willing to download the app than the C1 and C3 groups, respectively, who viewed the control designs. In other words, the hypothetical numbers might have been significantly higher than what the P2 group expected in a real-life setting, for example, based on their actual social distancing behavior, such as staying at and working from home. This might have caused cognitive dissonance, making the P2 group doubt the accuracy of the app, which might have negatively affected their willingness to download it. In future work, we will investigate how the number of contacts and exposure time displayed in the exposure status interface influence its perceived persuasiveness and participants' willingness to download the app.

Moreover, in future work, we will investigate the possible effects of demographic factors such as gender, age, education, and smartphone use experience on the willingness to download the app. This might help explain why the group that viewed the control design of the exposure status interface was more willing to download the app than the group that viewed the persuasive design. However, on merely inspecting the percentage demographic distribution for the C2 and P2 nonadopter groups based on all 4 demographic factors, there seems to be little to no difference between the 2 groups (Multimedia Appendix 2). For example, regarding gender, 53% (10/19) of the C2 nonadopter group compared with 62% (16/26) of the P2 nonadopter group were men. Second, regarding education, 16% (3/19) of the C2 nonadopters vs 23% (6/26) of the P2 nonadopters had a high school qualification, 68% (13/19) vs 54% (14/26) had a bachelor's degree, and 11% (2/19) vs 19% (5/26) had a master's degree. The demographic similarities between the groups led us to the question "Apart from demographic variables, what else could possibly account for the difference between the C2 and P2 nonadopter groups in terms of their willingness to download the COVID Alert app?" The analysis of the qualitative data collected in this study and, in future work, the investigation of the effect of the total exposure levels displayed on the exposure status interface can help answer this research question and provide more insight.

Summary of Main Findings

We have shown that exposure notification apps can be designed as persuasive technologies to make them more effective in motivating behavior change. Our results revealed that exposure notification apps are more likely to be adopted and effective if they incorporate persuasive features such as self-monitoring and social learning. Our key findings can be summarized as follows:

  1. Nonadopters find the persuasive design of the no-exposure interface of an exposure notification app to be more persuasive than the control design.
  2. Nonadopters are more willing to download an exposure notification app with a persuasive design of the no-exposure status and diagnosis report interfaces than one with a control design.
  3. Nonadopters are more willing to download an exposure notification app with a control design for the exposure status interface than one with a persuasive design.
  4. Adopters are more likely to be motivated to report their COVID-19 diagnosis by the persuasive design of the diagnosis report interface than by the control design.
  5. Adopters perceive the control design of the no-exposure and exposure status interfaces as more persuasive than the control design of the diagnosis report interface.
  6. Adopters find an exposure notification app more persuasive than nonadopters.
  7. Equipping only the no-exposure status and diagnosis report interfaces with self-monitoring and social learning, respectively, can increase adoption among nonadopters by >30 percentage points.

Recommendations and Future Work

On the basis of the overall findings in Figure 9, 58% (37/64) of the nonadopters who viewed the persuasive designs were willing to download the app from the app stores, compared with 46% (24/52) of those who viewed the control designs. In other words, the incorporation of persuasive features into the COVID Alert app increased the percentage of nonadopters willing to download it by >10 percentage points. More importantly, incorporating persuasive features into only the no-exposure status and diagnosis report interfaces has the potential to increase adoption by >30 percentage points: the exposure status interface aside, approximately two-thirds (25/38, 66%) of the nonadopters who viewed the persuasive designs were willing to download the app, compared with approximately one-third (10/33, 30%) of those who viewed the control designs. This finding, together with the validation of most of the hypotheses, indicates that, overall, the persuasive design of an exposure notification app is more likely to be adopted and effective than the control design. Hence, we recommend that exposure notification app sponsors work toward incorporating persuasive features such as self-monitoring and social learning into future iterations to increase adoption, improve user experience, and make the apps more effective in curbing the spread of COVID-19. However, because of privacy concerns (the possibility of knowing the person from whom the user contracted the virus), displaying the total number of contacts within the last 14 days of exposure may not be advisable for the exposure status interface. This recommendation should be investigated further in future studies. Moreover, the potential effectiveness of the other persuasive features identified in our conceptual paper (tailoring, personalization, expertise, trustworthiness, authority, praise, reward, etc) [18] should be investigated as well; for example, how would praising or rewarding users for uploading their one-time COVID-19 diagnosis key influence their continued use of the app or their intention to report a future diagnosis if they test positive again?

Contributions

This study is the first to conduct research of this nature (designing contact tracing apps as persuasive technologies) using an actual exposure notification app in use by Canadian residents (the COVID Alert app) as proof of concept. In this study, we made several contributions to knowledge regarding the persuasive design of exposure notification apps to make them more effective in curbing the spread of COVID-19. We identified and presented 3 key user interfaces (no-exposure status, exposure status, and diagnosis report). Researchers can adopt these interfaces as a basis for future research on exposure notification apps, not only for the current COVID-19 pandemic but also for future epidemics and pandemics that may require exposure notification apps. Moreover, designers can work toward improving the design of exposure notification apps by incorporating persuasive features, such as self-monitoring and social learning, which we showed to be effective in the no-exposure status interface and diagnosis report interface, respectively. Finally, we showed empirically that the persuasive design of these 2 interfaces has the potential to increase adoption among nonadopters by >30 percentage points.

Limitations

This study has limitations. The first limitation is the sample size: we had an average of only 30 participants in each of the 6 groups after data cleaning. Moreover, the participants recruited on the web (ie, on the Amazon Mechanical Turk platform) may not be representative of the entire Canadian population. For example, digital literacy and willingness to download the COVID Alert app may be higher among study participants recruited on the web [82]. This limitation may affect the generalizability of the current findings to the entire Canadian population. Hence, there is a need for further research with larger, more representative samples to investigate whether the current findings generalize to the wider Canadian population. Moreover, there is a need for similar research among national populations outside Canada to examine the generalizability of the findings to other countries with similar and different cultures. For example, in the future, we hope to conduct a similar study among participants residing in the United States (which has an individualist culture similar to Canada's) and Nigeria (which has a collectivist culture different from Canada's). The second limitation of the study is the remuneration of the participants, which may have influenced their responses in some ways. The third limitation is that our findings are based on the Government of Canada's COVID Alert app, which is targeted only at the Canadian population. Hence, there is a need for further research on country-specific apps among other national populations to investigate how the current findings generalize across different countries and cultures. The fourth limitation is that, in our ANOVA, we did not investigate the main and interaction effects of important demographic variables such as gender, age, education, and smartphone use experience, although we did discuss their possible effects. The fifth limitation is that we did not investigate the entire range of persuasive strategies available in the PSD model. In addition to self-monitoring and social learning, other persuasive strategies may be instrumental in improving the persuasive design of contact tracing and exposure notification apps, with some being more likely than others to be effective in motivating certain health behaviors. Future work should address these limitations.

Conclusions

Contact tracing and exposure notification apps may continue to be useful for a long time given the endemic potential of COVID-19 [83]. In this paper, we demonstrated that the persuasive design of an exposure notification app is more likely to be effective, using Canada's COVID Alert as proof of concept. First, we showed that nonadopters, through self-monitoring, prefer to track their daily exposure levels (number of contacts and exposure time) in addition to knowing their exposure status. However, they do not favor being shown the total number of contacts and exposure time after being notified of possible exposure to the virus. This may be due to privacy concerns, which include the possibility of identifying the individual from whom one contracted the virus if the total number of contacts over the 14-day rolling period is small. Second, we showed that adopters are more likely to be motivated to report their COVID-19 diagnosis by a persuasive design that supports social learning (knowing how many others have reported their diagnosis) than by a control design. In summary, this study indicates that equipping the no-exposure status and diagnosis report interfaces of an exposure notification app with self-monitoring and social learning, respectively, can increase the percentage of nonadopters willing to download the app by >30 percentage points. In future work, we aim to investigate how demographic variables such as age, gender, and education moderate the effectiveness of persuasive features in exposure notification app design. We also look forward to investigating the relationship between perceived persuasiveness, on the one hand, and intentions to install exposure notification apps, self-isolate, and report a COVID-19 diagnosis, on the other.

Acknowledgments

This project was funded by a Natural Sciences and Engineering Research Council of Canada Discovery Grant (RGPIN-2017-05310) and the Cybersecurity and Privacy Institute, University of Waterloo.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Administration of app interfaces to participants.

DOCX File, 29 KB

Multimedia Appendix 2

Demographics of adopters and nonadopters based on gender, age, education, and smartphone use experience.

DOCX File, 82 KB

  1. Tracking COVID-19: contact tracing in the digital age. World Health Organization. 2020 Sep 9. URL: https://www.who.int/news-room/feature-stories/detail/tracking-covid-19-contact-tracing-in-the-digital-age [accessed 2021-01-10]
  2. Braithwaite I, Callender T, Bullock M, Aldridge RW. Automated and partly automated contact tracing: a systematic review to inform the control of COVID-19. Lancet Digit Health 2020 Nov;2(11):e607-e621 [FREE Full text] [CrossRef] [Medline]
  3. COVID-19 Delta Variant: Risk Assessment and Implications for Practice. Ontario Agency for Health Protection and Promotion (Public Health Ontario). 2021 Jul 23. URL: https://www.publichealthontario.ca/-/media/documents/ncov/voc/2021/08/covid-19-delta-variant-risk-assessment-implications.pdf?sc_lang=en [accessed 2022-06-04]
  4. The Possibility of COVID-19 after Vaccination: Breakthrough Infections. Centers for Disease Control and Prevention. 2021. URL: https://www.cdc.gov/coronavirus/2019-ncov/vaccines/effectiveness/why-measure-effectiveness/breakthrough-cases.html [accessed 2021-09-04]
  5. Silberner J. Now Isn't the Time to Abandon Contact Tracing. Wired. 2021 Aug 26. URL: https://www.wired.com/story/contact-tracing-delta-variant/ [accessed 2021-09-21]
  6. Akinbi A, Forshaw M, Blinkhorn V. Contact tracing apps for the COVID-19 pandemic: a systematic literature review of challenges and future directions for neo-liberal societies. Health Inf Sci Syst 2021 Apr 13;9(1):18 [FREE Full text] [CrossRef] [Medline]
  7. Osmanlliu E, Rafie E, Bédard S, Paquette J, Gore G, Pomey MP. Considerations for the design and implementation of COVID-19 contact tracing apps: scoping review. JMIR Mhealth Uhealth 2021 Jun 09;9(6):e27102 [FREE Full text] [CrossRef] [Medline]
  8. Sadasivan S. Illustrating with diversity and inclusion for the COVID Alert app. Canada Digital Service. 2020 Nov 26. URL: https://digital.canada.ca/2020/11/26/illustrating-with-diversity-and-inclusion-for-the-covid-alert-app/ [accessed 2020-12-20]
  9. Kukuk L. Analyzing adoption of COVID-19 contact tracing apps using UTAUT. University of Twente. 2020. URL: http://essay.utwente.nl/81983/1/Kukuk_BA_EEMCS.pdf [accessed 2022-06-04]
  10. Turnbull S. COVID Alert app nears 3 million users, but only 514 positive test reports. CTV News. 2020 Sep 29. URL: https://www.ctvnews.ca/health/coronavirus/covid-alert-app-nears-3-million-users-but-only-514-positive-test-reports-1.5125256 [accessed 2022-06-04]
  11. O'Neill PH. No, coronavirus apps don't need 60% adoption to be effective. MIT Technology Review. 2020 Jun 5. URL: https://www.technologyreview.com/2020/06/05/1002775/covid-apps-effective-at-less-than-60-percent-download/ [accessed 2021-02-16]
  12. Sharma B. Evaluating the UX of the world's contact tracing apps. UX Planet. 2020 Oct 3. URL: https://uxplanet.org/evaluating-the-ux-of-the-worlds-contact-tracing-apps-77187d8c0535 [accessed 2021-11-05]
  13. Privacy review of the COVID Alert exposure notification application. Office of the Privacy Commissioner of Canada. 2020 Jul 31. URL: https://priv.gc.ca/en/privacy-topics/health-genetic-and-other-body-information/health-emergencies/rev_covid-app/ [accessed 2021-11-05]
  14. Haggart B. Canada's COVID Alert app is a case of tech-driven bad policy design. The Conversation. 2020 Aug 13. URL: https://theconversation.com/canadas-covid-alert-app-is-a-case-of-tech-driven-bad-policy-design-144448 [accessed 2021-11-05]
  15. Bahrain, Kuwait and Norway contact tracing apps among most dangerous for privacy. Amnesty International. 2020 Jun 16. URL: https://www.amnesty.org/en/latest/news/2020/06/bahrain-kuwait-norway-contact-tracing-apps-danger-for-privacy/ [accessed 2021-11-05]
  16. Li T, Cobb C, Yang JJ, Baviskar S, Agarwal Y, Li B, et al. What makes people install a COVID-19 contact-tracing app? Understanding the influence of app design and individual difference on contact-tracing app adoption intention. Pervasive Mob Comput 2021 Aug;75(C):101439. [CrossRef]
  17. Cruz MM, Oliveira RS, Beltrão AP, Lopes PH, Viterbo J, Trevisan DG, et al. Assessing the level of acceptance of a crowdsourcing solution to monitor infectious diseases propagation. In: Proceedings of the 2020 IEEE International Smart Cities Conference. 2020 Presented at: ISC2 '20; September 28-October 1, 2020; Piscataway, NJ, USA p. 1-8   URL: https://doi.org/10.1109/ISC251055.2020.9239069 [CrossRef]
  18. Oyibo K, Morita PP. Designing better exposure notification apps: the role of persuasive design. JMIR Public Health Surveill 2021 Nov 16;7(11):e28956 [FREE Full text] [CrossRef] [Medline]
  19. Sharma S, Singh G, Sharma R, Jones P, Kraus S, Dwivedi YK. Digital health innovation: exploring adoption of COVID-19 digital contact tracing apps. IEEE Trans Eng Manage (forthcoming) 2020 Sep 5:1-17 [FREE Full text] [CrossRef]
  20. Walrave M, Waeterloos C, Ponnet K. Ready or not for contact tracing? Investigating the adoption intention of COVID-19 contact-tracing technology using an extended unified theory of acceptance and use of technology model. Cyberpsychol Behav Soc Netw 2021 Jun;24(6):377-383. [CrossRef] [Medline]
  21. Altmann S, Milsom L, Zillessen H, Blasone R, Gerdon F, Bach R, et al. Acceptability of app-based contact tracing for COVID-19: cross-country survey study. JMIR Mhealth Uhealth 2020 Aug 28;8(8):e19857 [FREE Full text] [CrossRef] [Medline]
  22. Oinas-Kukkonen H, Harjumaa M. Persuasive systems design: key issues, process model, and system features. Commun Assoc Inf Syst 2009;24:485-500 [FREE Full text] [CrossRef]
  23. Oyibo K. EMVE-DeCK: a theory-based framework for designing and tailoring persuasive technology. In: Adjunct Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization. 2021 Presented at: UMAP '21; June 21-25, 2021; Utrecht, The Netherlands p. 257-267   URL: https://doi.org/10.1145/3450614.3464617 [CrossRef]
  24. Wiafe I. A unified framework for analysing, designing and evaluating persuasive technologies. University of Reading. 2012 Sep. URL: https://www.academia.edu/10613331/A_Framework_for_Analysing_Designing_and_Evaluating_Persuasive_Technologies [accessed 2022-06-04]
  25. Oyibo K, Yasunaga T, Morita PP. Designing exposure notification applications as persuasive technologies to improve uptake and effectiveness. In: Proceedings of the International Symposium on Human Factors and Ergonomics in Health Care. 2021 Presented at: HFES '21; April 12-16, 2021; Virtual   URL: https://hfeshcs2021.conference-program.com/presentation/?id=INDLEC155&sess=sess102
  26. Federally-backed Covid Alert app now available in Ontario. CTV News. 2020.   URL: https://ottawa.ctvnews.ca/federally-backed-covidalert-app-now-available-in-ontario-1.5046667 [accessed 2022-01-31]
  27. Oyibo K, Orji R, Vassileva J. Investigation of the persuasiveness of social influence in persuasive technology and the effect of age and gender. In: Proceedings of the 2nd International Workshop on Personalization in Persuasive Technology. 2017 Presented at: PPT '17; April 4, 2017; Amsterdam, The Netherlands p. 32-44. [CrossRef]
  28. Oyibo K, Orji R, Vassileva J. The influence of culture in the effect of age and gender on social influence in persuasive technology. In: Adjunct Publication of the 25th Conference on User Modeling, Adaptation and Personalization. 2017 Presented at: UMAP '17; July 9-12, 2017; Bratislava, Slovakia p. 47-52. [CrossRef]
  29. Orji R, Lomotey R, Oyibo K, Orji F, Blustein J, Shahid S. Tracking feels oppressive and 'punishy': exploring the costs and benefits of self-monitoring for health and wellness. Digit Health 2018 Sep 3;4:2055207618797554 [FREE Full text] [CrossRef] [Medline]
  30. Oyibo K, Vassileva J. Persuasive features that drive the adoption of a fitness application and the moderating effect of age and gender. Multimodal Technol Interact 2020 May 11;4(2):17. [CrossRef]
  31. Oyibo K. Investigating the key persuasive features for fitness app design and extending the persuasive system design model: a qualitative approach. Proc Int Symp Human Factors Ergon Health Care 2021 Jul 22;10(1):47-53. [CrossRef]
  32. Orji R, Oyibo K, Lomotey RK, Orji FA. Socially-driven persuasive health intervention design: competition, social comparison, and cooperation. Health Informatics J 2019 Dec;25(4):1451-1484 [FREE Full text] [CrossRef] [Medline]
  33. Hair JF, Tatham RL, Anderson RE, Black W. Multivariate Data Analysis. 5th Edition. Hoboken, NJ, USA: Prentice Hall; 1998.
  34. Oyibo K, Morita PP. COVID alert: factors influencing the adoption of exposure notification apps among Canadian residents. Front Digit Health 2022 Mar 11;4:842661 [FREE Full text] [CrossRef] [Medline]
  35. Oyibo K, Sahu KS, Oetomo A, Morita PP. Factors influencing the adoption of contact tracing applications: protocol for a systematic review. JMIR Res Protoc 2021 Jun 01;10(6):e28961 [FREE Full text] [CrossRef] [Medline]
  36. Hernández-Quevedo C, Scarpetti G, Webb E, Shuftan N, Williams GA, Birk HO, et al. Effective contact tracing and the role of apps: lessons from Europe. Eurohealth 2020;26(2):40-44 [FREE Full text]
  37. Dowthwaite L, Fischer J, Perez Vallejos E, Portillo V, Nichele E, Goulden M, et al. Public adoption of and trust in the NHS COVID-19 contact tracing app in the United Kingdom: quantitative online survey study. J Med Internet Res 2021 Sep 17;23(9):e29085 [FREE Full text] [CrossRef] [Medline]
  38. Oyibo K, Sahu K, Oetomo A, Morita PP. Factors influencing the adoption of contact tracing applications: systematic review and recommendations. Front Digit Health 2022 May 3;4:862466 [FREE Full text] [CrossRef] [Medline]
  39. What is the Definition of Online Privacy? Winston & Strawn LLP. 2020.   URL: https://www.winston.com/en/legal-glossary/online-privacy.html [accessed 2021-11-03]
  40. Wogalter MS, Mayhorn CB. Trusting the Internet: cues affecting perceived credibility. Int J Technol Human Interact 2008;4(1):75-93 [FREE Full text] [CrossRef]
  41. Nemec Zlatolas L, Welzer T, Hölbl M, Heričko M, Kamišalić A. A model of perception of privacy, trust, and self-disclosure on online social networks. Entropy (Basel) 2019 Aug 07;21(8):772 [FREE Full text] [CrossRef] [Medline]
  42. Taddei S, Contena B. Privacy, trust and control: which relationships with online self-disclosure? Comput Human Behav 2013 May;29(3):821-826 [FREE Full text] [CrossRef]
  43. Metzger MJ. Privacy, trust, and disclosure: exploring barriers to electronic commerce. J Comput Mediat Commun 2004 Jul;9(4):JCMC942. [CrossRef]
  44. Riquelme IP, Román S. Is the influence of privacy and security on online trust the same for all type of consumers? Electron Markets 2014 Jan 22;24(2):135-149 [FREE Full text] [CrossRef]
  45. Choon Ling K, Bin Daud D, Hoi Piew T, Keoy KH, Hassan P. Perceived risk, perceived technology, online trust for the online purchase intention in Malaysia. Int J Bus Manag 2011 Jun 01;6(6):167 [FREE Full text] [CrossRef]
  46. Kaspar K. Motivations for social distancing and app use as complementary measures to combat the COVID-19 pandemic: quantitative survey study. J Med Internet Res 2020 Aug 27;22(8):e21613 [FREE Full text] [CrossRef] [Medline]
  47. Velicia-Martin F, Cabrera-Sanchez JP, Gil-Cordero E, Palos-Sanchez PR. Researching COVID-19 tracing app acceptance: incorporating theory from the technological acceptance model. PeerJ Comput Sci 2021 Jan 4;7:e316 [FREE Full text] [CrossRef] [Medline]
  48. Jonker M, de Bekker-Grob E, Veldwijk J, Goossens L, Bour S, Rutten-Van Mölken M. COVID-19 contact tracing apps: predicted uptake in the Netherlands based on a discrete choice experiment. JMIR Mhealth Uhealth 2020 Oct 09;8(10):e20741 [FREE Full text] [CrossRef] [Medline]
  49. Thomas R, Michaleff ZA, Greenwood H, Abukmail E, Glasziou P. Concerns and misconceptions about the Australian government's COVIDSafe app: cross-sectional survey study. JMIR Public Health Surveill 2020 Nov 04;6(4):e23081 [FREE Full text] [CrossRef] [Medline]
  50. Timberg C, Harwell D, Safarpour A. Most Americans are not willing or able to use an app tracking coronavirus infections. That's a problem for Big Tech's plan to slow the pandemic. The Washington Post. 2020 Apr 29. URL: https://www.washingtonpost.com/technology/2020/04/29/most-americans-are-not-willing-or-able-use-an-app-tracking-coronavirus-infections-thats-problem-big-techs-plan-slow-pandemic/ [accessed 2022-06-04]
  51. van Loon MH. Self-assessment and self-reflection to measure and improve self-regulated learning in the workplace. In: McGrath S, Mulder M, Papier J, Suart R, editors. Handbook of Vocational Education and Training: Developments in the Changing World of Work. Cham, Switzerland: Springer; Jun 12, 2018:1-20.
  52. McLeod S. Albert Bandura's Social Learning Theory. SimplyPsychology. 2016.   URL: https://www.simplypsychology.org/bandura.html [accessed 2021-12-03]
  53. Oyibo K. Designing Culture-Tailored Persuasive Technology to Promote Physical Activity. University of Saskatchewan. 2020.   URL: https://harvest.usask.ca/handle/10388/12943 [accessed 2022-06-04]
  54. Baumeister RF, Gailliot M, DeWall CN, Oaten M. Self-regulation and personality: how interventions increase regulatory success, and how depletion moderates the effects of traits on behavior. J Pers 2006 Dec;74(6):1773-1801. [CrossRef] [Medline]
  55. Bandura A. Social Learning Theory. MarcR Career Professionals. URL: https://marcr.net/marcr-for-career-professionals/career-theory/career-theories-and-theorists/social-learning-theory-bandura/ [accessed 2021-12-03]
  56. Lyons SD, Berge ZL. Social Learning Theory. In: Seel NM, editor. Encyclopedia of the Sciences of Learning. Boston, MA, USA: Springer; 2012:1-6.
  57. Oyibo K, Vassileva J. HOMEX: persuasive technology acceptance model and the moderating effect of culture. Front Comput Sci 2020 Mar 25;2:10 [FREE Full text] [CrossRef]
  58. Oyibo K, Vassileva J. Relationship between perceived UX design attributes and persuasive features: a case study of fitness app. Information 2021 Sep 07;12(9):365 [FREE Full text] [CrossRef]
  59. Chayinska M, Minescu A, McGarty C. 'The More We Stand For - The More We Fight For': compatibility and legitimacy in the effects of multiple social identities. Front Psychol 2017 Apr 26;8:642 [FREE Full text] [CrossRef] [Medline]
  60. Hubert M, Blut M, Brock C, Zhang RW, Koch V, Riedl R. The influence of acceptance and adoption drivers on smart home usage. Eur J Mark 2019 Jun 10;53(6):1073-1098. [CrossRef]
  61. Taylor S, Todd PA. Understanding information technology usage: a test of competing models. Inf Syst Res 1995 Jun;6(2):144-176. [CrossRef]
  62. Introduction to persuasion. YouTube. 2018 Nov 4.   URL: https://www.youtube.com/watch?v=iufD8CeQpAo [accessed 2021-02-19]
  63. Oyibo K, Afaji I, Orji R, Olabenjo B, Vassileva J. The interplay between classical aesthetics, expressive aesthetics and persuasiveness in behavior modeling. In: Proceedings of the 32nd International BCS Human Computer Interaction Conference. 2018 Presented at: HCI '18; July 4-6, 2018; Belfast, UK p. 1-10. [CrossRef]
  64. Wang H, Lee K. Getting in the flow together: the role of social presence, perceived enjoyment and concentration on sustainable use intention of mobile social network game. Sustainability 2020 Aug 24;12(17):6853 [FREE Full text] [CrossRef]
  65. Lehto T, Oinas-Kukkonen H, Drozd F. Factors affecting perceived persuasiveness of a behavior change support system. In: Proceedings of the 33rd International Conference on Information Systems. 2012 Presented at: ICIS '12; December 16-19, 2012; Orlando, FL, USA p. 1-15.
  66. Buhrmester M, Kwang T, Gosling SD. Amazon's Mechanical Turk: a new source of inexpensive, yet high-quality, data? Perspect Psychol Sci 2011 Jan;6(1):3-5. [CrossRef] [Medline]
  67. Welcome to the Amazon Mechanical Turk Requester User Interface Guide. Amazon Web Services.   URL: https://docs.aws.amazon.com/AWSMechTurk/latest/RequesterUI/Introduction.html [accessed 2021-11-03]
  68. Kim HY. Statistical notes for clinical researchers: Chi-squared test and Fisher's exact test. Restor Dent Endod 2017 May;42(2):152-155 [FREE Full text] [CrossRef] [Medline]
  69. Brant R. Inference for Means: Comparing Two Independent Samples. University of British Columbia. 2012.   URL: http://www.stat.ubc.ca/~rollin/stats/ssize/n2.html [accessed 2021-03-17]
  70. Orji R. Persuasion and culture: individualism–collectivism and susceptibility to influence strategies. In: Proceedings of the 2016 Personalization in Persuasive Technology Workshop. 2016 Presented at: PPT '16; April 5, 2016; Salzburg, Austria.
  71. Al-Jabri IM. The perceptions of adopters and non-adopters of cloud computing: application of technology-organization-environment framework. In: Proceedings of the 14th International Conference of Electronic Business. 2014 Presented at: ICEB '14; December 8-12, 2014; Taipei, Taiwan.
  72. Emani S, Peters E, Desai S, Karson AS, Lipsitz SR, LaRocca R, et al. Who adopts a patient portal?: an application of the diffusion of innovation model. J Innov Health Inform 2018 Oct 25;25(3):149-157 [FREE Full text] [CrossRef] [Medline]
  73. Dickerson MD, Gentry JW. Characteristics of adopters and non-adopters of home computers. J Consum Res 1983 Sep;10(2):225-235 [FREE Full text] [CrossRef]
  74. Sanchez G. PLS Path Modeling with R. Berkley. 2013.   URL: https://www.gastonsanchez.com/PLS_Path_Modeling_with_R.pdf [accessed 2022-06-04]
  75. Hair Jr JF, Hult GT, Ringle CM, Sarstedt M. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM). 2nd edition. Washington, DC, USA: Sage Publications; 2016.
  76. van den Berg RG. Effect Size – A Quick Guide. SPSS Tutorials.   URL: https://www.spss-tutorials.com/effect-size/ [accessed 2021-09-29]
  77. Mangiafico SS. An R Companion for the Handbook of Biological Statistics. Version 1.3.2. R Companion. 2015.   URL: https://rcompanion.org/rcompanion/a_02.html [accessed 2022-06-04]
  78. Kelley K, Stanley D. Package ‘effectsize’. The Comprehensive R Archive Network. 2021.   URL: https://cran.r-project.org/web/packages/effectsize/effectsize.pdf [accessed 2021-09-23]
  79. Kotrlik J, Williams H, Jabor K. Reporting and interpreting effect size in quantitative agricultural education research. J Agric Educ 2011 Mar 01;52(1):132-142 [FREE Full text] [CrossRef]
  80. Hussain S, Fangwei Z, Siddiqi AF, Ali Z, Shabbir MS. Structural Equation Model for evaluating factors affecting quality of social infrastructure projects. Sustainability 2018 May 03;10(5):1415 [FREE Full text] [CrossRef]
  81. Abrahams N, Cwalina C, Evans M, Flockhart F, Gamvros A, Lennon J. Contact tracing apps in Australia: A new world for data privacy. Norton Rose Fulbright. 2021. URL: https://www.nortonrosefulbright.com/-/media/files/nrf/nrfweb/contact-tracing/australia-contact-tracing.pdf?revision=9f35a88a-4124-4c48-b38f-68e86a187050&la=en [accessed 2021-01-31]
  82. Abuhammad S, Khabour OF, Alzoubi KH. Covid-19 contact-tracing technology: acceptability and ethical issues of use. Patient Prefer Adherence 2020 Sep 18;14:1639-1647 [FREE Full text] [CrossRef] [Medline]
  83. Saint-Arnaud P. Ottawa spent nearly $20 million on COVID-19 tracking app -- with inconclusive results. CTV News. 2021 Jul 6. URL: https://www.ctvnews.ca/politics/ottawa-spent-nearly-20-million-on-covid-19-tracking-app-with-inconclusive-results-1.5497296 [accessed 2022-06-02]


GOF: goodness of fit
PSD: persuasive system design
RM-ANOVA: repeated-measure ANOVA
TAM: Technology Acceptance Model


Edited by A Mavragani; submitted 11.10.21; peer-reviewed by B Marcolin, V Mylonopoulou; comments to author 01.11.21; revised version received 08.12.21; accepted 29.04.22; published 06.09.22

Copyright

©Kiemute Oyibo, Plinio Pelegrini Morita. Originally published in JMIR Formative Research (https://formative.jmir.org), 06.09.2022.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.