
Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/38298.
Developing and Evaluating a Measure of the Willingness to Use Pandemic-Related mHealth Tools Using National Probability Samples in the United States: Quantitative Psychometric Analyses and Tests of Sociodemographic Group Differences


Authors of this article:

Wilson Vincent1

Original Paper

Department of Psychology and Neuroscience, Temple University, Philadelphia, PA, United States

Corresponding Author:

Wilson Vincent, MPH, PhD

Department of Psychology and Neuroscience

Temple University

Weiss Hall

1701 N 13th St

Philadelphia, PA, 19122

United States

Phone: 1 404 200 4193

Email: wvincent1804@gmail.com


Abstract

Background: There are no psychometrically validated measures of the willingness to engage in public health screening and prevention efforts, particularly mobile health (mHealth)–based tracking, that can be adapted to future crises post–COVID-19.

Objective: The psychometric properties of a novel measure of the willingness to participate in pandemic-related screening and tracking, including the willingness to use pandemic-related mHealth tools, were tested.

Methods: Data were from a national probability survey deployed in 3 cross-sectional stages several weeks apart to adult residents of the United States (N=6475; stage 1 n=2190, 33.82%; stage 2 n=2238, 34.56%; and stage 3 n=2047, 31.62%), drawn from the AmeriSpeak probability-based research panel covering approximately 97% of the US household population. Five items asked about the willingness to use mHealth tools for COVID-19–related screening and tracking and to provide biological specimens for COVID-19 testing.

Results: In the first, exploratory sample, 3 of 5 items loaded onto 1 underlying factor, the willingness to use pandemic-related mHealth tools, based on exploratory factor analysis (EFA). A 2-factor solution, including the 3-item factor, fit the data (root mean square error of approximation [RMSEA]=0.038, comparative fit index [CFI]=1.000, standardized root mean square residual [SRMR]=0.005), and the factor loadings for the 3 items ranged from 0.849 to 0.893. In the second, validation sample, the reliability of the 3-item measure was high (Cronbach α=.90), and 1 underlying factor for the 3 items was confirmed using confirmatory factor analysis (CFA): RMSEA=0, CFI=1.000, SRMR=0 (a saturated model); factor loadings ranged from 1.000 to 0.962. The factor was independently associated with COVID-19–preventive behaviors (eg, “worn a face mask”: r=0.313, SE=0.041, P<.001; “kept a 6-foot distance from those outside my household”: r=0.282, SE=0.050, P<.001) and the willingness to provide biological specimens for COVID-19 testing (ie, swab to cheek or nose: r=0.709, SE=0.017, P<.001; small blood draw: r=0.684, SE=0.019, P<.001). In the third, multiple-group sample, the measure was invariant, or measured the same thing in the same way (ie, difference in CFI [ΔCFI]<0.010 across all grouping categories), across age groups, gender, racial/ethnic groups, education levels, US geographic region, and population density (ie, rural, suburban, urban). When repeated across different samples, factor-analytic findings were essentially the same. Additionally, there were mean differences (ΔM) in the willingness to use mHealth tools across samples, mainly based on race or ethnicity and population density. For example, in SD units, suburban (ΔM=–0.30, SE=0.13, P=.001) and urban (ΔM=–0.42, SE=0.12, P<.001) adults showed less willingness to use mHealth tools than rural adults in the third sample collected on May 30-June 8, 2020, but no differences were detected in the first sample collected on April 20-26, 2020.

Conclusions: Findings showed that the screener is psychometrically valid. It can also be adapted to future public health crises. Racial and ethnic minority adults showed a greater willingness to use mHealth tools than White adults. Rural adults showed more mHealth willingness than suburban and urban adults. Findings have implications for public health screening and tracking and understanding digital health inequities, including lack of uptake.

JMIR Form Res 2023;7:e38298

doi:10.2196/38298




Introduction

Public health responses to COVID-19 and other pandemics (eg, SARS outbreaks, Zika virus disease, swine influenza, the 1918 influenza pandemic) require the public’s strong willingness to participate in preventive and screening efforts [1,2]. As screening advances are developed and deployed in response to outbreaks and pandemics, at-home and mobile health (mHealth) methods could ease the burden on testing infrastructure, such as supply chain issues, and limit the public’s exposure to pathogens [3,4]. At-home and mHealth strategies that were successful for HIV screening were deployed in response to COVID-19 [3,5]. However, current events and recent studies have revealed significant variability in the public’s participation in screening and preventive measures to address COVID-19, including vaccination and even mask wearing or social distancing [6,7]. Additionally, the uptake of COVID-19–related mobile apps has been relatively modest worldwide [8], and acceptance of contact-tracing apps was low in most countries [9]. We lack psychometrically validated, rigorously tested measures to help understand people’s willingness to engage in screening and prevention efforts, such as digital tracking, or to screen participants for mHealth and pandemic-related research.

A review of the peer-reviewed COVID-19 literature found that mHealth solutions are used for many different aims, including early detection, rapid screening, patient monitoring, and treatment [10]. The most commonly used modalities were mobile phone–based apps and SMS (ie, text messaging) [10]. Most mobile apps in the early months of the pandemic were focused on contact tracing, with other apps focused on symptom monitoring and educational or informational content [11,12]. These included downloadable apps in the Android Play Store and the iOS App Store [11]. Contact-tracing apps, in particular, grew rapidly within the first few months of the COVID-19 pandemic in the United States [8]. A general measure of the willingness to use mHealth tools for public health screening and tracking and information-seeking purposes can help researchers and practitioners identify the correlates, predictors, and outcomes of using such mHealth apps and screen for and evaluate mHealth interventions.

For this paper, no published examples of psychometrically validated scales for assessing the willingness to use mHealth-related tools for public health purposes were found. Studies have typically relied on 1 question or a few separate questions to assess the willingness to use such tools. For example, in an extensive survey of registered National Health Service users in the United Kingdom, participants responded “yes,” “no,” or “unsure” to 1 question asking about their willingness to participate in mHealth app–based contact tracing [13]. The use of a single question in that study suggests that any more comprehensive measure will need to remain brief.

Other studies have included several items about the intentions or willingness to use mobile apps in general and, alternatively, specific apps that the researchers were evaluating. For example, a previous study used a 3-item measure of the general intention to use medical apps [14]. The items were not specific about the nature of the medical mHealth apps, such as whether they would be used for contact tracing, symptom tracking, or provision of information about an illness [14]. Items that include examples of various uses for medical mHealth apps (eg, contact tracing, symptom tracking, or detection) could help us account for participants’ limits in using these apps; for example, a participant who would use mHealth tools for educational purposes would not necessarily want to use them for contact tracing or symptom tracking. Another study asked participants one question about how willing they would be to install an app for contact tracing and another question about how willing they would be to keep an app that was automatically downloaded to their smartphone [15]. An additional study used a 2-item measure of behavioral intentions to use digital health technologies for COVID-19 as a proxy of actual use, although the authors acknowledged that behavioral intentions and actual use are separate constructs [16]. A different study used a single-item measure of willingness or support for using a specific contact-tracing app being evaluated by the researchers, and participants’ responses were categorized as “app-supporting,” “app-willing,” and “app-reluctant” [17]. Finally, a study asked 3 questions (eg, plan to use the app, hope the app becomes available for use) so that participants could rate their intention to use 2 specific mobile apps that the researchers were evaluating, one for contact tracing and the other for symptom tracking [18]. None of these studies assessed the extent to which the items they used jointly reflected a single underlying willingness to use mHealth tools.

A critical reason a broadly applicable measure of the willingness to use mHealth tools is needed is that it can help researchers better understand the inequities in mHealth uptake and outcomes, including sociodemographic factors related to digital health inequities [19]. There are barriers to mHealth that suggest a digital divide that adversely affects some demographic groups compared to others. For example, although a large survey of registered National Health Service users in the United Kingdom found no differences by age or gender in terms of the willingness to participate in contact tracing via a mobile app [13], other studies have found different results. For instance, older age [17,18] and higher socioeconomic status (ie, lower financial deprivation) [17] have been associated with a greater willingness or intention to use specific mobile apps. Further, studies have found that older patients are less likely to use telehealth, broadly speaking, not just mHealth tools [20], and older patients living in a rural area on the United States-Mexico border reported less satisfaction with telehealth compared to younger adults living in the same area [21]. In addition, urban patients have shown greater willingness to use telehealth than rural patients [20]. However, the aforementioned rural patients in the study conducted at the United States-Mexico border showed high levels of satisfaction with telehealth and expressed a willingness to use telehealth in the future despite within-group age differences [21]. Additionally, a small study of patients with comorbid depression and diabetes who received public health care services in San Francisco found that these patients of lower socioeconomic status had high interest in using digital platforms to manage their health after the onset of the COVID-19 pandemic [22]. However, the authors also found that these patients may require additional human support to gain access to the technology and get started using it [22]. Approximately a third of these patients reported needing assistance with using their smartphone and installing apps [22].

Researchers and mHealth thought leaders have asserted that it will be helpful to provide and increase access to digital health solutions along with education regarding digital health [19,23]. Additionally, digital health equity should be centered within digital health efforts [19], including those focused on pandemics, such as COVID-19. To these ends, it may be important to measure the willingness to use such technologies to understand what educational, supportive, or motivational efforts are needed to facilitate uptake among populations adversely affected by digital health inequities.

The 2020 COVID Impact Survey (CIS) [24] was a national probability household survey conducted by the National Opinion Research Center (NORC) at the University of Chicago to estimate the impact of COVID-19 on the United States. The CIS used 5 items to assess the willingness to use mHealth tools for COVID-19–related screening and tracking (3 items) and the willingness to test for COVID-19 by providing biological specimens (2 items). Multiple studies have used these CIS items, which could easily be used or adapted for future pandemics or outbreaks; however, each item has been used individually rather than as part of a single scale, and the psychometric properties of the measure have yet to be assessed in a comprehensive manner.

Given that there are no psychometrically validated measures to screen for the willingness to participate in pandemic-related screening and tracking, including the willingness to use mHealth tools for screening and tracking, this study aims to validate such a measure in a large, national probability sample. Using data from the CIS, the following were assessed: unidimensionality versus multidimensionality (eg, whether the measure assesses a single construct representing the willingness to participate in any screening and tracking or multiple constructs), convergent and discriminant validity based on its associations with expected correlates of the willingness for screening and tracking, measurement invariance (ie, whether the measure assesses the same construct in the same way across sociodemographic or cultural groups, specifically age groups, gender, racial or ethnic groups, education level, geographic regions of the United States in which adults live, and population density of adults’ lived community [ie, rural, suburban, urban]), and mean differences in the underlying factor across demographic and cultural groups.


Methods

Data

Data from all 3 cross-sectional stages of the CIS conducted in 2020 were used: N=6475; stage 1 (April 20-26), n=2190, 33.82%; stage 2 (May 4-10), n=2238, 34.56%; and stage 3 (May 30-June 8), n=2047, 31.62%. Given the cross-sectional nature of the data collection, households were not tracked for repeated assessment across the 3 stages. The data were collected using the AmeriSpeak Panel, a probability-based panel implemented by NORC at the University of Chicago, covering approximately 97% of the US household population. The CIS sampled US households with a known, nonzero probability of selection based on the NORC National Sample Frame, which was extracted from the US Postal Service Delivery Sequence File. Households were contacted by US mail, email, telephone, and field interviewers. The data represent noninstitutionalized adults who reside in the United States when weighted using sampling weights provided by the CIS.

Fundamentally, the process of selecting households for the CIS was based on random selection within a sampling frame, with oversampling to account for expected, differential rates of survey completion or population coverage for different demographic groups (eg, younger adults, racial and ethnic minority groups) and geographical areas. The prospective households were stratified by geographic region along with age, gender, race or Hispanic ethnicity, and education level. Within these strata, households were randomly selected. The differential probabilities of selection and response based on demographic characteristics and geographic region were used to construct the sampling weights that were accounted for in these analyses. The only criterion beyond efforts to adequately represent the population based on the probability of selection and expected survey completion rates was that an adult in the household could complete the survey in English or Spanish either online or via telephone. Detailed reports of all methods of the CIS, including household selection, are available online for the stage 1 [25], stage 2 [26], and stage 3 [27] samples.

Ethical Considerations

The NORC Institutional Review Board reviewed and approved the CIS study protocol for the protection of human subjects’ rights and welfare (FWA00000142). The CIS adhered to all federal and local guidelines and regulations. All subjects who participated in the CIS data collection provided informed consent and were informed that their identities would remain confidential. The original informed consent allows for secondary data analysis, as in this study, provided that the data are deidentified, and the data are indeed deidentified. For example, the data producer, NORC at the University of Chicago, omitted the true stratum and cluster variables from these complex survey data to preserve confidentiality. The data producer found that the effects of the cluster variable were negligible and provided an appropriate pseudostratum variable to be used in place of the true variable. The Temple University Institutional Review Board determined that secondary analyses of deidentified data, such as this study, do not constitute human subject research and, thus, do not require review or approval.

Measures

The following CIS measures were used in the present analyses: (1) willingness to participate in pandemic-related screening and tracking, (2) correlates of the willingness to participate in pandemic-related screening and tracking, and (3) sociodemographic characteristics.

Willingness to Participate in Pandemic-Related Screening and Tracking

Participants responded to questions asking about their likelihood of providing biological specimens for COVID-19–related testing (ie, “testing you for COVID-19 infection using a Q-tip to swab your cheek or nose,” “testing you for immunity or resistance to COVID-19 by drawing a small amount of blood”) and of using digital screening and tracking (eg, “installing an app on your phone that asks you questions about your own symptoms and provides recommendations about COVID-19,” “installing an app on your phone that tracks your location and sends push notifications if you might have been exposed to COVID-19”; see Tables 1 and 2). Response options ranged from “1. Extremely likely” to “5. Not likely at all.” Items were reverse-coded such that higher scores reflected a greater perceived likelihood of screening and tracking. Participants had the option to respond with “88. Already done this,” and these cases were excluded from primary analyses using listwise deletion.
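Because this reverse coding is simple arithmetic on ordinal items, it can be done at analysis time. The fragment below is a minimal, illustrative sketch of how such recoding could be written in an Mplus DEFINE command; the item names (app_sympt, app_track, website, swab, blood) are hypothetical placeholders rather than the actual CIS variable names, and cases that answered “88. Already done this” are assumed to have already been set to missing.

DEFINE:
  ! Original coding: 1 = Extremely likely ... 5 = Not likely at all.
  ! Subtracting from 6 flips the scale so that higher scores indicate greater willingness.
  app_sympt = 6 - app_sympt;
  app_track = 6 - app_track;
  website = 6 - website;
  swab = 6 - swab;
  blood = 6 - blood;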

Table 1. Demographic characteristics and descriptive statistics at stage 1 (N=2190)a.

Variable | Participants, n (%) | Estimated proportion | Linearized SE | Design effect

Age (years)
  18-29 | 282 (12.87) | 0.205 | 0.016 | 3.21
  30-44 | 672 (30.68) | 0.254 | 0.013 | 1.92
  45-59 | 524 (23.93) | 0.243 | 0.013 | 2.13
  ≥60 | 712 (32.51) | 0.298 | 0.014 | 2.07

Gender
  Male | 1036 (47.31) | 0.483 | 0.016 | 2.25
  Female | 1154 (52.69) | 0.517 | 0.016 | 2.25

Race/ethnicity
  Asian/Asian American | 48 (2.19) | 0.031 | 0.006 | 2.93
  Black/African American | 267 (12.19) | 0.119 | 0.010 | 2.14
  Hispanic/Latinx | 369 (16.85) | 0.167 | 0.013 | 2.78
  White/European American | 1397 (63.79) | 0.628 | 0.016 | 2.35
  Other races and ethnicities | 109 (4.98) | 0.055 | 0.007 | 2.27

Education level
  No high school diploma | 98 (4.47) | 0.098 | 0.012 | 3.67
  High school diploma or equivalent | 405 (18.49) | 0.283 | 0.016 | 2.68
  Some college | 893 (40.78) | 0.277 | 0.012 | 1.67
  Bachelor’s degree or above | 794 (36.26) | 0.343 | 0.015 | 2.09

Geographical region of the United States
  Northeast | 323 (14.75) | 0.174 | 0.010 | 1.41
  Midwest | 547 (24.98) | 0.207 | 0.009 | 1.10
  South | 770 (35.16) | 0.380 | 0.012 | 1.25
  West | 550 (25.11) | 0.238 | 0.010 | 1.15

Population density of one’s lived community
  Rural | 155 (7.08) | 0.095 | 0.009 | 2.21
  Suburban | 406 (18.54) | 0.201 | 0.013 | 2.14
  Urban | 1624 (74.16) | 0.703 | 0.014 | 2.14

aPercentages may not sum up to 100 due to rounding. Subcategories may not sum up to 2190 due to missing data.

Table 2. Descriptive statistics at stage 1 (N=2190)a.

Variable | Participants, n (%) | Estimated proportion | Linearized SE | Design effect

Which of the following measures, if any, are you taking in response to the coronavirus? (Answered yes)
  Canceled a doctor appointment | 739 (33.74) | 0.324 | 0.015 | 2.20
  Worn a face mask | 1713 (78.22) | 0.775 | 0.014 | 2.37
  Visited a doctor or hospital | 167 (7.63) | 0.079 | 0.009 | 2.44
  Canceled or postponed work activities | 704 (32.15) | 0.324 | 0.015 | 2.26
  Canceled or postponed school activities | 448 (20.46) | 0.212 | 0.013 | 2.33
  Canceled or postponed dentist or other appointment | 819 (37.40) | 0.360 | 0.015 | 2.15
  Canceled outside housekeepers or caregivers | 221 (10.09) | 0.114 | 0.011 | 2.70
  Avoided some or all restaurants | 1574 (71.87) | 0.716 | 0.015 | 2.25
  Worked from home | 758 (34.61) | 0.322 | 0.015 | 2.11
  Studied at home | 325 (14.84) | 0.148 | 0.012 | 2.41
  Canceled or postponed pleasure, social, or recreational activities | 1554 (70.96) | 0.692 | 0.015 | 2.37
  Stockpiled food or water | 765 (34.93) | 0.325 | 0.015 | 2.14
  Avoided public or crowded places | 1762 (80.46) | 0.805 | 0.013 | 2.24
  Prayed | 1212 (55.34) | 0.560 | 0.016 | 2.24
  Avoided contact with high-risk people | 1384 (63.20) | 0.621 | 0.016 | 2.30
  Washed or sanitized hands | 2037 (93.01) | 0.918 | 0.010 | 3.15
  Kept a 6-foot distance from those outside my household/your household | 1913 (87.35) | 0.855 | 0.012 | 2.67
  Stayed home because I felt unwell/you felt unwell | 252 (11.51) | 0.106 | 0.010 | 2.25
  Wiped packages entering my home/your home | 998 (45.57) | 0.453 | 0.016 | 2.25

[After COVID-19 began spreading in the United States,] in the past month, how often did you communicate with friends and family by phone, text, email, app, or the internet?
  Basically every day | 1434 (65.48) | 0.648 | 0.015 | 2.28
  A few times a week | 544 (25.30) | 0.244 | 0.014 | 2.17
  A few times a month | 133 (6.07) | 0.067 | 0.009 | 2.92
  Once a month | 45 (2.05) | 0.027 | 0.005 | 2.45
  Not at all | 19 (0.87) | 0.006 | 0.002 | 1.31

During a typical month prior to March 1, 2020, when COVID-19 began spreading in the United States, how often did you communicate with friends and family by phone, text, email, app, or the internet?
  Basically every day | 1177 (53.74) | 0.541 | 0.016 | 2.26
  A few times a week | 740 (33.79) | 0.324 | 0.015 | 2.25
  A few times a month | 203 (9.27) | 0.094 | 0.009 | 2.05
  Once a month | 43 (1.96) | 0.026 | 0.006 | 3.50
  Not at all | 13 (0.59) | 0.007 | 0.003 | 3.11

Willingness to get digitally tracked and tested for COVID-19 (If these options were available to you, how likely would you be to participate in them?): installing an app on your phone that asks you questions about your own symptoms and provides recommendations about COVID-19
  Extremely likely | 290 (13.24) | 0.134 | 0.011 | 2.44
  Very likely | 308 (14.06) | 0.140 | 0.011 | 2.32
  Moderately likely | 437 (19.95) | 0.208 | 0.014 | 2.52
  Not too likely | 441 (20.14) | 0.196 | 0.012 | 2.14
  Not likely at all | 678 (30.96) | 0.305 | 0.014 | 2.08
  Already done this | 25 (1.14) | 0.013 | 0.004 | 2.57

Willingness to get digitally tracked and tested for COVID-19 (If these options were available to you, how likely would you be to participate in them?): installing an app on your phone that tracks your location and sends push notifications if you might have been exposed to COVID-19
  Extremely likely | 283 (12.92) | 0.124 | 0.011 | 2.21
  Very likely | 307 (14.02) | 0.149 | 0.012 | 2.47
  Moderately likely | 472 (21.55) | 0.217 | 0.013 | 2.28
  Not too likely | 386 (17.63) | 0.173 | 0.012 | 2.34
  Not likely at all | 718 (32.79) | 0.325 | 0.015 | 2.15
  Already done this | 15 (0.68) | 0.007 | 0.003 | 2.98

Willingness to get digitally tracked and tested for COVID-19 (If these options were available to you, how likely would you be to participate in them?): using a website to log your symptoms and location and get recommendations about COVID-19
  Extremely likely | 229 (10.46) | 0.097 | 0.010 | 2.37
  Very likely | 315 (14.38) | 0.152 | 0.012 | 2.37
  Moderately likely | 537 (24.52) | 0.232 | 0.013 | 2.02
  Not too likely | 432 (19.73) | 0.207 | 0.014 | 2.48
  Not likely at all | 637 (29.09) | 0.292 | 0.014 | 2.20
  Already done this | 27 (1.23) | 0.012 | 0.004 | 2.58

Willingness to get digitally tracked and tested for COVID-19 (If these options were available to you, how likely would you be to participate in them?): testing you for COVID-19 infection using a Q-tip to swab your cheek or nose
  Extremely likely | 552 (25.21) | 0.227 | 0.013 | 2.08
  Very likely | 493 (22.51) | 0.248 | 0.015 | 2.48
  Moderately likely | 538 (24.57) | 0.231 | 0.013 | 2.06
  Not too likely | 256 (11.69) | 0.136 | 0.012 | 2.63
  Not likely at all | 320 (14.61) | 0.143 | 0.011 | 2.06
  Already done this | 21 (0.96) | 0.012 | 0.004 | 2.66

Willingness to get digitally tracked and tested for COVID-19 (If these options were available to you, how likely would you be to participate in them?): testing you for immunity or resistance to COVID-19 by drawing a small amount of blood
  Extremely likely | 635 (29.00) | 0.257 | 0.014 | 2.08
  Very likely | 490 (22.37) | 0.246 | 0.014 | 2.36
  Moderately likely | 478 (21.83) | 0.208 | 0.013 | 2.19
  Not too likely | 252 (11.51) | 0.132 | 0.012 | 2.55
  Not likely at all | 314 (14.34) | 0.147 | 0.011 | 2.21
  Already done this | 8 (0.37) | 0.004 | 0.003 | 3.14

aPercentages may not sum up to 100 due to rounding. Subcategories may not sum up to 2190 due to missing data. As the focus of this study was on willingness, or likelihood, response option 88 was not included in psychometric analyses. Only 8 (0.37%) participants responded with option 88.

Correlates of Willingness to Participate in Pandemic-Related Screening and Tracking

Table 2 details the preventive behaviors taken in response to COVID-19, for which participants responded to a checklist of items (eg, “worn a face mask,” “worked from home”; “yes” coded 1 and “no” coded 0) adapted from the Understanding America Survey [28]. Participants also reported their frequency of communications with friends and family by phone, text, email, app, or the internet both (1) in the past month, after COVID-19 began spreading and the public health response began to escalate in the United States in March 2020, and (2) during a typical month before March 1, 2020. Response options ranged from “1. Basically every day” to “5. Not at all,” and the items were reverse-coded so that higher scores reflected a greater frequency than lower scores. These items were drawn from the Civic Engagement Supplement of the Current Population Survey [29].

Sociodemographic Characteristics

The following sociodemographic characteristics were assessed and used to test measurement invariance. Participants reported their current age, which the CIS categorized (ie, 18-29, 30-44, 45-59, and ≥60 years) to preserve anonymity, gender (“female” coded 1, “male” coded 0), self-identified race/ethnicity (eg, Black/African American; Hispanic or Latino; White; multiple other races and ethnicities, such as Asian Indian and Native Hawaiian), and education level (ie, no high school diploma or equivalent, high school diploma or equivalent, some college, bachelor’s degree or greater). Participants also reported the geographical region of the United States in which they lived (ie, the Northeast, the Midwest, the South, the West) and the population density of their lived community (ie, rural, suburban, urban).

Data Analysis Plan

The psychometric properties of 5 items intended to measure the willingness to participate in pandemic-related screening and tracking, including the willingness to use pandemic-related mHealth tools, were evaluated. Descriptive statistics and Cronbach α were computed using Stata version 16 [30], and all other psychometric analyses were performed using Mplus version 8 [31]. Model parameters were estimated for these ordinal-scaled items using weighted least squares estimation and Delta parameterization [31]. Model fit was established using at least 2 of 3 criteria: root mean square error of approximation (RMSEA)≤0.06, comparative fit index (CFI)≥0.95, and standardized root mean square residual (SRMR)≤0.08 [32,33].
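To make these estimation settings concrete, the fragment below sketches the relevant VARIABLE and ANALYSIS commands of an Mplus input file. It is illustrative only: the variable names are hypothetical, and WLSMV (the mean- and variance-adjusted weighted least squares estimator) is shown as one common weighted least squares option for ordinal indicators; the original analyses may have used a different weighted least squares variant.

VARIABLE:
  NAMES = app_sympt app_track website swab blood weight pstratum;
  USEVARIABLES = app_sympt app_track website swab blood;
  CATEGORICAL = app_sympt app_track website swab blood;  ! treat items as ordinal
ANALYSIS:
  ESTIMATOR = WLSMV;            ! weighted least squares estimation for ordinal items
  PARAMETERIZATION = DELTA;     ! Delta parameterization, as reported in the text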

Using stage 1 as an exploratory sample, the number of underlying factors assessed by the measure was identified using exploratory factor analysis (EFA). A 1-factor solution was compared against a 2-factor solution, and if the 2-factor solution seemed appropriate, it was compared with a 3-factor solution, and so on. The correct number of factors was determined by evaluating the model fit statistics, eigenvalues (>1), scree plots, and plausibility of emergent factors. Next, using stage 2 as a validation sample, the number of factors was confirmed using confirmatory factor analysis (CFA); such measurement models, or latent variables, constructed through CFA account for measurement error. In addition, reliability was assessed using Cronbach α, and convergent and discriminant validity were evaluated based on associations of the factor(s) with potential correlates of the willingness to participate in pandemic-related screening and tracking. Finally, using stage 3 as a multiple-group, or invariance, sample, the extent to which the measure was invariant was tested across age groups (ie, 18-29, 30-44, 45-59, ≥60 years), gender (ie, male, female), race or ethnicity (ie, White, Black, Hispanic, other), education level (ie, high school diploma or equivalent or less, some college, bachelor’s degree or greater), geographical region of the United States, and population density of one’s lived community. For race or ethnicity, the “other” category included 48 (2.19%) non-Hispanic Asians (estimated percentage=3.1%) and 109 (4.98%) individuals of other races or ethnicities (estimated percentage=5.5%). The first 2 levels of education, “no high school diploma or equivalent” and “high school diploma or equivalent,” were combined due to the relatively small number of participants (n=98, 4.47%) reporting no high school diploma.
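As an illustration of the first two steps, the fragments below show how the EFA and the follow-up CFA could be specified in Mplus; again, the item and factor names are hypothetical placeholders rather than the actual CIS variable names.

! Stage 1 (exploratory sample): EFA extracting 1- to 3-factor solutions
ANALYSIS:
  TYPE = EFA 1 3;
  ESTIMATOR = WLSMV;

! Stage 2 (validation sample): CFA of a retained 3-item factor
MODEL:
  mhealth BY app_sympt app_track website;  ! willingness to use pandemic-related mHealth tools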

Three levels of measurement invariance were tested: configural, metric, and scalar invariance. A detailed description of conducting measurement invariance analyses with ordinal items is beyond the scope of this paper, although the procedure is briefly described here. First, configural invariance was tested, which is the least strict form of invariance. Specifically, whether each group had the same basic configuration (eg, each group has the same indicators loading onto the same factors in the same direction, positive or negative) was noted. For configural invariance, factor loadings are free to vary across groups, thresholds (ie, the ordinal variable equivalents of intercepts for continuous variables) are free to vary across groups, scale factors are fixed to 1 in all groups, factor means are fixed to 0 for all groups, and factor variances are free to vary across groups [31]. Second, metric invariance was tested, which is the next level of invariance. For metric invariance, factor loadings are constrained to be equal across groups, some thresholds are constrained to be equal across groups, scale factors are fixed to 1 in one group and free to vary in the other groups, and factor means are fixed to 0 in 1 group and free to vary in the other groups [31]. Third, scalar invariance was determined, which indicates strong invariance. For scalar invariance, thresholds are then constrained to be equal across groups. Scalar invariance is the minimum for testing group differences in underlying factor means [34,35]. To compare successive invariance models, a difference in the CFI (ΔCFI) of less than 0.010 was used to confirm the next level of invariance [34]. Essentially, a lack of worsened model fit with increased constraints indicates measurement invariance.
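For readers who want to see what this sequence looks like in practice, the fragment below is a minimal sketch of a multiple-group invariance run using the built-in Mplus convenience option, with gender as the example grouping variable. The variable names and group codes are hypothetical, and the same pattern would apply to the other grouping variables.

VARIABLE:
  GROUPING = gender (0 = male 1 = female);   ! hypothetical coding
ANALYSIS:
  ESTIMATOR = WLSMV;
  MODEL = CONFIGURAL METRIC SCALAR;           ! fits the 3 invariance models in one run
MODEL:
  mhealth BY app_sympt app_track website;
  ! Fit statistics are printed for the configural, metric, and scalar models,
  ! which can then be compared using the change in CFI as described above.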

Given scalar invariance, whether participants differed in the mean levels of the underlying factor by age (reference: age≥60 years), gender (reference: male), race/ethnicity (reference: White), education level (reference: high school diploma or equivalent or less), geographical region of the United States (reference: the Northeast), and population density of one’s lived community (reference: rural) was assessed. To test mean differences, the willingness for screening and tracking, or latent variable, was standardized such that its mean was 0 and SD was 1. The factor mean remained 0 for the reference group and was freely estimated for the other groups. As such, the resulting mean for the nonreference groups reflected the difference in the mean from the reference group in SD units.
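A minimal sketch of how such a comparison could be specified in Mplus is shown below, using population density as the example grouping variable. The group labels and variable names are hypothetical, and the scalar-invariant loading and threshold constraints are assumed to carry over from the invariance models above.

VARIABLE:
  GROUPING = density (1 = rural 2 = suburban 3 = urban);   ! hypothetical coding
MODEL:
  mhealth BY app_sympt app_track website;
MODEL rural:                 ! reference group
  mhealth@1;                 ! factor variance fixed to 1 (SD metric)
  [mhealth@0];               ! factor mean fixed to 0
MODEL suburban:
  [mhealth*];                ! freely estimated mean = difference from rural in SD units
MODEL urban:
  [mhealth*];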

To cross-validate the factor-analytic findings [36], the factor and invariance analyses (ie, EFA, CFA, measurement invariance testing) and the factor mean difference tests were repeated in the other stages. For example, the EFA conducted with the exploratory, stage 1 sample was repeated with the stage 2 and 3 samples.

Given the complex nature of the survey data, the analyses adjusted for the sampling weights, which were the inverse of the probability of selection into the sample. Stratification was also accounted for using pseudostrata based on census tracts; pseudostrata were used to preserve confidentiality. Per the data producer, NORC, cluster variables were not included in the publicly available data sets because of negligible cluster effects (ie, SEs were unaffected); excluding these variables further preserved confidentiality. Missing data (up to 13/2190, 0.6%, missing in stage 1; up to 139/2238, 6.2%, missing in stage 2; and up to 45/2047, 2.2%, missing in stage 3 across primary analyses) were handled using listwise deletion, which is typically robust to violations of random missingness and yields appropriate SEs despite a loss of statistical power [37].
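These design adjustments map onto standard Mplus options; the fragment below is an illustrative sketch with hypothetical variable names for the CIS sampling weight and pseudostratum.

DATA:
  LISTWISE = ON;              ! listwise deletion of cases with missing analysis variables
VARIABLE:
  WEIGHT = weight;            ! CIS sampling weight (hypothetical variable name)
  STRATIFICATION = pstratum;  ! pseudostratum provided in the public-use file
ANALYSIS:
  TYPE = COMPLEX;             ! design-based (linearized) standard errors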


Results

Descriptive Statistics and Factor Structure

Table 1 displays the sample characteristics at stage 1. Table 3 shows the results of the EFA. The highest eigenvalue was 3.721, followed by 0.704, 0.236, 0.179, and 0.161. The eigenvalues, along with a review of a scree plot, supported a 1-factor solution. However, as shown in Table 3, the RMSEA, CFI, and SRMR indicated that the 1-factor solution fit the data poorly. As such, the well-fitting 2-factor solution was selected even though the second eigenvalue fell short of 1.0 by just under 30%. As displayed in Table 4, 3 items loaded onto the first factor and 2 loaded onto the second factor. Given that 2-item measures are not reliable [38-40], only the first factor was retained. Based on the themes of the factor items, this factor was named “willingness to use pandemic-related mHealth tools.” The EFA was repeated using the stage 2 and 3 samples, and the results were essentially the same (see Multimedia Appendix 1, Table S1).

Table 3. EFAa and CFAb of items in a measure of the willingness for pandemic-related screening and tracking.

Fit index | EFA (N=2099), 1-factor solution | EFA (N=2099), 2-factor solution | CFA (N=2179), factor 1 from 2-factor solutionc
χ2 (df), P value | 596.24 (5), <.001 | 4.08 (1), .04 | 0 (0), <.001
RMSEAd (90% CI) | 0.237 (0.221-0.254) | 0.038 (0.005-0.080) | 0 (0-0)
CFIe | 0.953 | 1.000 | 1.000
SRMRf | 0.122 | 0.005 | 0

aEFA: exploratory factor analysis.

bCFA: confirmatory factor analysis.

cFactor 1 was fully saturated, as it was a latent variable with 3 indicators. Because it was fully saturated, it had perfect model fit. Factor 2 only had 2 items. As such, factor 2 was underidentified and could not be fit to the data as a separate measurement model.

dRMSEA: root mean square error of approximation.

eCFI: comparative fit index.

fSRMR: standardized root mean square residual.

Table 4. Factor loadings.

Itema | EFAb 1-factor solution: factor 1 | EFA 2-factor solution: factor 1 | EFA 2-factor solution: factor 2 | CFAc factor 1 from 2-factor solutiond (willingness for digital tracking)
Installing an app on your phone that asks you questions about your own symptoms and provides recommendations about COVID-19 | 0.893 (0.009)e | 1.025 (0.032)e | –0.131 (0.041) | 1.000 (0)e
Installing an app on your phone that tracks your location and sends push notifications if you might have been exposed to COVID-19 | 0.859 (0.010)e | 0.869 (0.011)e | 0.011 (0.005) | 0.956 (0.011)e
Using a website to log your symptoms and location and get recommendations about COVID-19 | 0.852 (0.010)e | 0.827 (0.029)e | 0.065 (0.035) | 0.962 (0.009)e
Testing you for COVID-19 infection using a Q-tip to swab your cheek or nose | 0.861 (0.012)e | 0.098 (0.108) | 0.831 (0.110)e | N/Af
Testing you for immunity or resistance to COVID-19 by drawing a small amount of blood | 0.849 (0.012)e | –0.007 (0.005) | 0.919 (0.032)e | N/A

aFor each item, the question was, “There are some options for testing and tracking people who may have COVID-19 in order to help slow the spread of this virus. If these options were available to you, how likely would you be to participate in them?” Response options were as follows: “1. Extremely likely,” “2. Very likely,” “3. Moderately likely,” “4. Not too likely,” “5. Not likely at all,” and “88. Already done this.” As the focus of this study was on willingness, or likelihood, response option 88 was not included in psychometric analyses. Only 8 (0.37%) participants responded with option 88. Possible scores for each item ranged from 1 to 5 and were reverse-coded so that higher scores indicated greater willingness. Unstandardized factor loadings are presented.

bEFA: exploratory factor analysis.

cCFA: confirmatory factor analysis.

dFactor 1 was fully saturated, as it was a latent variable with 3 indicators. Because it was fully saturated, it had perfect model fit. Factor 2 only had 2 items. As such, factor 2 was underidentified and could not be fit to the data as a separate measurement model.

eThe item loads strongly onto the underlying factor.

fN/A: not applicable.

One-Dimensionality and Reliability

Table 3 shows the results of the CFA using the stage 2, or validation, sample. As indicated by the factor loadings, a 1-factor structure characterized the 3 items. The RMSEA, CFI, and SRMR showed perfect fit because the model was just-identified, or fully saturated (ie, df=0), with only 3 indicators. In conjunction with the EFA, the CFA factor loadings suggested that the underlying construct was well characterized by the 3 items (Table 4). The CFA was repeated with the stage 1 and 3 samples, and the results were essentially the same (see Multimedia Appendix 1, Table S2). Thus, a unidimensional structure was cross-validated across samples in the CIS.

Additionally, the measure showed good reliability in the validation sample (Cronbach α=.90). The measure also showed equivalent reliability in the stage 1 sample (Cronbach α=.90) and the stage 3 sample (Cronbach α=.89).

Convergent and Discriminant Validity

The measure showed convergent and discriminant validity in its correlations and noncorrelations based on the validation sample (see Table 5). Specifically, the underlying factor of the willingness to use pandemic-related mHealth tools was associated with most variables reflecting protective behaviors taken in response to COVID-19 (eg, “worn a face mask,” “avoided public or crowded places”). Although the willingness to use pandemic-related mHealth tools was not associated with digital communication (ie, communication via phone, text, email, app, or the internet) with friends and family prior to the spread of COVID-19 in the United States in March 2020, it was positively associated with digital communication with friends and family after COVID-19 began spreading. Additionally, the factor was positively associated with the 2 items that were dropped from the measure: the willingness to be tested for COVID-19 via a swab of the nose or cheek and the willingness to be tested for immunity or resistance to COVID-19 via a small blood draw.

Table 5. Construct validity of a measure of the willingness to use mHealth tools for pandemic-related screening and tracking based on correlations with preventive behaviors, digital communication with friends and family before and after the spread of COVID-19 in the United States, and willingness to provide biological specimens for COVID-19–related testing (N=2179).

Behavioral responses to COVID-19 | ra | SE | P value | RMSEAb | CFIc | SRMRd
Canceled a doctor appointment | 0.180 | 0.039 | <.001 | 0.009 | 1.000 | 0.006
Worn a face mask | 0.313 | 0.041 | <.001 | <0.001 | 1.000 | 0.002
Visited a doctor or hospital | 0.027 | 0.050 | .60 | 0.022 | 1.000 | 0.013
Canceled or postponed work activities | 0.207 | 0.039 | <.001 | <0.001 | 1.000 | 0.005
Canceled or postponed school activities | 0.111 | 0.044 | .01 | 0.010 | 1.000 | 0.007
Canceled or postponed dentist or other appointments | 0.227 | 0.037 | <.001 | 0.034 | 1.000 | 0.010
Canceled outside housekeepers or caregivers | 0.182 | 0.050 | <.001 | 0.031 | 1.000 | 0.014
Avoided some or all restaurants | 0.330 | 0.038 | <.001 | <0.001 | 1.000 | 0.003
Worked from home | 0.132 | 0.039 | .001 | 0.011 | 1.000 | 0.007
Studied at home | 0.065 | 0.048 | .18 | 0.011 | 1.000 | 0.009
Canceled or postponed pleasure, social, or recreational activities | 0.258 | 0.040 | <.001 | 0.013 | 1.000 | 0.006
Stockpiled food or water | 0.242 | 0.039 | <.001 | 0.017 | 1.000 | 0.007
Avoided public or crowded places | 0.329 | 0.043 | <.001 | <0.001 | 1.000 | 0.005
Prayed | 0.070 | 0.039 | .07 | <0.001 | 1.000 | 0.005
Avoided contact with high-risk people | 0.248 | 0.038 | <.001 | <0.001 | 1.000 | 0.004
Washed or sanitized hands | 0.240 | 0.54 | <.001 | <0.001 | 1.000 | 0.002
Kept a 6-foot distance from those outside my household/your household | 0.282 | 0.050 | <.001 | <0.001 | 1.000 | 0.001
Stayed home because I felt unwell/you felt unwell | 0.207 | 0.050 | <.001 | <0.001 | 1.000 | 0.011
Wiped packages entering my home/your home | 0.297 | 0.036 | <.001 | 0.016 | 1.000 | 0.007
After COVID-19 began spreading in the United States in March 2020: frequency of communications with friends and family by phone, text, email, app, or the internet in the past month | 0.110 | 0.036 | .002 | <0.001 | 1.000 | 0.003
Before COVID-19 began to spread in the United States in March 2020: frequency of communications with friends and family by phone, text, email, app, or the internet in a typical month | 0.040 | 0.035 | .26 | <0.001 | 1.000 | 0.001
Willingness to get tested for COVID-19 infection using a Q-tip to swab your cheek or nose | 0.709 | 0.017 | <.001 | 0.044 | 0.999 | 0.005
Willingness to get tested for immunity or resistance to COVID-19 by drawing a small amount of blood | 0.684 | 0.019 | <.001 | 0.022 | 1.000 | 0.004

aStandardized covariance.

bRMSEA: root mean square error of approximation.

cCFI: comparative fit index.

dSRMR: standardized root mean square residual.

Measurement Invariance

Tests of measurement invariance by age, gender, race/ethnicity, education level, geographical region of the United States, and population density of one’s lived community were conducted. The findings are detailed next.

Measurement Invariance by Age

For age, the configural model had perfect model fit because it was fully saturated (df=0). All factor loadings were significant and in the expected direction for each group. Thus, there was configural invariance by age group. The RMSEA, CFI, and SRMR of the more constrained metric model, which had the same configuration (eg, same pattern of size and direction of factor loadings) as the configural model without being fully saturated, also showed good model fit (see Table 6). The measure showed metric invariance (ΔCFI<0.001) and scalar invariance (ΔCFI=0.001).

Table 6. Measurement invariance by age group, gender, race/ethnicity, education level, geographical region of the United States, and population density of one’s community of residence for a measure of the willingness to use mHealth tools for pandemic-related screening and tracking.

Fit index | Configural | Metric | Scalar

Age groupa (N=2036)
  χ2 (df), P value | 0.00 (0), <.001 | 1.52 (6), .96 | 36.59 (30), .19
  RMSEAb (90% CI) | 0 (0-0) | 0 (0-0) | 0.021 (0-0.041)
  CFIc | 1.000 | 1.000 | 1.000
  SRMRd | 0 | 0.002 | 0.019

Gendere (N=2036)
  χ2 (df), P value | 0 (0), <.001 | 8.133 (2), .02 | 17.79 (10), .06
  RMSEA (90% CI) | 0 (0-0) | 0.055 (0.020-0.096) | 0.028 (0-0.048)
  CFI | 1.000 | 1.000 | 0.999
  SRMR | 0 | 0.006 | 0.009

Race/ethnicityf (N=2001)
  χ2 (df), P value | 0 (0), <.001 | 4.42 (6), .62 | 28.14 (30), .56
  RMSEA (90% CI) | 0 (0-0) | 0 (0-0.049) | 0 (0-0.031)
  CFI | 1.000 | 1.000 | 1.000
  SRMR | 0 | 0.004 | 0.013

Education levelg (N=2036)
  χ2 (df), P value | 0 (0), <.001 | 5.49 (4), .24 | 24.52 (20), .22
  RMSEA (90% CI) | 0 (0-0) | 0.023 (0-0.066) | 0.018 (0-0.040)
  CFI | 1.000 | 1.000 | 1.000
  SRMR | 0 | 0.004 | 0.013

Geographical region of the United Statesh (N=2036)
  χ2 (df), P value | 0 (0), <.001 | 6.80 (6), .34 | 36.66 (30), .19
  RMSEA (90% CI) | 0 (0-0) | 0.016 (0-0.062) | 0.021 (0-0.041)
  CFI | 1.000 | 1.000 | 0.999
  SRMR | 0 | 0.005 | 0.016

Population density of one’s lived communityi (N=2036)
  χ2 (df), P value | 0 (0), <.001 | 1.986 (4), .74 | 16.63 (20), .68
  RMSEA (90% CI) | 0 (0-0) | 0 (0-0.041) | 0 (0-0.027)
  CFI | 1.000 | 1.000 | 1.000
  SRMR | 0 | 0.002 | 0.011

aAge groups were 18-29, 30-44, 45-59, and ≥60 years.

bRMSEA: root mean square error of approximation.

cCFI: comparative fit index.

dSRMR: standardized root mean square residual.

eGender categories are male and female.

fRace categories were Black/African American, Hispanic/Latino, White, and other race/ethnicity.

gEducation categories were high school diploma or equivalent or less, some college, and bachelor’s degree or greater.

hGeographical regions of the United States were the Northeast, the Midwest, the South, and the West.

iThe population density of one’s lived community was represented as rural, suburban, and urban.

Measurement Invariance by Gender

For gender, the fully saturated configural model showed perfect global fit statistics, but the metric model also fit the data adequately based on the RMSEA, CFI, and SRMR (see Table 6). All factor loadings were significant and in the expected direction for each group. Thus, there was configural invariance by gender. Next, the measure showed metric invariance (ΔCFI<0.001) and scalar invariance (ΔCFI=0.001).

Measurement Invariance by Race/Ethnicity

For the race/ethnicity categories, the fully saturated configural model again showed perfect fit. However, the metric model also fit the data well based on the RMSEA, CFI, and SRMR (see Table 6). All factor loadings were significant and in the expected direction for each group. Thus, there was configural invariance by race/ethnicity. Next, the measure showed metric invariance (ΔCFI<0.001) and scalar invariance (ΔCFI<0.001).

Measurement Invariance by Education Level

For the education categories, the fully saturated configural model again showed perfect fit. However, the metric model also fit the data well based on the RMSEA, CFI, and SRMR (see Table 6). All factor loadings were significant and in the expected direction for each group. Thus, there was configural invariance by education level. Next, the measure showed metric invariance (ΔCFI<0.001) and scalar invariance (ΔCFI<0.001).

Measurement Invariance by Geographical Region of the United States

For the geographical regions of the United States, the fully saturated configural model again showed perfect fit. However, the metric model also fit the data well based on the RMSEA, CFI, and SRMR (see Table 6). All factor loadings were significant and in the expected direction for each group. Thus, there was configural invariance by geographical region. Next, the measure showed metric invariance (ΔCFI<0.001) and scalar invariance (ΔCFI<0.001).

Measurement Invariance by Population Density of One’s Lived Community

For the population density of one’s lived community, the fully saturated configural model again showed perfect fit. However, the metric model also fit the data well based on the RMSEA, CFI, and SRMR (see Table 6). All factor loadings were significant and in the expected direction for each group. Thus, there was configural invariance by population density. Next, the measure showed metric invariance (ΔCFI<0.001) and scalar invariance (ΔCFI<0.001).

Measurement Invariance in Stage 1 and 2 Samples

The tests of measurement invariance were repeated across all the groupings in the stage 1 and 2 samples. The measure showed measurement invariance in the same way in the stage 1 sample (Multimedia Appendix 1, Table S3) and the stage 2 sample (Multimedia Appendix 1, Table S4) as it did in the stage 3 sample.

Group Differences in Factor Means

Factor means showed no statistically significant differences by age, gender, education level, or geographical region of the United States, but there were differences by racial/ethnic group and by population density of one’s lived community. Specifically, compared to older adults aged 60 years and more, there were no mean differences in the willingness to use mHealth tools for adults aged 18-29 (ΔM=0.19, SE=0.11, P=.10), 30-44 (ΔM=–0.03, SE=0.09, P=.97), or 45-59 (ΔM=0.06, SE=0.10, P=.56) years. In addition, men and women did not differ (ΔM=0.07, SE=0.06, P=.23). Additionally, compared to adults with a high school diploma, its equivalent, or less, there were no differences for adults with some college education (ΔM=–0.17, SE=0.09, P=.05) or adults with a bachelor’s degree or greater (ΔM=0.14, SE=0.08, P=.09). Adults who lived in the Midwest (ΔM=0.16, SE=0.12, P=.17), the South (ΔM=0.08, SE=0.11, P=.49), and the West (ΔM=0.04, SE=0.11, P=.76) did not differ from adults in the Northeast. However, compared to White Americans, all racial/ethnic minority groups, including Black (ΔM=0.40, SE=0.09, P<.001), Hispanic (ΔM=0.31, SE=0.09, P=.001), and other (ΔM=0.45, SE=0.12, P<.001) Americans, showed a greater willingness to use mHealth tools by 31%-45% of an SD unit. Finally, compared to adults who lived in rural areas, adults who lived in suburban (ΔM=–0.30, SE=0.13, P=.001) and urban (ΔM=–0.42, SE=0.12, P<.001) areas showed lower willingness to use mHealth tools.

These factor mean difference tests were repeated with the stage 1 and 2 samples. For the stage 1 sample, there were several differences compared to the stage 3 sample. Specifically, there were less consistent differences by racial/ethnic group and no differences by the population density of one’s lived communities. Black Americans did not differ from White Americans in the stage 1 sample (ΔM=0.30, SE=0.22, P=.18). Additionally, adults did not differ based on the population density of their lived communities in the stage 1 sample; there were no significant differences between adults who lived in rural areas and adults who lived in suburban (ΔM=0.15, SE=0.21, P=.48) or urban (ΔM=–0.03, SE=0.17, P=.88) areas.

For the stage 2 sample, there were also several differences compared to the stage 3 multiple-group sample. Specifically, in contrast to the stage 3 sample, there were no differences by racial/ethnic group or the population density of one’s lived community; however, there was a difference by education. White Americans did not differ from Black (ΔM=0.15, SE=0.16, P=.36), Hispanic (ΔM=0.20, SE=0.13, P=.11), or other (ΔM=0.26, SE=0.18, P=.15) Americans in the stage 2 sample. Additionally, adults who lived in suburban (ΔM=–0.14, SE=0.13, P=.40) and urban (ΔM=–0.16, SE=0.13, P=.22) areas did not differ from adults who lived in rural areas in their willingness to use mHealth tools. However, adults with at least a college degree showed a greater willingness to use mHealth tools than adults with a high school diploma or less (ΔM=0.22, SE=0.11, P=.04).


Discussion

Principal Findings

Studies that assess the willingness to use mHealth tools often rely on a single item or a collection of ad hoc questions. Validated scales of the willingness to use mHealth tools are rare, possibly nonexistent. Such a measure could be used in population-based surveys, public health surveillance, selection of appropriate samples for mHealth-based intervention development, or screening of patient populations in clinical settings, particularly in times of major pandemics, such as COVID-19. This study psychometrically evaluated such a measure, originally deployed as part of the CIS national probability household survey. The measure initially included 5 items, 3 related to the willingness to use mHealth tools for pandemic-related screening and tracking and 2 about the willingness to provide salivary, mucosal, or blood samples for pandemic-related testing. Ultimately, a 3-item, unidimensional measure of the willingness to use pandemic-related mHealth tools emerged from these 5 items. Although the variable reflected by the 3-item measure was highly correlated with participants’ reported willingness to provide biological specimens for testing, the 3 items measure a unique construct distinct from the items about providing biological specimens. Notably, the measure showed invariance across groups by age, gender, race/ethnicity, education level, geographical region of the United States, and population density of one’s lived community, indicating that it measured the same construct in the same way across demographic and cultural groups and geographical representations. The factor-analytic psychometric findings were duplicated across all 3 samples, which bolstered the conclusions about the psychometric fitness of the 3-item measure. Thus, the measure can be administered to diverse groups and be used to test differences between groups in their willingness to use mHealth tools.

In the 3-item measure of the willingness to use mHealth tools, 2 items asked about participants’ willingness to download a mobile app and 1 item asked about participants’ willingness to use a website to track symptoms and possible exposures and get recommendations. A prior study evaluating a web browser–based app intended to be compatible across different smartphone operating systems (eg, Android vs iOS) found that many participants would have preferred a native app that would presumably require a download [41,42]. This study suggests that regardless of user preference for a browser-based versus a native app, the items collectively assessed an underlying construct of the willingness to use mHealth tools, broadly.

Higher scores on the measure of the willingness to use mHealth tools were associated in expected ways with other variables, including the variables reflected by the items of the willingness to provide biological specimens for pandemic-related screening and tracking. This empirical link is consistent with the available literature. For example, a US study found that most internet-using participants reported a willingness to use at-home collection methods to provide biological specimens for COVID-19 research [3]. Another US study showed that participants who self-collected biological specimens via throat swabs and dried blood spots during a telehealth session found the procedure acceptable [5]. Telehealth sessions often occur via a mobile app on one’s phone or another portable device.

Additionally, higher scores on the measure of the willingness to use mHealth tools were positively correlated with participants having engaged in COVID-19–preventive behaviors, such as wearing masks or maintaining a 6-foot distance from people outside of one’s household. Thus, the measure tracks with other items that show a willingness to participate in the public health response to stem the pandemic, while still retaining its unique quality as a measure that assesses the willingness to use pandemic-related technological tools.

Participants who scored higher in the willingness to use mHealth tools for pandemic-related screening and tracking communicated more with friends and family via phone, text, email, app, or the internet after COVID-19 began to spread in the United States than participants who scored lower in the willingness to use mHealth tools. However, the willingness to use mHealth tools for pandemic-related screening and tracking was not associated with communication with friends and family via phone, text, email, app, or the internet before the spread of COVID-19 in the United States. These associations further support an underlying construct of the measure that specifically assesses an adaptation to using digital tools in response to a pandemic.

Participants did not differ in the willingness to use mHealth tools for pandemic-related screening and tracking by age or gender, which is consistent with a previous study that used a 1-item measure of the willingness to participate in contact tracing via a mobile app among users of the National Health Service in the United Kingdom [13]. There were also no differences by education level in this study. In addition, in these analyses, White participants showed less willingness to use mHealth tools than racial/ethnic minority participants. Previous research has shown no consistent findings indicating that White participants are less willing to engage in COVID-19–preventive behaviors, such as using mHealth tools, than participants of other racial/ethnic groups. However, some variability in demographics may be accounted for by other factors, such as political ideology. For example, in the United States, conservative political ideology or partisanship is associated with a low likelihood of COVID-19–preventive behaviors [43], including mask usage [44] and vaccine trust [45]. Although there is typically quite a bit of heterogeneity within racial and ethnic groups and political parties with respect to political ideology, White Americans lean more toward affiliating with the Republican Party than the Democratic Party, a large majority of African Americans are more likely to affiliate with the Democratic Party than the Republican Party, and Hispanic and Asian Americans lie in between [46,47].

In terms of geographical differences, although there were no detectable differences based on the geographical region of the United States in which participants were located, participants differed based on whether they lived in a rural, suburban, or urban area. Specifically, adults who lived in rural areas showed a greater willingness to use mHealth tools for public health screening and tracking than adults who lived in suburban or urban areas. These findings are consistent with the prior literature on mHealth for rural populations. For example, a recent study of the association between access to mental health counseling and interest in rural telehealth found that although rural residents have less access to mental health counseling and the internet than urban residents, rural residents have more interest in telehealth [48]. Additionally, the less access participants had to mental health counseling, the greater their interest in telehealth [48]. Thus, a lack of access to in-person COVID-19–related testing and services or public health infrastructure to disseminate information in rural areas may coincide with the greater interest in the use of pandemic-related mHealth tools observed in this study. However, there are disparities in telehealth usage, as people who live in communities with limited broadband coverage, such as many rural areas, are less likely to use telehealth [49]. As such, it could be useful to assess willingness separately from actual use.

When tests of mean differences were repeated in the 2 earlier CIS samples, there tended to be fewer group differences in the earlier stage 1 and 2 samples than in the later stage 3 sample. For example, in stage 1 of data collection in late April 2020, there were no significant differences by race or ethnicity in the willingness to use mHealth tools. However, by stage 3 of data collection in late May and early June 2020, racial and ethnic minority adults showed a greater willingness to use mHealth tools compared to White adults. Additionally, rural residents did not show a significant difference in the willingness to use pandemic-related mHealth tools compared to suburban or urban residents until the last instance of CIS data collection in late May and early June 2020, several months into the public health response to the COVID-19 pandemic. The differences in interest may have been due to changes in public perceptions of the need for COVID-19–related information and services among adults of color and rural adults as more information emerged about COVID-19.

Even though the CIS was cross-sectional in nature with no longitudinal tracking of specific households, the differences in the statistical significance of group mean differences across stages of data collection likely reflect changes in the public’s willingness to use mHealth tools over time. Specifically, the CIS was intended to capture cross sections of attitudes and behaviors across the United States in ways that are highly representative of various sociodemographic groups at the time of data collection. Thus, each successive stage of data collection can be interpreted in terms of changes in American attitudes over time. Given how quickly information about COVID-19 and appropriate preventive responses evolved in the early months of the pandemic, extant studies may help to contextualize this study’s findings. For example, in a US study, adults with lower health literacy had greater confidence in the federal government’s response [50]. Thus, as information about the COVID-19 pandemic rapidly evolved, groups with greater representation of adults with low health literacy might have shown a greater willingness to use mHealth tools. Additionally, a study of Australian adults found that those who viewed themselves as being at intermediate or high risk due to COVID-19, those who were concerned about having to self-isolate if diagnosed with COVID-19, and those who perceived COVID-19 as a severe condition were more likely to engage in preventive behaviors than adults who did not view themselves as being at risk, were not concerned about self-isolation, or did not perceive COVID-19 as severe [51]. Such concerns could explain the variation in group differences in the willingness to use mHealth tools over time for people of color and rural residents in this study after a few months had elapsed early in the pandemic.

The 3-item CIS measure has broad applicability across different use cases. For example, the measure can be used to predict who would be willing to use mHealth tools before rolling out an mHealth intervention or to test whether those who score higher in the willingness to use mHealth tools rate a specific app as more usable or use it more extensively than people who score lower. Other examples include testing whether an intervention to increase the willingness or intention to use mHealth tools is effective or whether using a specific mobile app increases the general willingness to use mHealth tools. Another use case is to determine whether populations that are adversely affected by digital health inequities show different levels of the willingness to use digital health tools than those who are not affected by such inequities. For instance, patients with relatively low access to internet-enabled technology or broadband might be willing to use such tools despite low apparent uptake, which has been demonstrated in prior research [21].
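As an illustration of the first use case, the following is a minimal sketch, not the study’s analysis code, of scoring a 3-item willingness composite and testing whether higher scorers rate a hypothetical app as more usable. The item and column names, response values, and the use of a simple Pearson correlation are illustrative assumptions; the CIS data and this study’s latent-variable analyses are not reproduced here.

```python
# Minimal sketch: score a 3-item willingness composite and relate it to usability
# ratings from a hypothetical pilot app test. All variable names and data are
# illustrative assumptions, not CIS variables or study results.
import pandas as pd
from scipy.stats import pearsonr

# Illustrative pilot data: Likert-type responses to 3 willingness items plus a
# 0-100 usability rating collected after participants tried the app.
df = pd.DataFrame({
    "item1": [5, 4, 2, 3, 5, 1, 4, 2],
    "item2": [4, 4, 1, 3, 5, 2, 5, 2],
    "item3": [5, 3, 2, 2, 4, 1, 4, 3],
    "usability": [80, 75, 40, 55, 90, 35, 85, 50],
})

# Composite willingness score: mean of the 3 items (higher = more willing).
df["willingness"] = df[["item1", "item2", "item3"]].mean(axis=1)

# Simple bivariate association between willingness and usability ratings; a full
# study would model the latent factor and adjust for relevant covariates.
r, p = pearsonr(df["willingness"], df["usability"])
print(f"r = {r:.2f}, P = {p:.3f}")
```

In practice, researchers would replace the simple composite and bivariate correlation with the factor-analytic approach used in this study or with regression models that account for sociodemographic covariates.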

Limitations

Although this study has many strengths, there were several notable limitations. Specifically, the study was conducted in cross-sectional stages. Thus, no temporal or causal conclusions can be drawn. Additionally, the timing of data collection for the CIS within the first few months of the public health response to the pandemic may also be a limitation. Specifically, testing was not yet widespread, and although there was a large proliferation of contact-tracing apps within the first few months of the pandemic, the number of apps reached its zenith about 2 months after the final CIS data collection [8]. Thus, there may have been a relatively small number of widely available mHealth tools for symptom assessment and contact tracing at the time of CIS data collection, particularly in the early stages of data collection. Some of the differences in the willingness to use mHealth tools might have varied as testing and public health tools proliferated. Finally, no other psychometrically tested measures of the willingness to use mHealth tools for screening and tracking could be identified. Thus, there is limited research available against which to compare this study’s measure.

Conclusion

In conclusion, the study findings have research and applied implications. Broadly, more population-level studies are needed to examine the willingness to use mHealth tools in response to public health issues, including pandemics. The measure can facilitate these efforts. Additionally, researchers have argued that the use of mHealth tools should be combined with at-home specimen collection methods to confirm COVID-19 with laboratory analysis, as symptom-based screening alone may be insufficient to serve as a leading indicator of new COVID-19 cases or even determine who should be tested [52]. The CIS measure asked about the willingness to download a mobile app voluntarily. However, apps can also be downloaded automatically such that prospective users must opt out. Studies could adapt the CIS measure or test additional items based on whether participants would be willing to keep an automatically downloaded app. However, given that some may oppose digital health measures due to concerns about their rights and privacy [15,16,53-57], the willingness to use automatically downloaded, opt-out apps should be assessed separately and compared against the willingness to use apps that users must download themselves. This study focused on voluntary access to digital health tools, which is likely the most common scenario, and a prior study did not find marked differences in the willingness to use user-downloaded apps versus automatically downloaded apps [15].

In addition, studies have found that people prefer at-home specimen collection methods over going to a drive-through or clinic [3,4]. Thus, the measure can be used to assess people’s willingness to use mHealth tools as part of a broader screening and tracking approach that combines mHealth tools with at-home self-collection of biological specimens to control a pandemic or outbreak. The measure of the willingness to use pandemic-related mHealth tools can also be used in mHealth and pandemic-related research to screen participants for low, moderate, or high willingness to use mHealth tools and in studies developing interventions to enhance the use of mHealth tools. Additionally, in applied settings, clinicians and other professionals can use the measure as a brief screener to determine, for example, how much of their patient population would be open to using mHealth tools.
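As a sketch of the screening use case described above, the following shows how respondents’ composite scores might be binned into low, moderate, or high willingness groups. The score values and tertile-based cut points are illustrative assumptions; the study does not prescribe specific thresholds.

```python
# Minimal sketch: classify respondents into low/moderate/high willingness groups
# using sample tertiles of an illustrative 3-item composite score. The scores and
# cut points are assumptions for demonstration, not values from the CIS.
import pandas as pd

scores = pd.Series([1.7, 2.3, 4.7, 3.0, 2.7, 4.3, 1.3, 3.7], name="willingness")

# Split into 3 equal-sized groups (tertiles) labeled low, moderate, and high.
groups = pd.qcut(scores, q=3, labels=["low", "moderate", "high"])

print(pd.concat([scores, groups.rename("willingness_group")], axis=1))
```

In applied settings, fixed cut points anchored to the response scale, rather than sample tertiles, may be preferable so that classifications remain comparable across patient populations.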

Data Availability

The data for this paper were collected by the National Opinion Research Center (NORC) at the University of Chicago and made publicly available by the funder, the Data Foundation. The data can be accessed in Ref. [24].

Conflicts of Interest

None declared.

Multimedia Appendix 1

Supplementary Tables S1-S4.

DOCX File , 37 KB

  1. Taylor M, Raphael B, Barr M, Agho K, Stevens G, Jorm L. Public health measures during an anticipated influenza pandemic: factors influencing willingness to comply. Risk Manag Healthc Policy 2009 Jan;2:9-20 [FREE Full text] [CrossRef] [Medline]
  2. Alley SJ, Stanton R, Browne M, To QG, Khalesi S, Williams SL, et al. As the pandemic progresses, how does willingness to vaccinate against covid-19 evolve? Int J Environ Res Public Health 2021 Jan 19;18(2):2809 [FREE Full text] [CrossRef] [Medline]
  3. Hall EW, Luisi N, Zlotorzynska M, Wilde G, Sullivan P, Sanchez T, et al. Willingness to use home collection methods to provide specimens for SARS-CoV-2/covid-19 research: survey study. J Med Internet Res 2020 Sep 03;22(9):e19471 [FREE Full text] [CrossRef] [Medline]
  4. Siegler A, Hall E, Luisi N, Zlotorzynska M, Wilde G, Sanchez T, et al. Willingness to seek diagnostic testing for SARS-CoV-2 with home, drive-through, and clinic-based specimen collection locations. Open Forum Infect Dis 2020 Jul;7(7):ofaa269 [FREE Full text] [CrossRef] [Medline]
  5. Valentine-Graves M, Hall E, Guest JL, Adam E, Valencia R, Shinn K, et al. At-home self-collection of saliva, oropharyngeal swabs and dried blood spots for SARS-CoV-2 diagnosis and serology: post-collection acceptability of specimen collection process and patient confidence in specimens. PLoS One 2020 Aug 5;15(8):e0236775 [FREE Full text] [CrossRef] [Medline]
  6. Iboi E, Richardson A, Ruffin R, Ingram D, Clark J, Hawkins J, et al. Impact of public health education program on the novel coronavirus outbreak in the United States. Front Public Health 2021 Mar 15;9:630974 [FREE Full text] [CrossRef] [Medline]
  7. Daly M, Robinson E. Willingness to vaccinate against covid-19 in the U.S.: representative longitudinal evidence from April to October 2020. Am J Prev Med 2021;60(6):766-773. [CrossRef]
  8. Osmanlliu E, Rafie E, Bédard S, Paquette J, Gore G, Pomey M. Considerations for the design and implementation of covid-19 contact tracing apps: scoping review. JMIR Mhealth Uhealth 2021 Jun 09;9(6):e27102 [FREE Full text] [CrossRef] [Medline]
  9. Tomczyk S, Barth S, Schmidt S, Muehlan H. Utilizing health behavior change and technology acceptance models to predict the adoption of covid-19 contact tracing apps: cross-sectional survey study. J Med Internet Res 2021 May 19;23(5):e25447 [FREE Full text] [CrossRef] [Medline]
  10. Asadzadeh A, Kalankesh LR. A scope of mobile health solutions in COVID-19 pandemics. Inform Med Unlocked 2021;23:100558 [FREE Full text] [CrossRef] [Medline]
  11. Ming LC, Untong N, Aliudin NA, Osili N, Kifli N, Tan CS, et al. Mobile health apps on covid-19 launched in the early days of the pandemic: content analysis and review. JMIR Mhealth Uhealth 2020 Sep 16;8(9):e19796 [FREE Full text] [CrossRef] [Medline]
  12. John Leon Singh H, Couch D, Yap K. Mobile health apps that help with covid-19 management: scoping review. JMIR Nurs 2020 Aug 6;3(1):e20596 [FREE Full text] [CrossRef] [Medline]
  13. Bachtiger P, Adamson A, Quint JK, Peters NS. Belief of having had unconfirmed covid-19 infection reduces willingness to participate in app-based contact tracing. NPJ Digit Med 2020 Nov 06;3(1):146 [FREE Full text] [CrossRef] [Medline]
  14. Klaver NS, van de Klundert J, van den Broek RJGM, Askari M. Relationship between perceived risks of using mHealth applications and the intention to use them among older adults in the Netherlands: cross-sectional study. JMIR Mhealth Uhealth 2021 Aug 30;9(8):e26845 [FREE Full text] [CrossRef] [Medline]
  15. Altmann S, Milsom L, Zillessen H, Blasone R, Gerdon F, Bach R, et al. Acceptability of app-based contact tracing for covid-19: cross-country survey study. JMIR Mhealth Uhealth 2020 Aug 28;8(8):e19857 [FREE Full text] [CrossRef] [Medline]
  16. Nunes N, Adamo G, Ribeiro M, R Gouveia B, Rubio Gouveia E, Teixeira P, et al. Modeling adoption, security, and privacy of covid-19 apps: findings and recommendations from an empirical study using the unified theory of acceptance and use of technology. JMIR Hum Factors 2022 Sep 14;9(3):e35434 [FREE Full text] [CrossRef] [Medline]
  17. Touzani R, Schultz E, Holmes SM, Vandentorren S, Arwidson P, Guillemin F, et al. Early acceptability of a mobile app for contact tracing during the covid-19 pandemic in France: national web-based survey. JMIR Mhealth Uhealth 2021 Jul 19;9(7):e27768 [FREE Full text] [CrossRef] [Medline]
  18. Jansen-Kosterink S, Hurmuz M, den Ouden M, van Velsen L. Predictors to use mobile apps for monitoring covid-19 symptoms and contact tracing: survey among Dutch citizens. JMIR Form Res 2021 Dec 20;5(12):e28416 [FREE Full text] [CrossRef] [Medline]
  19. Crawford A, Serhal E. Digital health equity and covid-19: the innovation curve cannot reinforce the social gradient of health. J Med Internet Res 2020 Jun 02;22(6):e19361 [FREE Full text] [CrossRef] [Medline]
  20. Jaffe DH, Lee L, Huynh S, Haskell TP. Health inequalities in the use of telehealth in the United States in the lens of covid-19. Popul Health Manag 2020 Oct 01;23(5):368-377. [CrossRef] [Medline]
  21. Phenicie R, Acosta Wright R, Holzberg J. Patient satisfaction with telehealth during covid-19: experience in a rural county on the United States-Mexico border. Telemed J E Health 2021 Aug 01;27(8):859-865. [CrossRef] [Medline]
  22. Hernandez-Ramos R, Aguilera A, Garcia F, Miramontes-Gomez J, Pathak LE, Figueroa CA, et al. Conducting internet-based visits for onboarding populations with limited digital literacy to an mHealth intervention: development of a patient-centered approach. JMIR Form Res 2021 Apr 29;5(4):e25299 [FREE Full text] [CrossRef] [Medline]
  23. Shah SGS, Nogueras D, van Woerden HC, Kiparoglou V. The covid-19 pandemic: a pandemic of lockdown loneliness and the role of digital technology. J Med Internet Res 2020 Nov 05;22(11):e22287 [FREE Full text] [CrossRef] [Medline]
  24. The Data Foundation. COVID Impact Survey.   URL: https://www.covid-impact.org/ [accessed 2023-01-24]
  25. NORC at the University of Chicago. COVID Impact Survey - Week 1: The Data Foundation: Field Report.   URL: https://elephant-strawberry-7dtj.squarespace.com/s/COVID-Impact_Field-Report_wk3.pdf [accessed 2023-01-24]
  26. NORC at the University of Chicago. COVID Impact Survey - Week 2: The Data Foundation: Field Report.   URL: https://elephant-strawberry-7dtj.squarespace.com/s/COVID-Impact_Field-Report_wk3.pdf [accessed 2023-01-24]
  27. NORC at the University of Chicago. COVID Impact Survey - Week 3: The Data Foundation: Field Report.   URL: https://elephant-strawberry-7dtj.squarespace.com/s/COVID-Impact_Field-Report_wk3.pdf [accessed 2023-01-24]
  28. Kapteyn A, Angrisani M, Bennett D, Bruine de Bruin W, Darling J, Gutsche T, et al. Tracking the effect of the covid-19 pandemic on American households. Survey Res Methods 2020;14(2):179-186. [CrossRef]
  29. United States Census Bureau. Current Population Survey Civic Engagement Supplement.   URL: https://catalog.data.gov/dataset/current-population-survey-civic-engagement-supplement [accessed 2023-01-24]
  30. StataCorp. Stata Statistical Software: Release 16. College Station, TX: StataCorp; 2019.
  31. Muthén LK, Muthén BO. Mplus User's Guide: Statistical Analysis with Latent Variables. Los Angeles, CA: Muthén & Muthén; 2017.
  32. Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Modeling 1999 Jan;6(1):1-55. [CrossRef]
  33. Bentler PM. On the fit of models to covariances and methodology to the bulletin. Psychol Bull 1992 Nov;112(3):400-404. [CrossRef] [Medline]
  34. Schmitt N, Kuljanin G. Measurement invariance: review of practice and implications. Hum Resource Manag Rev 2008 Dec;18(4):210-222. [CrossRef]
  35. Putnick DL, Bornstein MH. Measurement invariance conventions and reporting: the state of the art and future directions for psychological research. Dev Rev 2016 Sep;41:71-90 [FREE Full text] [CrossRef] [Medline]
  36. Gerbing DW, Hamilton JG. Viability of exploratory factor analysis as a precursor to confirmatory factor analysis. Struct Equ Modeling 1996 Jan;3(1):62-72. [CrossRef]
  37. Allison PD. Missing Data. Volume 136. Thousand Oaks, CA: SAGE; 2001.
  38. Emons WHM, Sijtsma K, Meijer RR. On the consistency of individual classification using short scales. Psychol Methods 2007;12(1):105-120. [CrossRef]
  39. Little TD, Lindenberger U, Nesselroade JR. On selecting indicators for multivariate measurement and modeling with latent variables: when "good" indicators are bad and "bad" indicators are good. Psychol Methods 1999 Jun;4(2):192-211. [CrossRef]
  40. Marsh HW, Hau K, Balla JR, Grayson D. Is more ever too much? The number of indicators per factor in confirmatory factor analysis. Multivar Behav Res 1998 Apr;33(2):181-220. [CrossRef]
  41. Scherr TF, DeSousa JM, Moore CP, Hardcastle A, Wright DW. App use and usability of a barcode-based digital platform to augment covid-19 contact tracing: postpilot survey and paradata analysis. JMIR Public Health Surveill 2021 Mar 26;7(3):e25859 [FREE Full text] [CrossRef] [Medline]
  42. Scherr TF, Hardcastle AN, Moore CP, DeSousa JM, Wright DW. Understanding on-campus interactions with a semiautomated, barcode-based platform to augment covid-19 contact tracing: app development and usage. JMIR Mhealth Uhealth 2021 Mar 26;9(3):e24275 [FREE Full text] [CrossRef] [Medline]
  43. Gadarian SK, Goodman SW, Pepinsky TB. Partisanship, health behavior, and policy attitudes in the early stages of the covid-19 pandemic. PLoS One 2021 Apr 7;16(4):e0249596 [FREE Full text] [CrossRef] [Medline]
  44. Gonzalez KE, James R, Bjorklund ET, Hill TD. Conservatism and infrequent mask usage: a study of US counties during the novel coronavirus (COVID-19) pandemic. Soc Sci Q 2021 Sep 27;102(5):2368-2382 [FREE Full text] [CrossRef] [Medline]
  45. Latkin CA, Dayton L, Yi G, Konstantopoulos A, Boodram B. Trust in a covid-19 vaccine in the U.S.: a social-ecological perspective. Soc Sci Med 2021 Feb;270:113684 [FREE Full text] [CrossRef] [Medline]
  46. Pew Research Center. Trends in Party Affiliation among Demographic Groups.   URL: https:/​/www.​pewresearch.org/​politics/​2018/​03/​20/​1-trends-in-party-affiliation-among-demographic-groups/​ [accessed 2022-03-26]
  47. Robillard K. What the Census Tells Us about American Politics.   URL: https://www.huffpost.com/entry/2020-census-politics-redistricting-senate_n_6115632be4b07b9118a8bf8f [accessed 2022-03-26]
  48. Weinzimmer LG, Dalstrom MD, Klein CJ, Foulger R, de Ramirez SS. The relationship between access to mental health counseling and interest in rural telehealth. J Rural Mental Health 2021 Jul;45(3):219-228. [CrossRef]
  49. Zhang D, Shi L, Han X, Li Y, Jalajel NA, Patel S, et al. Disparities in telehealth utilization during the COVID-19 pandemic: findings from a nationally representative survey in the United States. J Telemed Telecare 2021 Oct 11:1357633X2110516. [CrossRef]
  50. Wolf MS, Serper M, Opsasnick L, O'Conor RM, Curtis L, Benavente JY, et al. Awareness, attitudes, and actions related to covid-19 among adults with chronic conditions at the onset of the U.S. outbreak. Ann Intern Med 2020 Jul 21;173(2):100-109. [CrossRef]
  51. Seale H, Heywood AE, Leask J, Sheel M, Thomas S, Durrheim DN, et al. COVID-19 is rapidly changing: examining public perceptions and behaviors in response to this evolving pandemic. PLoS One 2020 Jun 23;15(6):e0235112 [FREE Full text] [CrossRef] [Medline]
  52. Callahan A, Steinberg E, Fries JA, Gombar S, Patel B, Corbin CK, et al. Estimating the efficacy of symptom-based screening for covid-19. NPJ Digit Med 2020 Jul 13;3(1):95 [FREE Full text] [CrossRef] [Medline]
  53. Abeler J, Bäcker M, Buermeyer U, Zillessen H. COVID-19 contact tracing and data protection can go together. JMIR Mhealth Uhealth 2020 Apr 20;8(4):e19359 [FREE Full text] [CrossRef] [Medline]
  54. Cuan-Baltazar JY, Muñoz-Perez MJ, Robledo-Vega C, Pérez-Zepeda MF, Soto-Vega E. Misinformation of covid-19 on the internet: infodemiology study. JMIR Public Health Surveill 2020 Apr 09;6(2):e18444 [FREE Full text] [CrossRef] [Medline]
  55. Ahmed W, Vidal-Alaball J, Downing J, López Seguí F. COVID-19 and the 5G conspiracy theory: social network analysis of Twitter data. J Med Internet Res 2020 May 06;22(5):e19458 [FREE Full text] [CrossRef] [Medline]
  56. Abd-Alrazaq A, Alhuwail D, Househ M, Hamdi M, Shah Z. Top concerns of tweeters during the covid-19 pandemic: infoveillance study. J Med Internet Res 2020 Apr 21;22(4):e19016 [FREE Full text] [CrossRef] [Medline]
  57. Gisondi MA, Barber R, Faust JS, Raja A, Strehlow MC, Westafer LM, et al. A deadly infodemic: social media and the power of covid-19 misinformation. J Med Internet Res 2022 Feb 01;24(2):e35552 [FREE Full text] [CrossRef] [Medline]


CFA: confirmatory factor analysis
CFI: comparative fit index
CIS: COVID Impact Survey
EFA: exploratory factor analysis
mHealth: mobile health
NORC: National Opinion Research Center
RMSEA: root mean square error of approximation
SRMR: standardized root mean square residual


Edited by A Mavragani; submitted 27.03.22; peer-reviewed by J Seitz, T Scherr, Z Li; comments to author 16.12.22; revised version received 26.12.22; accepted 04.01.23; published 07.02.23

Copyright

©Wilson Vincent. Originally published in JMIR Formative Research (https://formative.jmir.org), 07.02.2023.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.