This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.
Smartphone app–based ecological momentary assessment (EMA) without face-to-face contact between researcher and participant (hereafter, noncontact EMA) offers an alternative to traditional in-person EMA, but its feasibility has received little direct study.
This study aims to assess the feasibility of app-based noncontact EMA as a function of previous EMA experience (by comparing a group of participants who had never taken part in EMA against a group who had completed an earlier in-person EMA study) and of age (by recruiting middle-aged to older adults).
Overall, 151 potential participants were invited via email; 46.4% (70/151) enrolled in the study by completing the baseline questionnaire set and were emailed instructions for the EMA phase. Of these participants, 67.1% (47/70) downloaded an EMA app and ran the survey sequence for 1 week. Each day, 5 daytime surveys and 1 evening survey assessed participants’ listening environment, social activity, and conversational engagement. A semistructured exit interview was conducted at the end of the EMA week.
Enrollment rates among invitees were higher in the experienced EMA group than in the naïve group (63.3% vs 38.2%), as were completion rates among enrollees (83.9% vs 53.8%).
Smartphone app–based noncontact EMA appears to be feasible, although participants with previous EMA experience, younger participants, and iOS users performed better on certain markers of feasibility. Measures to increase feasibility may include extensive testing of the app with different phone types, encouraging participants to seek timely assistance for any issues experienced, and recruiting participants who have some previous EMA experience where possible. The limitations of this study include participants’ varying levels of existing relationship with the researcher and the implications of collecting data during the COVID-19 social restrictions.
Ecological momentary assessment (EMA) refers to a range of methods used for measuring daily life feelings, events, experiences, and behaviors in real-time, real-world settings [
Among its advantages, app-based noncontact EMA allows for the recruitment of large samples, or those drawn from specific or hard-to-reach populations, more easily than in-person EMA. It is convenient and can reduce or remove time and geographic barriers to participation. The participant burden associated with borrowing, becoming familiar with, and carrying around a supplied research smartphone is eliminated. For researchers, equipment costs are reduced, and allowing participants to use their own personal smartphone means that less training is required, thus saving time. When training is provided, it is more feasible to use a blanket approach (eg, distribution of generic written instructions only) with noncontact EMA than in-person EMA. Finally, noncontact EMA presents an alternative when situational factors, such as the COVID-19 restrictions, prevent in-person research.
However, there are also limitations associated with this method, not least that participants must own a smartphone. Furthermore, EMA schedules can be complex and demanding, and therefore require a certain degree of participant investment and a solid understanding among participants of what they are being asked to do, along with technological proficiency with the smartphone and app. This may be difficult to achieve without an in-person initiation. Finally, in allowing participants to run EMA on their own smartphones, researchers have less control over the study, and the wide variety of smartphone devices on the market makes it difficult to provide technical support to participants when required. These factors may have negative implications for the quality and quantity of the EMA data collected.
One may reasonably expect that previous experience of successful participation in EMA studies (in whatever form) would increase the likelihood of being able or willing to complete a subsequent app-based noncontact EMA study, although to the authors’ knowledge, this has never been explored. In contrast, participants who are new to the EMA method may be disadvantaged by the lack of in-person initiation and subsequently struggle with the noncontact EMA protocol. Regarding the potential effects of age, the high technological demand of EMA may prove challenging for older adults who tend to have lower rates of smartphone ownership, use [
To explore the prevalence of published app-based noncontact EMA research and uncover any evidence relating to the abovementioned effects of age and EMA experience, a search of the Scopus database was conducted in April 2020, using the terms
A meta-analytic approach was used to determine the characteristics of the 24 studies. The median sample size across the studies was 135 participants (range 17-6675). Among the studies which reported mean age and dichotomous male–female gender distribution of the sample, participants tended to be younger (mean 30.9, SD 8.0 years, based on 21 studies) and mostly women (mean 68.7% of the sample, based on 19 studies). The topics of study were smoking [
All studies used noncontact methods of recruitment, initiation, and EMA data collection, although the specific procedures varied across studies. Advertisement was conducted on the web via circulation of study details on social media [
Although the specifics of EMA data collection varied depending on the purpose of the study, all 24 studies required participants to run the app on their own smartphone; some specified that this must be running iOS [
None of the aforementioned 24 studies directly assessed the feasibility of app-based noncontact EMA in comparison with in-person EMA. To the authors’ knowledge, the only study to date to have done so is that of Carr et al [
From the foregoing, it is apparent that much remains unknown regarding the feasibility of app-based noncontact EMA. Specifically, evidence is lacking regarding the method’s feasibility with older adults and with experienced versus naïve EMA participants. Therefore, the research questions are as follows:
How does the feasibility of app-based noncontact EMA compare between participants who have never participated in EMA before and participants who have previous in-person EMA experience?
Does older age have an adverse effect on the feasibility of app-based noncontact EMA?
Given the paucity of previous evidence relating to these questions, this study adopted an exploratory approach rather than testing specific hypotheses.
In total, 151 members of our pre-existing participant pool were invited. Recruitment occurred in 2 stages. First, 32.5% (49/151) of people who had participated in an earlier in-person EMA study of listening and daily life fatigue conducted by Burke and Naylor [
Thereafter, an additional sample of 102 participants was invited. They had not participated in the EMA Fatigue study, although 19.6% (20/102) had been randomly selected and invited to participate in that study—an overlap that resulted from sampling from the same participant pool for both studies. Recruitment of this group provided the naïve EMA sample.
As the aim of recruitment was to recruit as many
Ethical approval was received from the West of Scotland Research Ethics Committee (18/WS/0007) and the National Health Service Research and Development (GN18EN094). This study was not preregistered.
Data were collected from June to August 2020. Email invitations included a participant information sheet and a link to the web-based consent form and baseline questionnaire set. The participants who provided consent and completed the questionnaires were sent a follow-up email containing instructions for the EMA phase.
The EMA sequence consisted of 7 full days of smartphone surveys. Each day, 5 daytime surveys and 1 evening survey were administered.
Telephone support from the researcher was available throughout the EMA phase for participants who experienced difficulties.
The web-based consent forms and baseline questionnaires were created and administered using the JISC Online Surveys [
Invitees’ age (years), gender (
In all, 3 questionnaires were administered at baseline and are described below. As the study was conducted during the COVID-19 restrictions, some of the questionnaires were modified to better reflect those circumstances. These questionnaires were not directly related to the research questions examined in this paper; in this context, they merely serve as a procedural step often found in EMA studies, namely the acquisition of descriptive variables at baseline. As this is a step at which participants may drop out, its presence is relevant for current purposes. Its content is not relevant, beyond assessing age associations and whether the naïve and experienced groups were equivalent on the variables collected.
An adapted Hearing Handicap Inventory for Adults/the Elderly [
A 1-time start-up survey elicited participants’ employment status (collapsed into employed vs not employed for analysis).
The daytime survey (occurring 5 times per day) elicited self-reports of the type of location, presence or absence of other people, conversational situation, level of background noise, and length of time spent in the situation.
The evening survey consisted of 6 questions regarding participation in and avoidance of social activity, conversational engagement, and hearing difficulty during that day.
All EMA surveys logged the make, model, and operating system of the phone used to respond.
Participants who completed the baseline questionnaires, but failed to proceed further, were emailed a short JISC survey exploring the reasons for noncompletion of the study. Prefixed by “I did not download the app because,” response options were (1) “I did not receive any instructions to do this,” (2) “The instructions looked too complicated,” (3) “I tried to download the app but found it too difficult/did not know how,” (4) “I did not want to,” (5) “I did not have time/I was too busy,” (6) “I did not realize this was part of the study,” (7) “I do not own a smartphone,” (8) “My smartphone would not download the app,” (9) “I have not got around to it yet,” and (10) “Other”. Participants were permitted to select only 1 response option.
The exit interview explored the acceptability of the set-up process (eg, ease of installing the app and completing the start-up survey), study participation (eg, satisfaction with auditory survey notifications and size of question text), procedure for ending the study (eg, data upload status and uninstallation of the app), and general topics (eg, use and usability of instruction guides and reactions of family and friends). In addition, participants with previous EMA experience were asked if they preferred using their own smartphone, as in this study, or a supplied smartphone, as in the EMA Fatigue study.
There were 2 primary predictors in this study: previous EMA experience and age. Participants were classified as experienced EMA participants if they had taken part in the EMA Fatigue study and as naïve EMA participants if they had not. These classifications were corroborated by participant responses to a question asking whether they had previously taken part in any smartphone-based research. Age (years) was treated as a continuous predictor. Interview responses, specifically the high proportion of Android users reporting issues with survey alerts, prompted the post hoc addition of one secondary predictor: phone operating system (coded as Android or iOS).
The outcome variables were the markers of the feasibility of app-based noncontact EMA: (1) enrollment rate, the percentage of invitees who enrolled in the study by completing the baseline questionnaire set; (2) completion rate, the percentage of enrollees who ran the EMA sequence for 1 week; (3) reason for noncompletion of the study among enrollees, coded dichotomously; (4) response rate on signal-contingent EMA surveys; (5) whether a survey alert issue was reported; and (6) whether assistance was requested from the researcher (marker 6a) or from family or friends (marker 6b).
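To make the first 2 markers concrete, they reduce to simple proportions over the participation counts reported in the Results (151 invitees, 70 enrollees, 47 completers). The following minimal Python sketch is illustrative only and is not part of the original analysis, which used SPSS:

```python
# Hedged sketch: feasibility markers 1 and 2 expressed as simple proportions.
# Counts are taken from the Results (N1 invitees, N2 enrollees, N4 completers).

def rate(numerator: int, denominator: int) -> float:
    """Return a percentage rounded to 1 decimal place."""
    return round(100 * numerator / denominator, 1)

n_invited = 151   # N1: received study invitation
n_enrolled = 70   # N2: completed baseline questionnaire set
n_completed = 47  # N4: ran the 7-day EMA sequence

enrollment_rate = rate(n_enrolled, n_invited)    # marker 1
completion_rate = rate(n_completed, n_enrolled)  # marker 2

print(enrollment_rate, completion_rate)  # 46.4 67.1
```

The same arithmetic applied within each experience group yields the per-group rates reported in the Results tables.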
First, the associations between experience group and the baseline factors were examined using independent-samples 2-tailed t tests for continuous variables and chi-square tests for categorical variables.
Across all analyses, chi-square tests were conducted only when the minimum cell count in each group was at least 5. The analyses were conducted using SPSS Statistics (version 25; IBM Corp). The α level was set to .05 for all analyses.
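As an illustration of the chi-square comparisons, the 2×2 enrollment-by-experience table implied by the Results (31/49 experienced vs 39/102 naïve invitees enrolling) can be tested with a hand-rolled Pearson statistic. This is a hedged sketch in plain Python rather than the authors' SPSS analysis, and it omits the Yates continuity correction that SPSS may additionally report:

```python
# Hedged sketch: Pearson chi-square for a 2x2 table, no continuity correction.
# Enrollment counts per experience group are taken from the Results.

def pearson_chi2(table):
    """Chi-square statistic for a 2D contingency table (list of lists)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: experienced (31 enrolled, 18 not), naive (39 enrolled, 63 not).
enrollment_table = [[31, 18], [39, 63]]
chi2 = pearson_chi2(enrollment_table)
print(round(chi2, 2))  # 8.34
```

For 1 degree of freedom, statistics above 3.84 are significant at α=.05, so this sketch is directionally consistent with the reported group difference in enrollment.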
Participation at each stage of the study. EMA: ecological momentary assessment.
The baseline characteristics and comparisons between the experienced and naïve EMA groups are presented in
Baseline characteristics and differences by experience group.
Characteristics | Experienced EMAa group (n=26)b | Naïve EMA group (n=21)b | Chi-square (df) | t test (df) | P value
Age (years), mean (SD), range | 66.2 (8.6), 45-78 | 65.1 (8.6), 46-75 | N/Ac | −0.45 (45) | .65 |
Gender (male), n (%) | 12 (46) | 8 (38) | 0.31 (1) | N/A | .58 |
Employed, n (%) | 6 (23) | 7 (33) | 0.61 (1) | N/A | .44 |
Degree of hearing loss in dB HLd, mean (SD) | 22.7 (14.2) | 22.1 (13.2) | N/A | −0.14 (45) | .89 |
Hearing aids users, n (%) | 9 (35) | 9 (43) | 0.33 (1) | N/A | .56 |
Previous laboratory appointments attended, mean (SD) | 5.2 (2.0) | 3.3 (2.2) | N/A | −3.04 (45) | .004 |
HHIA/Ee score (hearing handicap), mean (SD) | 32.3 (29.7) | 25.0 (17.6) | N/A | −1.04 (39.88) | .30 |
SALf score (social activity), mean (SD) | 1.9 (1.0) | 2.0 (0.9) | N/A | 0.34 (44) | .74 |
TRIg 2.0 score (techno-readiness), mean (SD) | 3.3 (0.7) | 3.2 (0.7) | N/A | −0.14 (44) | .89 |
Android users, n (%) | 15 (58) | 12 (57) | 0 (1) | N/A | .97 |
aEMA: ecological momentary assessment.
bMean (SD) and
cN/A: not applicable.
ddB HL: decibels in hearing level.
eHHIA/E: Hearing Handicap Inventory for Adults/the Elderly.
fSAL: social activity log.
gTRI: Technology Readiness Index.
As shown in
Markers of feasibility in the overall sample, experienced ecological momentary assessment (EMA) group, and naïve EMA group.
Characteristics | Overalla | Experiencedb | Naïvec |
N1 (participants receiving study invitation) | 151 | 49 | 102 |
N2 (participants initiating study) | 70 | 31 | 39 |
Marker 1: enrollment rate among invitees (%; N2/N1) | 46.4 | 63.3 | 38.2
N3 (participants launching EMA sequence) | 47 | 26 | 21 |
N4 (participants running the 7-day EMA sequence) | 47 | 26 | 21 |
Marker 2: completion rate among enrollees (%; N4/N2) | 67.1 | 83.9 | 53.8
N5 (participants completing at least one daytime EMA survey) | 45 | 25 | 20 |
N6 (participants partaking in the exit interview) | 47 | 26 | 21
Marker 4: signal-contingent survey response rate (%), mean (SD) | 61.4 (30.1) | 65.4 (30.7) | 56.3 (29.3) |
Marker 5: reported survey alert issue (n, yes) | 16 | 7 | 9 |
Marker 6a: requested assistance from researcher (n, yes) | 3 | 2 | 1 |
Marker 6b: requested assistance from family or friends (n, yes) | 6 | 2 | 4 |
aOverall sample.
bExperienced EMA group.
cNaïve EMA group.
The response rate on signal-contingent (ie, daytime and evening) EMA surveys (marker 4) ranged from 0% to 100% in the experienced EMA group and from 0% to 90.9% in the naïve EMA group. The response rate (as shown in
In all, 3 participants contacted the researcher for help during the study (marker 6a), all in relation to survey alerts not working, and 6 participants reported asking for help from a family member or friend with some technical aspects of the study (marker 6b). There was too little variation in the responses to assess the effect of past experience on these outcomes.
Unsurprisingly, older age was related to a greater likelihood of being retired (Wald
Age was unrelated to both enrollment rate (marker 1; Wald
The phone operating system was related to response rate (marker 4); in comparison with Android users, participants using iOS returned a higher response rate (mean 74.8%, SD 20.3% vs mean 48.5%, SD 31.4%;
Although both older age and using an Android operating system were related to reporting issues with survey alerts, the mean age of Android users in this study (mean 65.9, SD 7.9 years) was not significantly different from that of iOS users (mean 65.6, SD 9.4 years;
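For readers wanting to sanity-check the iOS versus Android response-rate comparison, Welch's t statistic can be recovered from summary statistics alone. The group sizes used below (20 iOS, 27 Android) are an assumption inferred from the baseline table rather than figures reported for this specific analysis, so the result is illustrative only:

```python
import math

# Hedged sketch: Welch's t statistic from the summary statistics reported
# for marker 4 (mean and SD of response rate by operating system).
# Group sizes (20 iOS, 27 Android) are an ASSUMPTION from the baseline table.

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic for two independent samples from summary stats."""
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return (mean1 - mean2) / se

t_stat = welch_t(74.8, 20.3, 20, 48.5, 31.4, 27)  # ~3.48 under assumed sizes
```

Working from summary statistics in this way cannot reproduce the exact degrees of freedom without the per-participant data, so it serves only as a plausibility check on the reported direction and size of the difference.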
The 26 experienced EMA participants who completed the study were asked if they preferred using their own smartphone, as in this study, or using a supplied smartphone, as in the EMA Fatigue study. Of these 26 participants, 22 (85%) preferred using their own phone, whereas 4 (15%) preferred using the supplied research phone and/or preferred the face-to-face method in general.
This study aimed to assess the feasibility of noncontact EMA. To the authors’ knowledge, this is only the second study to do so, after Carr et al [
Perhaps the most compelling finding from this study is that naïve participants were less likely to enroll in and complete the study than their experienced counterparts. However, confounding effects could not be ruled out. Specifically, experienced EMA participants had previously attended more in-person laboratory appointments than naïve participants, which may indicate a stronger existing relationship with the researcher or a greater willingness to participate in research. Future research would benefit from measuring and tightly controlling these factors. The observed higher rate of attrition among naïve participants between the baseline survey and the EMA phase is concerning and may indicate technological difficulties in downloading the app or a lack of understanding of the study demands. The low response to the follow-up survey among dropouts means that little information was gathered regarding the reasons for attrition; therefore, the feasibility of app-based noncontact EMA in this respect remains unclear. Notably, age was unrelated to both enrollment and completion rates, suggesting that the feasibility of this method is not sensitive to age (within the range studied here, 45-78 years).
Turning to the responses to individual survey alerts, it is notable that the response rate was unrelated to both previous EMA experience and age. At 61.4%, the response rate in this study corresponds closely to the 65% mean of response rates across other noncontact EMA studies [
Finally, a sizable number of participants reported that they did not receive some or all EMA survey alerts, and their response rates were unsurprisingly lower than those returned by unaffected participants. Whether participants truly experienced a technical issue, or were simply unaware of the alerts, cannot be determined from the data. However, older participants were disproportionately affected by this issue. These findings undermine the feasibility of app-based noncontact EMA and suggest that supplying participants with a preprogrammed smartphone (which would typically involve an in-person initiation session) yields better results in terms of data quantity. However, participants in this study who had experience using both their own smartphone and a supplied research smartphone for EMA overwhelmingly preferred using their own. Future app-based EMA research (both in-person and noncontact) should consider giving participants the option to use either their own smartphone or a supplied smartphone where possible [
In addition to the caveats mentioned above with respect to specific results, there are several wider-ranging factors that may limit the reliability and generalizability of this study’s results.
First, all participants had working email accounts, possibly indicating some level of technological competence. In support of this, the mean TRI 2.0 score obtained by this sample was slightly higher than that reported by Parasuraman and Colby [
Limited variation in, and a lack of data pertaining to, some of the markers of feasibility in this study suggest that they are not useful outcomes. Furthermore, despite attempts to explore the reasons for noncompletion of the study by sending a follow-up survey to the dropouts, only a few participants responded. Even less is known about why individuals declined to participate in this study in the first place. Crucially, the smartphone ownership status of this group, a key factor in terms of feasibility, is not known.
Finally, our method was noncontact insofar as participants did not attend in-person laboratory sessions; however, they did receive detailed instructions to guide them through the study and were contacted several times by email during recruitment, initiation, and enrollment, as detailed in the
This study assessed the feasibility of app-based noncontact EMA as a function of past EMA experience and age. Experienced EMA participants were more likely to enroll in and complete the study than naïve participants, whereas age was unrelated to both enrollment and completion rates. The response rate was acceptable and unrelated to both experience and age, but Android users returned markedly lower response rates than iOS users. Although a sizable number of (mostly older) participants reported in exit interviews that they were not always alerted to surveys, very few informed or sought assistance from the researchers during data collection. In summary, app-based noncontact EMA is feasible, although consideration should be given to how to increase enrollment and completion rates, especially when recruiting participants who are new to EMA.
EMA: ecological momentary assessment
TRI 2.0: Technology Readiness Index (version 2.0)
The authors wish to thank the participants who took part in this study, Pat Howell for his support in designing and running the study, and Andrew Lavens for his role in participant recruitment and the creation and distribution of web-based surveys. This work was supported by the Medical Research Council [grant number MR/S003576/1]; and the Chief Scientist Office of the Scottish Government.
None declared.