Original Paper
Abstract
Background: User engagement is crucial for the effectiveness of digital therapeutics (DTx); however, because of variations in how engagement is conceptualized and in intervention design, assessing and sustaining engagement remain challenging.
Objective: We investigated the influence of the perceived acceptability of experimental intervention components and satisfaction with core intervention components in DTx on user engagement, while also identifying potential barriers and facilitators to user engagement.
Methods: We conducted a mixed methods study with a 2 × 2 factorial design, involving 12 outpatients with atopic dermatitis. Participants were randomized into 4 experimental groups based on push notification (“basic” or “advanced”) and human coach (“on” or “off”) experimental intervention components. All participants engaged in self-monitoring and learning courses as core intervention components within an app-based intervention over 8 weeks. Data were collected through in-app behavioral data, physician- and self-reported questionnaires, and semistructured interviews assessed at baseline, 4 weeks, and 8 weeks. Descriptive statistics and thematic analysis were used to evaluate user engagement, perceived acceptability of experimental intervention components (ie, push notification and human coach), satisfaction with core intervention components (ie, self-monitoring and learning courses), and intervention effectiveness through clinical outcomes.
Results: The primary outcome indicated that group 4, provided with “advanced-level push notifications” and a “human coach,” showed higher completion rates for self-monitoring forms and learning courses compared to the predetermined threshold of clinical significance. Qualitative data analysis revealed three key themes: (1) perceived acceptability of the experimental intervention components, (2) satisfaction with the core intervention components, and (3) suggestions for improvement in the overall intervention program. Regarding clinical outcomes, the Perceived Stress Scale and Dermatology Life Quality Index scores presented the highest improvement in group 4.
Conclusions: These findings will help refine the intervention and inform the design of a subsequent randomized trial to test its effectiveness. Furthermore, this design may serve as a model for broadly examining and optimizing overall engagement in DTx and for future investigation into the complex relationship between engagement and clinical outcomes.
Trial Registration: Clinical Research Information Service KCT0007675; http://tinyurl.com/2m8rjrmv
doi:10.2196/51225
Introduction
Digital Therapeutics in General
With the rapid advancement of digital technology, digital therapeutics (DTx) have emerged as a promising approach that can either enhance the value of conventional health care delivery systems or substantially substitute for parts of the existing system [ ]. DTx refers to “an evidence-based intervention that is driven by high-quality software programs to prevent, manage, or treat a disease or disorder” [ ]. Using technology and data analytics, DTx holds numerous benefits in health care: (1) it can encompass a wide range of physical and mental health conditions (mostly chronic), such as diabetes, oncology treatment management, insomnia, attention-deficit/hyperactivity disorder (ADHD), and substance use disorder [ ]; (2) it can provide personalized care with data-driven treatment options [ ]; and (3) it can reduce health care costs [ ]. Given these significant potential benefits, it is crucial to understand how the efficacy of DTx can be improved. To achieve such improvement, diverse and comprehensive research on the DTx development process should be conducted to successfully implement and optimize these promising interventions.
User Engagement Issues in DTx
It is widely acknowledged that user engagement is important for improving the effectiveness of DTx [ ]. Engagement in DTx can be defined as “the extent (eg, amount, frequency, duration, and depth) of use and subjective experience characterized by attention, interest, and affect” [ , ]. Although user engagement significantly impacts the effectiveness of DTx, assessing and retaining it is challenging. Possible reasons include that (1) there is no shared understanding of how engagement is best conceptualized, (2) engagement in DTx is not a stationary but a dynamic process, and (3) it is a multifaceted construct capturing the user’s behavioral, cognitive, and emotional states. Several systematic reviews have investigated DTx intervention components (eg, self-monitoring, reminders, and rewards) that are linked with higher engagement [ , ]. However, these studies do not provide conclusive evidence about which intervention components help patients become more engaged with DTx, owing to substantial variation in the definition of engagement and in intervention design. Thus, an in-depth analysis of the intervention components and a concrete definition of user engagement should be established, particularly during the design phase of DTx.
Methods for Evaluating Intervention Components in Digital Intervention
To systematically evaluate how intervention design influences user engagement, optimization methods from the multiphase optimization strategy (MOST) can be used with a couple of representative intervention components drawn from a wide range of possible options. MOST allows for efficient testing through randomized experiments, including factorial experiments, which permit the simultaneous examination of different intervention design factors [ ]. Many recent studies, however, used only traditional randomized controlled trials (RCTs) as the primary study design to test the efficacy of the intervention as a package [ - ] and to examine the relationships between engagement level and clinical outcomes through post hoc analysis [ , , ]. Using only RCTs as an evaluation design may pose challenges to the effective evaluation of DTx, which are complex, context-dependent, and individually tailored interventions designed to maximize their effectiveness [ , ]. Thus, additional evaluation methods for DTx, such as adaptive study designs (eg, the sequential multiple assignment randomized trial and the factorial trial from MOST), must be considered to provide robust evidence during the design and development phases.
Aims of This Study
Here, we aimed to examine the impact of the perceived acceptability of the experimental intervention components (ie, push notification and human coach) and satisfaction with the core intervention components (ie, self-monitoring and learning courses) in DTx on user engagement. We used “Atomind,” a DTx for patients with atopic dermatitis (AD) developed for clinical trial purposes, with a primary focus on optimization as a refinement process before validating its effectiveness through larger RCTs. This was a proof-of-concept study with an experimental 2 × 2 factorial design, using both quantitative (eg, in-app behavioral data) and qualitative (eg, semistructured interviews) assessment methods. We hypothesized that those who received the advanced level of each experimental intervention component would pass the threshold of clinical significance for user engagement metrics in DTx. Moreover, the qualitative analysis of satisfaction with the core intervention components would identify potential barriers and facilitators to user engagement. This study could also inform how to optimize and evaluate other DTx in this field.
Methods
Study Design
This full factorial experiment had 2 experimental intervention components, each implemented at 2 different levels: push notification (“basic” or “advanced”) and human coach (“on” or “off”). Participants were randomly allocated to 1 of the 4 experimental groups in the 2 × 2 full factorial design. All participants engaged in self-monitoring and learning courses as core intervention components during the 8-week intervention period. We applied a mixed methods approach, collecting quantitative (eg, surveys) and qualitative (eg, semistructured interviews) data to examine the perceived acceptability of the experimental intervention components, satisfaction with the core intervention components, and suggestions for improvement in the overall intervention program. We conducted the interviews after 8 weeks of treatment.
Experimental Intervention Components
Push Notification
Participants randomized to “basic-level push notification” received basic push notifications that encouraged users to log in and complete tasks at time points chosen by the users. Participants randomized to “advanced-level push notification” received not only the basic push notifications but also additional push notifications whenever they did not complete the in-app self-monitoring forms, weekly classes, or missions after receiving the basic push notifications. The additional push notifications contained emotionally supportive phrases (eg, “It’s a bit annoying, right? But don’t forget that sustained use of the app can help reduce your symptoms.” and “Malang is waiting for <username>! Haven’t you finished the class yet? Don’t give up and let’s start!”). Push notifications were classified into 4 categories: self-monitoring, learning course, mission, and personalized feedback report. An overview of the 2 groups’ push notifications is presented in Multimedia Appendix 1.
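As a rough illustration of the two notification levels described above, the sketch below encodes the trigger rule: basic-level users receive a reminder at their chosen time, whereas advanced-level users additionally receive a supportive follow-up for each task left incomplete after the basic reminder. All names (Task, send_push, notify) and the delivery mechanism are illustrative assumptions, not the Atomind implementation.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str        # eg, "self-monitoring form", "weekly class", "mission"
    completed: bool  # whether the user finished it after the basic reminder

def send_push(user_id: str, message: str) -> None:
    # Stand-in for the actual push-delivery service (assumed, not Atomind's API).
    print(f"[push -> {user_id}] {message}")

def notify(user_id: str, tasks: list[Task], level: str, at_preferred_time: bool) -> None:
    """Basic level: one reminder at the user-chosen time.
    Advanced level: the basic reminder plus a supportive follow-up per incomplete task."""
    if at_preferred_time:
        send_push(user_id, "Time to log in and complete today's tasks.")
    if level == "advanced":
        for task in tasks:
            if not task.completed:
                send_push(user_id,
                          f"Don't forget: finishing your {task.name} can help reduce your symptoms.")

# Example: an advanced-level user who skipped the weekly class gets one follow-up.
notify("user-01",
       [Task("self-monitoring form", True), Task("weekly class", False)],
       level="advanced",
       at_preferred_time=True)
```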
Human Coach
Participants with this experimental intervention component turned “on” received tailored guidance and assistance from a human coach. The coach sent weekly motivational messages to maintain participants’ engagement through a separate application, KakaoTalk, the most popular instant messaging app in South Korea, used by 94% of the Korean population. The coach spent a total of 6 hours a week (2 hours a day over 3 days) managing the participants. The coach kept the participants motivated, held them accountable, provided feedback, and monitored their progress to keep them on track. Through this 2-way communication, participants could raise difficulties or questions they encountered with the app. Beyond app-related information, participants could also ask the coach questions about skin health and mental well-being. Conversely, participants with this experimental intervention component turned “off” received no coach support and managed their care on their own.
Participants
All participants were outpatients who met the eligibility criteria: they (1) were aged 19 years or older and had mild to severe AD, (2) were able to understand verbal and written Korean, and (3) had their own smartphone. Participants who met the eligibility criteria were randomly assigned to the 4 experimental groups in a 1:1:1:1 ratio using program IDs generated within the Atomind app.
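To illustrate a 1:1:1:1 allocation across the 4 cells of the 2 × 2 design, the sketch below shuffles participant IDs and distributes them evenly over the factor combinations. This is a generic block-randomization sketch with assumed identifiers, not the in-app, program ID–based procedure used in the study.

```python
import random
from itertools import product

# The 4 cells of the 2 x 2 factorial design: (push notification level, human coach).
CELLS = list(product(["basic", "advanced"], ["off", "on"]))

def allocate(participant_ids: list[str], seed: int = 2022) -> dict[str, tuple[str, str]]:
    """Assign participants to the 4 cells in a 1:1:1:1 ratio (simple block-randomization sketch)."""
    rng = random.Random(seed)
    ids = participant_ids[:]          # copy so the caller's list is untouched
    rng.shuffle(ids)
    # Cycle through the cells so each receives an equal share (12 participants -> 3 per cell).
    return {pid: CELLS[i % len(CELLS)] for i, pid in enumerate(ids)}

assignment = allocate([f"P{i:02d}" for i in range(1, 13)])
for pid, (push_level, coach) in sorted(assignment.items()):
    print(pid, "push:", push_level, "coach:", coach)
```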
Intervention
Atomind is an app-based intervention program that helps individuals manage skin conditions and AD symptoms. It was developed by Huray Inc, South Korea (sample screens are shown in Multimedia Appendix 2). The app’s content is based on cognitive behavioral theory (CBT) and a mindfulness approach to support healthy behavioral habits and the regulation of negative emotions. The app prompts users to complete in-app self-monitoring forms on a daily, weekly, and monthly basis, focusing on motivation, skin condition, behavioral change, and mental health. Weekly videos present educational information that can help relieve AD symptoms and CBT strategies for regulating negative thoughts and emotions. After watching each video, users were asked to demonstrate their understanding by passing a postquiz. The overall topics of the weekly videos are listed in Multimedia Appendix 3. Moreover, missions are provided to help users apply their newly acquired skills in real life. Users can access personalized graphic feedback based on their self-monitoring.
Outcomes
Outcome measures were collected by using (1) in-app behavioral data, (2) physician- or self-reported questionnaires, and (3) semistructured interviews. At baseline, participants were asked to complete a demographic questionnaire pertaining to their age, gender, educational level, and health-related measures (medical and family health history, health literacy, etc).
The primary outcome was user engagement with the intervention, measured by in-app behavioral data on the core intervention components, including the percentages of self-monitoring forms and learning courses completed. We collected qualitative data on the perceived acceptability of the experimental intervention components (ie, push notification and human coach), satisfaction with the core intervention components (ie, self-monitoring and learning courses), and suggestions for improvement in the overall intervention program through semistructured interviews. The interviews were conducted over the telephone by 2 research team members after 8 weeks of intervention. A semistructured interview guide (Multimedia Appendix 4) was used, and each interview lasted 15-20 minutes.
Furthermore, other clinical outcome measures were assessed at baseline, 4 weeks, and 8 weeks of intervention. Designated dermatologists assessed the severity of AD using the eczema area and severity index (EASI), which rates the severity of 4 signs (erythema, edema or papulation, excoriation, and lichenification; range 0-72) [ ]. Atopic eczema severity reported by patients was measured with the patient-oriented eczema measure (POEM; range 0-28), a 7-item questionnaire for monitoring the care of patients with atopic eczema [ ]. Insomnia severity was measured with the insomnia severity index (ISI; range 0-28), a 7-item questionnaire assessing perceived insomnia severity using a Likert-type scale [ ]. Perceived stress level was measured with the perceived stress scale (PSS; range 0-40), a 10-item questionnaire assessing psychological stress [ ]. Quality of life was measured with the dermatology life quality index (DLQI; range 0-30), a 10-item questionnaire assessing how much the patients’ skin problems have affected their lives over the past week [ ], and fear of negative evaluation was measured with the brief fear of negative evaluation (BFNE; range 12-60) scale, a 12-item questionnaire assessing the degree of anxiety about perceived negative evaluation [ ]. The assessment methods and assessment period for each measurement are shown in Multimedia Appendix 5.
Statistical Analysis
Descriptive statistics were used to analyze the quantitative data, including in-app behavioral data and clinical outcomes. We initially recruited and enrolled 12 participants, 3 per group; however, 3 participants (1 each in groups 2, 3, and 4) were excluded from the analysis because of medication changes during the intervention period.
We set the threshold of clinical significance (TCS) for user engagement, considering the period of each assessment. Previous research with larger sample sizes has shown that individuals with high efficacy typically maintain an engagement rate between 50% and 80% [ , ]. However, given the smaller sample size in this proof-of-concept study, a more stringent approach was applied in setting the TCS for user engagement. For self-monitoring, the TCS was met if the average completion rate of self-monitoring forms was ≥90%; for learning courses, the TCS was met if the average completion rate of learning courses was ≥80% throughout the intervention period.
The perceived acceptability of each experimental intervention component and satisfaction with each core intervention component were also examined through semistructured interviews. Qualitative data were analyzed using thematic analysis. Verbatim transcriptions of the interviews were used to extract the responses, which were categorized into items focusing on the perceived acceptability of the experimental intervention components, satisfaction with the core intervention components, and suggestions for improvement in the overall intervention program.
To measure the intervention’s effectiveness, we assessed the changes in mean clinical outcome scores (ie, EASI, POEM, ISI, PSS, DLQI, and BFNE) before and after the intervention across the 4 groups and across the 2 levels of each experimental intervention component.
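The following is a minimal sketch of the descriptive analysis under assumed, illustrative data: it checks a group’s average completion rates against the engagement thresholds defined above (≥90% for self-monitoring, ≥80% for learning courses) and computes the pre-post mean change and SD for a clinical outcome score. All data values and helper names are hypothetical.

```python
from statistics import mean, stdev

# Engagement thresholds of clinical significance (TCS) used in this study.
TCS = {"self_monitoring": 0.90, "learning_course": 0.80}

def completion_rate(completed: list[int], assigned: list[int]) -> float:
    """Average per-participant completion rate for one core component."""
    return mean(c / a for c, a in zip(completed, assigned))

def meets_tcs(component: str, completed: list[int], assigned: list[int]) -> bool:
    return completion_rate(completed, assigned) >= TCS[component]

def mean_change(pre: list[float], post: list[float]) -> tuple[float, float]:
    """Pre-post mean change and SD of the individual changes (negative = improvement)."""
    changes = [b - a for a, b in zip(pre, post)]
    return mean(changes), stdev(changes)

# Hypothetical group of 2 analyzed participants: daily forms completed out of 56 days.
print(meets_tcs("self_monitoring", completed=[55, 53], assigned=[56, 56]))  # True (96.4%)
# Hypothetical PSS scores at baseline and week 8 for the same participants.
print(mean_change(pre=[22.0, 18.0], post=[19.0, 16.0]))                      # (-2.5, 0.707...)
```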
Ethical Considerations
All study activities were conducted in adherence to ethical standards and received approval from the institutional review boards of the organizing sites, including Severance Hospital (4-2022-0922), Wonju Severance Christian Hospital (CR322035), and Bundang Cha Hospital (CHAMC 2022-05-005-001). The trial was registered on the Clinical Research Information Service (KCT0007675). Participants provided voluntary, written, and informed consent after a thorough explanation of the clinical trial. Privacy measures included data anonymization and secure storage. Participants received US $80 in compensation for their contribution, a detail communicated during the informed consent process.
Results
Sample Characteristics
A total of 12 adults (mean age 31.1 years; range 20-43 years) were recruited between August and November 2022. Of the 12 participants, 2 (17%) had mild AD, 7 (58%) had moderate AD, and 3 (25%) had severe AD. More details regarding the sample characteristics are presented in the table below.
Characteristics | Frequency, n (%)
Sex
  Female | 5 (42)
  Male | 7 (58)
Age (years)
  20-29 | 5 (42)
  30-39 | 5 (42)
  40-49 | 2 (17)
Education level
  High school graduate or less | 4 (33)
  Currently enrolled in or graduated from college | 7 (58)
  Currently enrolled in or graduated from graduate school | 1 (8)
Severity of atopic dermatitis
  Mild | 2 (17)
  Moderate | 7 (58)
  Severe | 3 (25)
Duration of disease (years)
  ≤10 | 1 (8)
  11-20 | 4 (33)
  21-30 | 6 (50)
  >30 | 1 (8)
Comorbidity of other allergic diseases
  Atopic dermatitis only | 3 (25)
  Comorbid with other allergic diseases | 9 (75)
Family history of allergic diseases
  Atopic dermatitis | 4 (33)
  Allergic rhinitis | 4 (33)
  Food allergy | 1 (8)
  None | 3 (25)
Alcohol consumption frequency over the past year
  Not at all in the past year | 4 (33)
  Less than once a month | 1 (8)
  About once a month | 1 (8)
  2-4 times a month | 2 (17)
  2-3 times a week | 4 (33)
Health literacy
  Health literacy (score 15 out of 15) | 9 (75)
Primary Outcome
Regarding the user engagement rates among the different groups (Figure 4A and 4B), groups 2 (90.9%), 3 (95.5%), and 4 (97%) showed higher completion rates for self-monitoring compared with the predetermined TCS (90%). Additionally, groups 2 (83.3%) and 4 (91.7%) had higher completion rates for learning courses than the TCS (80%). These results indicate that group 4, provided with advanced-level push notifications and a human coach, had the highest user engagement during the intervention.
As shown in Figure 4C and 4D, “advanced-level push notification” (93.9%) and “human coach on” (96.2%) were the experimental intervention components that exceeded the predetermined TCS for self-monitoring (90%). The experimental intervention components that exceeded the TCS for learning courses (80%) were also “advanced-level push notification” (87.5%) and “human coach on” (85.4%). Overall, “advanced-level push notification” and “human coach on” demonstrated the highest user engagement among the experimental intervention components.
Secondary Outcome
Qualitative data were organized into three key themes: (1) perceived acceptability of the experimental intervention components, (2) satisfaction with the core intervention components, and (3) suggestions for improvement in the overall intervention program. The themes, subthemes, and components are outlined below.
Key theme 1: perceived acceptability of the experimental intervention components
  Push notification component
    Technical aspect: basic push notification; advanced push notification
    Content aspect: basic push notification; advanced push notification
  Human coach component
    Technical aspect: human coach off; human coach on
    Content aspect: human coach off; human coach on
Key theme 2: satisfaction with the core intervention components
  Self-monitoring: building health habits
  Learning course: acquiring reliable information
Key theme 3: suggestions for improvement on the overall intervention program
  Diversity of daily self-monitoring form questions
  Burdensomeness of self-monitoring feature
  Not tailored contents
  Motivating factors
  Technical issues
Key Theme 1: Perceived Acceptability of the Experimental Intervention Components
Perceived acceptability was assessed in terms of the components’ technical and content aspects. Regarding the technical aspect of the push notification component, 60% (3/5) of participants receiving “basic-level push notifications” responded that they would like the push notification frequency to increase, and 20% (1/5) responded that it would be better to be able to select the time and frequency of the push notifications and to be reminded if they did not complete a task. Of those receiving “advanced-level push notifications,” 50% (2/4) were satisfied overall with the current push notification frequency. Regarding the content aspect of this component, both groups responded that they were satisfied with the notification contents provided. However, 20% (1/5) of participants receiving “basic-level push notifications” suggested that it would be helpful to receive a push notification reminding them to take medication or apply moisturizer.
Regarding the technical aspect of the human coach component, 40% (2/5) of participants assigned to “human coach off” requested a 1:1 communication channel within the app, as they could not receive assistance from a human coach. A total of 75% (3/4) of participants assigned to “human coach on” preferred an in-app communication channel rather than a separate instant messaging app (ie, KakaoTalk) for communication with the human coach, and 25% (1/4) also suggested integrating a community feature for patients to communicate with each other. Regarding the content aspect of the human coach component, 20% (1/5) of participants assigned to “human coach off” suggested adding a telehealth feature for emergencies, and 25% (1/4) of participants assigned to “human coach on” preferred the coach to ask specific questions related to symptom management, such as “Have you taken your medicine today?” or “Have you visited the hospital?” rather than questions about app use, such as “Is there anything difficult or uncomfortable while using the app?”
Key Theme 2: Satisfaction With the Core Intervention Components
Satisfaction was assessed for each core intervention component: self-monitoring and learning courses. Regarding the self-monitoring component, 78% (7/9) of participants reported that self-monitoring helped them build health habits, including better medication adherence, reduced scratching behavior, and consistent use of moisturizers. Moreover, they could easily track their symptoms through the weekly reports, which helped them monitor their symptoms over time. Regarding the learning course component, 56% (5/9) of participants indicated that they acquired reliable information through the weekly videos.
Key Theme 3: Suggestions for Improvement on Overall Intervention Program
Suggestions for improvement in the overall intervention program were divided into 5 subthemes. First, 33% (3/9) of participants recommended diversifying the questions in the daily self-monitoring form, as they found them repetitive and lacking in variation. Second, 22% (2/9) of participants found the self-monitoring burdensome because they had to upload lesion pictures daily. Third, 44% (4/9) of participants felt the learning course was not sufficiently tailored to their needs: they found the video content insufficiently helpful for patients with severe disease, the postquiz questions unchallenging, and the videos too long. Fourth, 11% (1/9) of participants suggested adding motivating factors to the intervention program to make them more engaged with the app. Lastly, 33% (3/9) of participants mentioned technical issues within the app and recommended improving its performance, such as fixing bugs in the self-monitoring feature, reducing duplicate push notifications, and improving video sound quality.
Clinical Outcomes
Descriptive statistics, for example, mean (SD), were used to analyze clinical outcomes by group and by experimental intervention component. The ISI score showed the greatest improvement in group 2 (mean change –4.50, SD 6.36). The EASI, POEM, and BFNE scores showed the greatest improvement in group 3 (mean change –10.20, SD 9.90; mean change –3.00, SD 15.56; and mean change –4.50, SD 6.36, respectively). The PSS and DLQI scores showed the greatest improvement in group 4 (mean change –3.50, SD 3.54, and mean change –6.00, SD 11.32, respectively).
Regarding the push notification component, the ISI, PSS, DLQI, and BFNE scores showed the greatest improvement with the “advanced-level push notification” component (mean change –0.25, SD 6.34; mean change –1.75, SD 2.99; mean change –4.75, SD 8.06; and mean change –3.00, SD 3.46, respectively). Regarding the human coach component, the EASI, POEM, PSS, and BFNE scores showed the greatest improvement with the “human coach on” component (mean change –6.73, SD 7.41; mean change –1.50, SD 11.73; mean change –3.00, SD 2.94; and mean change –4.25, SD 3.69, respectively).
Discussion
Principal Findings
Our primary objective was to investigate how the perceived acceptability of experimental intervention components and satisfaction with core intervention components affect user engagement in DTx. We examined in-app behavioral data on the core intervention components (ie, percentages of self-monitoring forms and learning courses completed) as the user engagement metric. As hypothesized, the TCS for user engagement was achieved in group 4, where both experimental factors were set to their advanced levels simultaneously. Furthermore, clinical outcomes related to the mental health of patients with AD improved in group 4. This study also identified potential barriers and facilitators of user engagement through semistructured interviews on the patients’ satisfaction with the core intervention components. Overall, our analysis of Atomind data suggests that incorporating advanced-level push notifications with a human coach, tailoring content with various self-monitoring tools, and implementing motivational factors (eg, rewards) may improve user engagement.
Comparison With Previous Work
To the best of our knowledge, this is the first study to examine the impact of different levels of push notification, a human coach, and satisfaction with core intervention components on user engagement in DTx using mixed methods. Although there is a proliferation of clinical research on user engagement with mobile health apps, the majority only conducted traditional RCTs [ - ] or optimization trials with a single type of assessment method [ - ]. The findings from these earlier studies with traditional RCTs only explained how the intervention as a package affected user engagement; they could not identify the specific intervention elements that impacted it [ ]. Additionally, assessing only quantitative data from optimization trials (eg, factorial experiments) limits the understanding of the barriers and facilitators affecting user engagement [ , ]. In contrast, this study clearly showed that advanced-level push notifications and communication with a human coach are the main factors enhancing user engagement. Furthermore, our qualitative analysis showed that advanced-level push notifications were frequent enough to serve as reminders in busy daily lives, and their content was concise enough to be acceptable. Although communication with a human coach improved user engagement, our qualitative findings suggest that the human coach platform should have been implemented within the Atomind app itself, with more diverse questions and more detailed responses. Using a mixed methods approach to assess the various factors contributing to user engagement in Atomind enabled us to gain insights into the “what, how, and why” of this phenomenon, which is critical to determining what steps must be taken to improve an intervention.
We established a TCS for user engagement because this is a proof-of-concept study with a small sample size. This approach allows for resource-efficient research with clear go-or-no-go decision-making, lowering the risk of confirmatory bias [ , ]. Concerning TCS determination, previous studies each established their own logic and multiple metrics to account for user engagement [ , ], because user engagement is a multifaceted concept with no universal consensus on how to perceive it [ , , ]. Among the various metrics of user engagement in previous research, the completion of specific activities or modules of the intervention was the most commonly used [ - , , , , ]. Similarly, we measured user engagement in the app by assessing the completion rate of the core intervention components. In this study, self-monitoring was a daily activity, whereas learning courses were a weekly activity; thus, we set different TCS levels for each activity, with completion rate thresholds of 90% for self-monitoring forms and 80% for learning courses.
Regarding the clinical outcomes of this study, people who received the advanced level of the experimental intervention components saw improvement mainly in psychological symptoms (eg, stress, quality of life, and fear of negative evaluation), more so than in the physical symptoms related to AD. These findings correspond with previous research suggesting that digital interventions should focus mainly on improving mental health conditions to support better physical health conditions [ , ]. This trend is driven by several inherent factors of mental health interventions, including the stigma associated with mental health problems and diagnosis-specific barriers to accessing mental health services [ ]. Likewise, Atomind is a digital intervention for patients with AD that encourages healthy behaviors and better mental health for effective symptom management. Thus, the improvement in psychological measures through engagement with Atomind indicates that it achieved the intended proximal outcome.
Limitations and Future Directions
First, the statistical power of this study is insufficient to detect significant pre-post intervention effects. However, setting a reasonable TCS for the quantitative data and collecting qualitative data support our findings on DTx optimization for use in well-powered RCTs. Second, the Atomind app is available only on the Android operating system. To overcome this limitation, we provided Android smartphones during the intervention period to those (n=5) whose smartphones ran other operating systems. Despite this effort, the user experience with Atomind, which is closely related to user engagement, may have been affected. Lastly, technical issues with the app occurred frequently during the intervention period, which may have affected user engagement. As Atomind was still in the development phase, such problems were to be expected; however, its technical system should be improved in later versions before use in future clinical research.
Conclusions
This proof-of-concept, mixed methods study with an experimental 2 × 2 factorial design demonstrates the impact that perceived acceptability of experimental intervention components and satisfaction with core intervention components in DTx have on user engagement. The findings will be used to refine the intervention and inform the design of the next RCT to test its effectiveness. Furthermore, this research design may serve as a model for how to examine and optimize overall engagement in DTx in broad terms; it will help future research investigate the complex relationship between engagement and clinical outcomes.
Acknowledgments
This research was supported by the Seoul R&BD Program (grant BT210048; project name: Development and Demonstration of a Digital Therapeutics Platform Service for Atopic Dermatitis Treatment) through the Seoul Business Agency, funded by the Seoul Metropolitan Government.
Data Availability
The data sets generated and/or analyzed during this study are not publicly available due to the need to maintain privacy and confidentiality, but are available from the corresponding author on reasonable request. Requests for access to specific data points or additional information will be considered on a case-by-case basis.
Authors' Contributions
MK and JS conceptualized and developed the study’s design. MK provided the intellectual framework for this research. EHC, JUS, and TGK were in charge of the recruitment and data collection of participants. JO and BS served as human coaches, providing guidance and assistance to the participants. HL and JYS conducted interviews and handled the analysis of qualitative data. HL and MK contributed significantly to the data analysis and interpretation. HL and MK wrote the manuscript and edited its contents. MK and JS conducted a thorough review of the manuscript. All authors approved the final version of the manuscript for submission.
Conflicts of Interest
None declared.
Multimedia Appendix 1: Overview of push notifications between two groups (DOCX File, 17 KB)
Multimedia Appendix 2: Atomind app sample screen (DOCX File, 758 KB)
Multimedia Appendix 3: Overall topics of weekly videos (DOCX File, 15 KB)
Multimedia Appendix 4: Semi-structured interview guideline (DOCX File, 17 KB)
Multimedia Appendix 5: Assessment methods and assessment period for each measurement (DOCX File, 15 KB)
References
- Dang A, Arora D, Rane P. Role of digital therapeutics and the changing future of healthcare. J Family Med Prim Care. 2020;9(5):2207-2213. [FREE Full text] [CrossRef] [Medline]
- Understanding DTx—what is a DTx? Digital Therapeutics Alliance. 2023. URL: https://dtxalliance.org/ [accessed 2024-01-05]
- Hong JS, Wasden C, Han DH. Introduction of digital therapeutics. Comput Methods Programs Biomed. 2021;209:106319. [CrossRef] [Medline]
- Wongvibulsin S, Frech TM, Chren MM, Tkaczyk ER. Expanding personalized, data-driven dermatology: leveraging digital health technology and machine learning to improve patient outcomes. JID Innov. 2022;2(3):100105. [FREE Full text] [CrossRef] [Medline]
- Khirasaria R, Singh V, Batta A. Exploring digital therapeutics: the next paradigm of modern health-care industry. Perspect Clin Res. 2020;11(2):54-58. [FREE Full text] [CrossRef] [Medline]
- Kim M, Yang J, Ahn WY, Choi HJ. Machine learning analysis to identify digital behavioral phenotypes for engagement and health outcome efficacy of an mHealth intervention for obesity: randomized controlled trial. J Med Internet Res. 2021;23(6):e27218. [FREE Full text] [CrossRef] [Medline]
- Perski O, Blandford A, West R, Michie S. Conceptualising engagement with digital behaviour change interventions: a systematic review using principles from critical interpretive synthesis. Transl Behav Med. 2017;7(2):254-267. [FREE Full text] [CrossRef] [Medline]
- Saleem M, Kühne L, De Santis KK, Christianson L, Brand T, Busse H. Understanding engagement strategies in digital interventions for mental health promotion: scoping review. JMIR Ment Health. 2021;8(12):e30000. [FREE Full text] [CrossRef] [Medline]
- Kelders SM, Kok RN, Ossebaard HC, Van Gemert-Pijnen JEWC. Persuasive system design does matter: a systematic review of adherence to web-based interventions. J Med Internet Res. 2012;14(6):e152. [FREE Full text] [CrossRef] [Medline]
- Schubart JR, Stuckey HL, Ganeshamoorthy A, Sciamanna CN. Chronic health conditions and internet behavioral interventions: a review of factors to enhance user engagement. Comput Inform Nurs. 2011;29(2):81-92. [FREE Full text] [CrossRef] [Medline]
- Collins LM, Murphy SA, Nair VN, Strecher VJ. A strategy for optimizing and evaluating behavioral interventions. Ann Behav Med. 2005;30(1):65-73. [FREE Full text] [CrossRef] [Medline]
- Katula JA, Dressler EV, Kittel CA, Harvin LN, Almeida FA, Wilson KE, et al. Effects of a digital diabetes prevention program: an RCT. Am J Prev Med. 2022;62(4):567-577. [FREE Full text] [CrossRef] [Medline]
- Kim M, Kim Y, Go Y, Lee S, Na M, Lee Y, et al. Multidimensional cognitive behavioral therapy for obesity applied by psychologists using a digital platform: open-label randomized controlled trial. JMIR Mhealth Uhealth. 2020;8(4):e14817. [FREE Full text] [CrossRef] [Medline]
- Leightley D, Williamson C, Rona RJ, Carr E, Shearer J, Davis JP, et al. Evaluating the efficacy of the drinks: ration mobile app to reduce alcohol consumption in a help-seeking Military Veteran population: randomized controlled trial. JMIR Mhealth Uhealth. 2022;10(6):e38991. [FREE Full text] [CrossRef] [Medline]
- Selaskowski B, Steffens M, Schulze M, Lingen M, Aslan B, Rosen H, et al. Smartphone-assisted psychoeducation in adult attention-deficit/hyperactivity disorder: a randomized controlled trial. Psychiatry Res. 2022;317:114802. [CrossRef] [Medline]
- Geramita EM, Belnap BH, Abebe KZ, Rothenberger SD, Rotondi AJ, Rollman BL. The association between increased levels of patient engagement with an internet support group and improved mental health outcomes at 6-month follow-up: post-hoc analyses from a randomized controlled trial. J Med Internet Res. 2018;20(7):e10402. [FREE Full text] [CrossRef] [Medline]
- Zimmermann G, Venkatesan A, Rawlings K, Scahill MD. Improved glycemic control with a digital health intervention in adults with type 2 diabetes: retrospective study. JMIR Diabetes. 2021;6(2):e28033. [FREE Full text] [CrossRef] [Medline]
- Hrynyschyn R, Prediger C, Stock C, Helmer SM. Evaluation methods applied to digital health interventions: what is being used beyond randomised controlled trials?-A scoping review. Int J Environ Res Public Health. 2022;19(9):5221. [FREE Full text] [CrossRef] [Medline]
- Kim M, Patrick K, Nebeker C, Godino J, Stein S, Klasnja P, et al. The Digital Therapeutics Real World Evidence Framework: An approach for guiding evidence-based DTx design, development, testing, and monitoring. JMIR Preprints. Preprint posted online May 21, 2023. [CrossRef]
- Hanifin JM, Thurston M, Omoto M, Cherill R, Tofte SJ, Graeber M. The Eczema Area and Severity Index (EASI): assessment of reliability in atopic dermatitis. EASI Evaluator Group. Exp Dermatol. 2001;10(1):11-18. [CrossRef] [Medline]
- Charman CR, Venn AJ, Williams HC. The patient-oriented eczema measure: development and initial validation of a new tool for measuring atopic eczema severity from the patients' perspective. Arch Dermatol. 2004;140(12):1513-1519. [FREE Full text] [CrossRef] [Medline]
- Bastien CH, Vallières A, Morin CM. Validation of the insomnia severity index as an outcome measure for insomnia research. Sleep Med. 2001;2(4):297-307. [CrossRef] [Medline]
- Lee EH. Review of the psychometric evidence of the perceived stress scale. Asian Nurs Res (Korean Soc Nurs Sci). 2012;6(4):121-127. [FREE Full text] [CrossRef] [Medline]
- Finlay AY, Khan GK. Dermatology Life Quality Index (DLQI)--a simple practical measure for routine clinical use. Clin Exp Dermatol. 1994;19(3):210-216. [CrossRef] [Medline]
- Weeks JW, Heimberg RG, Fresco DM, Hart TA, Turk CL, Schneier FR, et al. Empirical validation and psychometric evaluation of the brief fear of negative evaluation scale in patients with social anxiety disorder. Psychol Assess. 2005;17(2):179-190. [CrossRef] [Medline]
- Kiadaliri A, Dell'Isola A, Lohmander LS, Hunter DJ, Dahlberg LE. Assessing the importance of predictors of adherence to a digital self‑management intervention for osteoarthritis. J Orthop Surg Res. 2023;18(1):97. [FREE Full text] [CrossRef] [Medline]
- Zeng Y, Guo Y, Li L, Hong YA, Li Y, Zhu M, et al. Relationship between patient engagement and depressive symptoms among people living with HIV in a mobile health intervention: secondary analysis of a randomized controlled trial. JMIR Mhealth Uhealth. 2020;8(10):e20847. [FREE Full text] [CrossRef] [Medline]
- Anan T, Kajiki S, Oka H, Fujii T, Kawamata K, Mori K, et al. Effects of an artificial intelligence-assisted health program on workers with neck/shoulder pain/stiffness and low back pain: randomized controlled trial. JMIR Mhealth Uhealth. 2021;9(9):e27535. [FREE Full text] [CrossRef] [Medline]
- Batterham PJ, Calear AL, Sunderland M, Kay-Lambkin F, Farrer LM, Christensen H, et al. A brief intervention to increase uptake and adherence of an internet-based program for depression and anxiety (enhancing engagement with psychosocial interventions): randomized controlled trial. J Med Internet Res. 2021;23(7):e23029. [FREE Full text] [CrossRef] [Medline]
- Beleigoli A, Andrade AQ, De Fatima Diniz M, Ribeiro AL. Personalized web-based weight loss behavior change program with and without dietitian online coaching for adults with overweight and obesity: randomized controlled trial. J Med Internet Res. 2020;22(11):e17494. [FREE Full text] [CrossRef] [Medline]
- Carolan S, Harris PR, Greenwood K, Cavanagh K. Increasing engagement with an occupational digital stress management program through the use of an online facilitated discussion group: results of a pilot randomised controlled trial. Internet Interv. 2017;10:1-11. [FREE Full text] [CrossRef] [Medline]
- Gilbody S, Brabyn S, Lovell K, Kessler D, Devlin T, Smith L, et al. Telephone-supported computerised cognitive-behavioural therapy: REEACT-2 large-scale pragmatic randomised controlled trial. Br J Psychiatry. 2017;210(5):362-367. [FREE Full text] [CrossRef] [Medline]
- Lim MH, Rodebaugh TL, Eres R, Long KM, Penn DL, Gleeson JFM. A pilot digital intervention targeting loneliness in youth mental health. Front Psychiatry. 2019;10:604. [FREE Full text] [CrossRef] [Medline]
- Linardon J, Messer M, Shatte A, Greenwood CJ, Rosato J, Rathgen A, et al. Does the method of content delivery matter? Randomized controlled comparison of an internet-based intervention for eating disorder symptoms with and without interactive functionality. Behav Ther. 2022;53(3):508-520. [CrossRef] [Medline]
- Perski O, Crane D, Beard E, Brown J. Does the addition of a supportive chatbot promote user engagement with a smoking cessation app? An experimental study. Digit Health. 2019;5:1-13. [FREE Full text] [CrossRef] [Medline]
- Renfrew ME, Morton DP, Morton JK, Hinze JS, Beamish PJ, Przybylko G, et al. A web- and mobile app-based mental health promotion intervention comparing email, short message service, and videoconferencing support for a healthy cohort: randomized comparative study. J Med Internet Res. 2020;22(1):e15592. [FREE Full text] [CrossRef] [Medline]
- Steinberg DM, Kay MC, Svetkey LP, Askew S, Christy J, Burroughs J, et al. Feasibility of a digital health intervention to improve diet quality among women with high blood pressure: randomized controlled feasibility trial. JMIR Mhealth Uhealth. 2020;8(12):e17536. [FREE Full text] [CrossRef] [Medline]
- Taylor H, Cavanagh K, Field AP, Strauss C. Health care workers' need for headspace: findings from a multisite definitive randomized controlled trial of an unguided digital mindfulness-based self-help app to reduce healthcare worker stress. JMIR Mhealth Uhealth. 2022;10(8):e31744. [FREE Full text] [CrossRef] [Medline]
- Vidmar AP, Salvy SJ, Wee CP, Pretlow R, Fox DS, Yee JK, et al. An addiction-based digital weight loss intervention: a multi-centre randomized controlled trial. Pediatr Obes. 2023;18(3):e12990. [FREE Full text] [CrossRef] [Medline]
- Bidargaddi N, Pituch T, Maaieh H, Short C, Strecher V. Predicting which type of push notification content motivates users to engage in a self-monitoring app. Prev Med Rep. 2018;11:267-273. [FREE Full text] [CrossRef] [Medline]
- Graham AL, Papandonatos GD, Jacobs MA, Amato MS, Cha S, Cohn AM, et al. Optimizing text messages to promote engagement with internet smoking cessation treatment: results from a factorial screening experiment. J Med Internet Res. 2020;22(4):e17734. [FREE Full text] [CrossRef] [Medline]
- Materia FT, Smyth JM. Acceptability of intervention design factors in mHealth intervention research: experimental factorial study. JMIR Mhealth Uhealth. 2021;9(7):e23303. [FREE Full text] [CrossRef] [Medline]
- Palermo TM, de la Vega R, Murray C, Law E, Zhou C. A digital health psychological intervention (WebMAP Mobile) for children and adolescents with chronic pain: results of a hybrid effectiveness-implementation stepped-wedge cluster randomized trial. Pain. 2020;161(12):2763-2774. [FREE Full text] [CrossRef] [Medline]
- Tombor I, Beard E, Brown J, Shahab L, Michie S, West R. Randomized factorial experiment of components of the SmokeFree baby smartphone application to aid smoking cessation in pregnancy. Transl Behav Med. 2019;9(4):583-593. [FREE Full text] [CrossRef] [Medline]
- Baretta D, Amrein MA, Bäder C, Ruschetti GG, Rüttimann C, Del Rio Carral M, et al. Promoting hand hygiene during the COVID-19 pandemic: parallel randomized trial for the optimization of the Soapp app. JMIR Mhealth Uhealth. 2023;11:e43241. [FREE Full text] [CrossRef] [Medline]
- Yardley L, Morrison L, Bradbury K, Muller I. The person-based approach to intervention development: application to digital health-related behavior change interventions. J Med Internet Res. 2015;17(1):e30. [FREE Full text] [CrossRef] [Medline]
- Gumley AI, Bradstreet S, Ainsworth J, Allan S, Alvarez-Jimenez M, Birchwood M, et al. Digital smartphone intervention to recognise and manage early warning signs in schizophrenia to prevent relapse: the EMPOWER feasibility cluster RCT. Health Technol Assess. 2022;26(27):1-174. [FREE Full text] [CrossRef] [Medline]
- Browne J, Halverson TF, Vilardaga R. Engagement with a digital therapeutic for smoking cessation designed for persons with psychiatric illness fully mediates smoking outcomes in a pilot randomized controlled trial. Transl Behav Med. 2021;11(9):1717-1725. [FREE Full text] [CrossRef] [Medline]
- Felder JN, Epel ES, Neuhaus J, Krystal AD, Prather AA. Randomized controlled trial of digital cognitive behavior therapy for prenatal insomnia symptoms: effects on postpartum insomnia and mental health. Sleep. 2022;45(2):zsab280. [FREE Full text] [CrossRef] [Medline]
- Fitzsimmons-Craft EE, Taylor CB, Graham AK, Sadeh-Sharvit S, Balantekin KN, Eichen DM, et al. Effectiveness of a digital cognitive behavior therapy-guided self-help intervention for eating disorders in college women: a cluster randomized clinical trial. JAMA Netw Open. 2020;3(8):e2015633. [FREE Full text] [CrossRef] [Medline]
- Aboujaoude E, Gega L, Parish MB, Hilty DM. Editorial: digital interventions in mental health: current status and future directions. Front Psychiatry. 2020;11:111. [FREE Full text] [CrossRef] [Medline]
Abbreviations
AD: atopic dermatitis
ADHD: attention-deficit/hyperactivity disorder
BFNE: brief fear of negative evaluation
CBT: cognitive behavioral theory
DLQI: Dermatology Life Quality Index
DTx: digital therapeutics
EASI: eczema area and severity index
ISI: insomnia severity index
MOST: multiphase optimization strategy
POEM: patient-oriented eczema measure
PSS: perceived stress scale
RCT: randomized controlled trial
TCS: threshold of clinical significance
Edited by A Mavragani; submitted 25.07.23; peer-reviewed by L Lönndahl, S Cha; comments to author 17.11.23; revised version received 08.12.23; accepted 22.12.23; published 09.02.24.
Copyright©Hyerim Lee, Eung Ho Choi, Jung U Shin, Tae-Gyun Kim, Jooyoung Oh, Bokyoung Shin, Jung Yeon Sim, Jaeyong Shin, Meelim Kim. Originally published in JMIR Formative Research (https://formative.jmir.org), 09.02.2024.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.