
Published on 11.11.20 in Vol 4, No 11 (2020): November

Preprints (earlier versions) of this paper are available at http://preprints.jmir.org/preprint/20167, first published May 20, 2020.


    Original Paper

    Assessing the Efficacy and Acceptability of a Web-Based Intervention for Resilience Among College Students: Pilot Randomized Controlled Trial

    1E-mental Health Research Group, School of Psychology, Trinity College Dublin, The University of Dublin, Dublin, Ireland

    2Clinical Research & Innovation, SilverCloud Health, Dublin, Ireland

    3Future Health Technologies Programme, Campus for Research Excellence and Technological Enterprise, Singapore-ETH Centre, Singapore, Singapore

    Corresponding Author:

    Angel Enrique Roig, PhD

    E-mental Health Research Group

    School of Psychology

    Trinity College Dublin, The University of Dublin

    College Green

    Dublin,

    Ireland

    Phone: 353 1 896 1000

    Email: enriquea@tcd.ie


    ABSTRACT

    Background: College students are at elevated risk for developing mental health problems and face specific barriers around accessing evidence-based treatment. Web-based interventions that focus on mental health promotion and strengthening resilience represent one possible solution. Providing support to users has been shown to reduce dropout in these interventions. Further research is needed to assess the efficacy and acceptability of these interventions and explore the viability of automating support.

    Objective: This study investigated the feasibility of a new web-based resilience program based on positive psychology, provided with human or automated support, in a sample of college students.

    Methods: A 3-armed closed pilot randomized controlled trial design was used. Participants were randomized to the intervention with human support (n=29), intervention with automated support (n=26), or waiting list (n=28) group. Primary outcomes were resilience and well-being, respectively measured by the Connor–Davidson Resilience Scale and Pemberton Happiness Index. Secondary outcomes included measures of depression and anxiety, self-esteem, and stress. Outcomes were self-assessed through online questionnaires. Intention-to-treat and per-protocol analyses were conducted.

    Results: All participants demonstrated significant improvements in resilience and related outcomes, including an unexpected improvement in the waiting list group. Within- and between-group effect sizes ranged from small to moderate and within-group effects were typically larger for the human than automated support group. A total of 36 participants began the program and completed 46.46% of it on average. Participants were generally satisfied with the program and found it easy to use.

    Conclusions: Findings support the feasibility of the intervention. Preliminary evidence for the equal benefit of human and automated support needs to be supported by further research with a larger sample. Results of this study will inform the development of a full-scale trial, from which stronger conclusions may be drawn.

    Trial Registration: International Standard Randomized Controlled Trial Number (ISRCTN) 11866034; http://www.isrctn.com/ISRCTN11866034

    International Registered Report Identifier (IRRID): RR2-10.1016/j.invent.2019.100254

    JMIR Form Res 2020;4(11):e20167

    doi:10.2196/20167


    Introduction

    The transition to university represents a period of increased academic and social pressure, financial burden, and change in lifestyle for students, placing them at increased risk for developing mental health problems [1]. The 12-month prevalence rate for mental disorders among college students is an estimated 20%, yet only a small proportion of these students receive adequate treatment [2]. Given exposure to new sources of stress, the transitional nature of higher education settings affords a unique and timely opportunity to develop in students the skills needed to cope with new challenges [3]. In line with an increasing emphasis on promotion and prevention in mental health care [4], an approach that builds resilience against these stressors and prevents the development of mental disorders in the first instance may prove preferable to treatment following their onset [5]. Resilience may be understood as the personal assets (internal factors, eg, optimism) and environmental resources (external factors, eg, social support) that contribute to positive psychological adaptation, despite exposure to adversity [6]. Resilience has been shown to buffer the effects of stress and burnout and protect against the development of depression, anxiety, and other common mental health problems [7,8].

    Resilience interventions seek to promote resilience at an individual, group, or population level with the aim of preparing individuals for the occurrence of future life stressors [9]. This usually takes place through the enhancement of one or more resilience factors (assets and resources) [6]. However, the guiding theoretical frameworks and related techniques used in these interventions vary (eg, positive psychology, acceptance and commitment therapy, mindfulness, interpersonal therapy, and cognitive behavioral therapy), with no single accepted approach [9,10]. Given their inherent focus on promoting positive adaptation and well-being, interventions based on positive psychology are highly compatible with the area of mental health promotion [11]. Several meta-analyses on positive psychology interventions have demonstrated significant improvements in well-being with small to moderate effect sizes [12,13]. For resilience interventions specifically, similar effect sizes have been observed in adults for resilience outcomes [9,14,15]. Encouragingly, initial research on resilience interventions among college students has demonstrated significant improvements in resilience and reductions in stress and symptoms of depression and anxiety [16-18].

    An important consideration with the implementation of preventive interventions is ensuring that they can be accessed as widely as possible [11]. The internet is increasingly being used to deliver and improve the availability of interventions for mental health and well-being [19-22]. Web-based interventions may also prove particularly advantageous for students, who are heavily immersed in the digital age [5]. Encouragingly, research investigating technology-delivered preventive interventions with college students has demonstrated small to moderate effect sizes for mental health outcomes [19,20] and preliminary support exists for the efficacy of web-based resilience interventions [23]. However, overall, research on web-based resilience interventions is scarce, particularly in youth samples, suggesting that further trials investigating their feasibility are necessary [14].

    Despite the advantages of internet delivery, one of the greatest limitations of web-based interventions is the high rate of dropout that can occur [24]. Several meta-analyses show that supported web-based interventions yield lower rates of dropout and better clinical outcomes than unguided interventions [21,22,25,26]. Support may involve clarifying program content, monitoring progress, or motivating users, whether online or by phone or email [27]. Notably, the type of supporter, therapist or otherwise, has no bearing on effects [28]. This suggests that the greatest benefit resides in providing some form of contact, opening the possibility of automating this contact [27]. Automated support is differentiated from human support in the related literature in that it is provided automatically through information and communication technologies (eg, automated reminders or feedback) [29]. While initial findings show that automated support is associated with slightly poorer treatment efficacy and adherence compared to human support [28,30], continued improvements in developing high-quality automated support can lead to comparable outcomes while reducing therapist time and implementation costs [27,29]. Preference for one type of intervention over another (eg, cognitive behavioral versus interpersonal therapy, or individual versus group therapy) also has the potential to influence outcomes [31]. For example, increasing evidence shows significant improvements in clinical outcomes and higher rates of treatment adherence and satisfaction when participants are allocated to their preferred intervention [31,32].

    Supported web-based resilience interventions are a promising, cost-effective approach for promoting resilience and preventing the onset of mental health problems. These interventions address issues relating to the accessibility of mental health care and may be of particular benefit to at-risk populations such as college students. This study sought to investigate the preliminary efficacy and acceptability of a new web-based intervention for resilience and well-being, based on positive psychology, in a sample of college students. The study also aimed to determine the effects of different types of support (human and automated) in the intervention on a range of outcomes including resilience, well-being, depressive and anxiety symptoms, self-esteem, and perceived stress.


    Methods

    Study Setting

    The study was conducted at Trinity College Dublin (TCD), the University of Dublin, Ireland, in collaboration with the TCD Student Counselling Service and SilverCloud Health. TCD is a university for all of the major disciplines in the arts and humanities, and in business, law, engineering, science, and health sciences. The study was advertised to all registered students via email, posters, and social media. Students considering participating in the study were invited to visit an online platform through a URL where they received information about the study. Consent was obtained via digital signature. Recruitment started in February 2019 and the trial ran for 4 consecutive months, until June 2019, when data collection was completed.

    Research Design

    A 3-armed, parallel-group, pilot randomized controlled trial design was used. Using an allocation ratio of 1:1:1, participants were randomized to the intervention with human support, intervention with automated support, or waiting list control group. The randomization schedule was generated using Random Allocation Software [33] by an individual independent of the study, with allocations concealed in sequentially numbered, opaque sealed envelopes. Randomization was performed in blocks of 12 across the 3 groups. Given the nature of the trial, participants and researchers were not blinded to group allocation. The CONSORT-EHEALTH guidelines [34] (Multimedia Appendix 1) were followed and the study protocol has been published [35].
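    Block randomization of this kind guarantees balanced arms after every 12 allocations. A minimal Python sketch of the blocks-of-12, three-arm scheme (illustrative only; the trial used Random Allocation Software, and the arm labels and seed here are assumptions):

```python
import random

def blocked_schedule(n_blocks, arms=("human", "automated", "waitlist"),
                     block_size=12, seed=42):
    """Blocked randomization: each block contains every arm equally often
    (here 4 times each in a block of 12), in independently shuffled order."""
    assert block_size % len(arms) == 0, "block size must divide evenly by arms"
    rng = random.Random(seed)  # fixed seed so the schedule is reproducible
    schedule = []
    for _ in range(n_blocks):
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)  # shuffle within the block only
        schedule.extend(block)
    return schedule

# 7 blocks of 12 -> 84 slots, enough to cover the 83 eligible participants
schedule = blocked_schedule(n_blocks=7)
```

    The key property, as in the trial, is that group sizes can never drift more than one block apart, which keeps arms close to the 1:1:1 ratio even if recruitment stops early.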

    Sample Size

    Previous data on effect sizes for web-based resilience interventions in college students do not exist [5]. However, findings from 2 meta-analyses on resilience and positive psychology interventions demonstrated small effect sizes for resilience and psychological well-being, respectively [12,14]. A sample size of 25 per arm is recommended for pilot trials when effect sizes are expected to be small [36]. This calculation is based on a main trial designed with 90% power and 2-sided 5% significance. Given a 3-armed pilot trial design and small anticipated effect size for resilience and well-being, a sample size of 75 (25 participants per arm) was determined.

    Eligibility Criteria

    Inclusion criteria were being over the age of 18 and a registered student at TCD. Exclusion criteria were currently attending counselling or psychotherapy, having an organic mental health condition, or being at risk of suicide.

    Intervention

    Space for Resilience is a 7-module program aimed at promoting resilience and well-being through the enhancement of several well-evidenced resilience factors [6]. The program was developed by SilverCloud Health in line with the principles of positive psychology [37] and incorporates cognitive behavioral elements including cognitive flexibility, optimism, challenging negative self-talk, behavioral activation, and active coping, alongside information on social support, lifestyle factors, and values. Modules are structured in an identical way and include introductory videos, quizzes, psychoeducational content, personal stories from other users, interactive activities, mindfulness exercises, homework suggestions, goal setting, and summaries. A description of module content is provided in Multimedia Appendix 2 [6,8,37-46] and a screenshot of the program is provided in Multimedia Appendix 3. The program was offered over an 8-week intervention period and was accessible 24/7. It was recommended that participants spend at least an hour a week on the program based on previous studies with the same platform [47,48].

    Support

    Human

    Participants in the human support group were assigned to a supporter from the TCD Student Counselling Service. Supporters were counsellors or trainee counsellors familiar with using the SilverCloud Health platform and received training in the Space for Resilience program. The role of the supporter was to monitor and support user progress through the program. On the supporter interface of the platform, an overview of each user’s level of engagement with the program is presented. This includes user responses on questionnaires, messages left by the user, module pages viewed, tools and activities used (including content shared by the user), and the number of times the user logged in to the platform. Using this information, supporters spent 10-15 minutes formulating individualized reviews for each participant. Reviews are asynchronous messages sent and received on the platform. Supporters received guidelines on how to support users. These guidelines advise that in every review, the supporter should (1) demonstrate empathy and care to the user, (2) demonstrate knowledge of the theory underlying the program, (3) acknowledge and affirm the user’s progress, (4) prompt and encourage further use of the program, (5) ask reflective questions, and (6) set homework. Participants received 4 reviews during the intervention period. An excerpt from a sample review is provided below:

    Well done for logging in to the Space for Resilience programme again this week. I can see you completed the second module, Self, which supports you in identifying your values, passions, and what matters most to you in life. Did any questions come up for you during this module?
    I noticed from one of the tools you filled in that building your social network is something you would like to focus on. The Connections module might be particularly helpful for this. It includes useful information on developing relationships and building communities as well as tips for improving communication skills like active listening and expressing gratitude.
    Remember, applying the skills you learn in this module to your everyday life is like building up a muscle. You might not see the reward straight away but the more time you spend on it, the more your social network will grow and the stronger your communication skills will become.
    Automated

    Participants in the automated support group received generic, templated reviews which were automatically sent as messages on the platform. Automated reviews were designed to facilitate user progress through the program and were structured in the same way as reviews in the human support group (eg, users are encouraged to explore new content in the program). However, reviews in the automated support group were standardized as opposed to individualized. They were therefore not tailored to each user’s unique level of engagement with the program. Automated reviews were predeveloped by highly experienced clinicians with in-depth knowledge of providing support for web-based interventions. Participants received 4 reviews during the intervention period. An excerpt from an automated review is provided below:

    Have you been finding the programme useful so far? No matter how much time you have spent exploring the programme since your last review, we wanted to remind you that even a small effort can make a big difference.
    You can complete the modules in whatever order suits you best. Over the next two weeks, we suggest that you work through one or two more of the five domains of resilience modules: purpose, self, connections, body, or mind.
    Remember that this programme is designed to help you, but it is up to you to make the changes. Do what you can, one step at a time. 

    Measures

    All outcomes were self-assessed through online questionnaires. See Table 1 for measure administration timeline.

    Table 1. Measure administration timeline.
    Screening Measure: The Sociodemographic and Clinical History Questionnaire

    The Sociodemographic and Clinical History Questionnaire [49] collects sociodemographic information including age, gender, education level, and computer literacy; and clinical information including current engagement with counselling or psychotherapy, drug and alcohol use, diagnosis of an organic mental health condition, and suicide risk. While group assignment was random, this questionnaire included an item asking participants if they would prefer to receive human or automated support and why.

    Primary Outcome Measures
    Connor–Davidson Resilience Scale

    The Connor–Davidson Resilience Scale (CD-RISC) [50] is a 25-item self-report measure of resilience or ability to cope with stress. The CD-RISC has shown good concurrent validity and internal consistency (α=.89) with college students [51].

    Pemberton Happiness Index

    The Pemberton Happiness Index (PHI) [52] is a 21-item self-report integrative measure of well-being. Of these items, 11 relate to remembered well-being (ie, general, hedonic, eudaimonic, and social well-being) and 10 relate to experienced well-being (ie, positive and negative events that happened the previous day). The PHI has demonstrated good convergent and incremental validity and strong internal consistency (α>.89) [52].

    Secondary Outcome Measures
    Brief Resilience Scale

    The Brief Resilience Scale (BRS) [53] is a 6-item self-report measure assessing resilience or ability to bounce back or recover from stress. The BRS has shown strong convergent validity and good internal consistency (α>.80) with college students [53].

    Patient Health Questionnaire—4 Items

    The Patient Health Questionnaire—4 Items (PHQ-4) [54] is a brief self-report measure of depression and anxiety. The PHQ-4 has demonstrated good construct and criterion validity and internal consistency (α=.81) with college students [55].

    Rosenberg Self-Esteem Scale

    The Rosenberg Self-Esteem Scale (RSES) [56] is a 10-item self-report measure of global self-esteem. The RSES has shown good construct validity, internal consistency (α=.87), and test–retest reliability (r=.84) with college students [57].

    Perceived Stress Scale—4 Items

    The Perceived Stress Scale—4 Items (PSS-4) [58] is a brief self-report measure of the extent to which recent life events are considered stressful. The PSS-4 has demonstrated acceptable criterion validity and internal consistency (α=.72) [58].

    Other Measures
    Platform Usage Metrics

    Usage refers to the degree to which participants were exposed to the intervention [59]. Related data for active intervention groups were collected automatically on the online platform. This included number of logins to the program, length of time spent using the program, number of tools and activities used, and percentage of program content viewed. A session was defined as any instance where a participant logged in to the platform. Number of sessions was therefore determined by the total number of participant logins.
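    As an illustration, per-user summaries of this kind can be derived from raw login records. A hypothetical sketch (the record format and field names are assumptions, not the platform's actual schema):

```python
from datetime import datetime

def summarize_usage(sessions):
    """sessions: list of (login, logout) datetime pairs for one user.
    Returns session count, total minutes, and mean session length,
    mirroring the metrics reported here (logins, time spent on program)."""
    minutes = [(end - start).total_seconds() / 60 for start, end in sessions]
    total = sum(minutes)
    n = len(sessions)  # a session = one login to the platform
    return {
        "sessions": n,
        "total_minutes": round(total, 2),
        "mean_session_minutes": round(total / n, 2) if n else 0.0,
    }

# Example: two sessions of 30 and 15 minutes
log = [
    (datetime(2019, 3, 1, 10, 0), datetime(2019, 3, 1, 10, 30)),
    (datetime(2019, 3, 3, 18, 0), datetime(2019, 3, 3, 18, 15)),
]
summary = summarize_usage(log)
```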

    Satisfaction With Treatment

    The Satisfaction With Treatment (SAT) [60] is an 8-item self-report measure of attitudes toward the web-based intervention. It also includes 2 open-ended questions asking participants what they most and least liked about the intervention.

    Procedure

    After providing informed consent, participants completed baseline measures. Participants meeting eligibility criteria were randomized to the human support, automated support, or waiting list group and were informed of group assignment immediately. Active intervention groups were given immediate access to the program. The waiting list group was given access to the program after an 8-week waiting period. To minimize dropout, participants received a phone call from a member of the research team (CTL and SF) approximately 1 week following randomization to remind them of group assignment and research procedures. After the 8-week period, participants received an email asking them to complete postintervention measures. Participants were informed of institutional affiliations during the informed consent procedure and were not reimbursed for their participation in the trial.

    Data Analysis

    Data were analyzed using SPSS software (version 24) [61]. Recruitment and retention rates were examined using descriptive statistics and a Pearson chi-square test. Sociodemographic information and baseline data were examined using descriptive statistics, Pearson chi-square tests, and one-way analyses of variance (ANOVAs). Reliability checks using Cronbach α were conducted on outcome measures.
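    Cronbach α can be computed directly from item-level scores. A minimal sketch of the standard formula (the data here are illustrative, not the study's):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per questionnaire item, each scored across
    the same respondents in the same order.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(items)
    item_var_sum = sum(pvariance(scores) for scores in items)
    totals = [sum(resp) for resp in zip(*items)]  # total score per respondent
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))

# Three perfectly consistent items yield an alpha of exactly 1.0
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
```

    The study's threshold of α>.70 corresponds to the conventional cutoff for satisfactory internal consistency.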

    Intention-to-treat (ITT) and per-protocol analyses were conducted on primary and secondary outcome measures. Per-protocol analysis considered all participants who completed baseline and postintervention outcome measures and, in the case of active intervention groups, accessed the program at least once. For ITT analysis, missing data were imputed using the expectation-maximization algorithm, a maximum likelihood method used in similar trials [62]. Preliminary efficacy was evaluated using mixed factorial ANOVAs. Within- and between-group effect sizes (Cohen d) and 95% confidence intervals were calculated for each group. Effect sizes of 0.2 were considered small, 0.5 medium, and 0.8 large [63]. The use of ANOVAs represents a revision to the study protocol, which outlined the use of linear mixed models in the analysis plan [35]. However, diagnostic tests on the data revealed inadequate power and model fit to sufficiently address the research questions under investigation. Mixed factorial ANOVAs were therefore deemed more suitable. There was a modest departure from the assumption of homogeneity of variance; however, the F test has been shown to be robust against moderate departures, and variance heterogeneity is frequently observed in real-world data [64].
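    Between-group Cohen d values of the kind tabulated later can be computed from summary statistics alone. A sketch using the pooled-SD form of d and a common large-sample approximation for the 95% confidence interval (one of several accepted variance formulas; the numbers below are illustrative, not the trial's data):

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Between-group Cohen d with pooled SD, plus an approximate 95% CI
    based on the usual large-sample variance of d."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled_sd
    # Large-sample standard error of d (Hedges-Olkin approximation)
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d, (d - 1.96 * se, d + 1.96 * se)

# Illustrative group summaries only, sized like the trial's arms
d, (lo, hi) = cohens_d(mean1=72.0, sd1=12.0, n1=29, mean2=66.0, sd2=12.0, n2=28)
```

    With samples of this size, a medium effect (d=0.5) carries a confidence interval that spans zero, which is consistent with the pilot framing: effect sizes are estimated for planning a full-scale trial rather than for hypothesis testing.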

    Usage data were analyzed using descriptive statistics, Pearson chi-square tests, and unpaired t tests. Data from the SAT were analyzed using descriptive statistics, unpaired t tests, and descriptive and interpretive analysis [65]. Descriptive and interpretive analysis is an integrative approach to analyzing qualitative data that aims to identify and analyze patterns in the data by delineating meaning units and organizing them into categories. Between-group effect sizes (Cohen d) and 95% confidence intervals were also calculated for usage and SAT data.

    To explore the effects of intervention preference and allocation, exploratory subgroup analyses were conducted. Participants in the active intervention groups were divided into 2 groups: those who were allocated to their preferred intervention group and those who were not. Pearson chi-square tests, ANOVAs, and unpaired t tests were used to examine differences in outcomes, engagement and usage, and satisfaction with the intervention.

    Ethical Considerations

    The study received full ethical approval from the TCD School of Psychology Research Ethics Committee on January 29, 2019 (approval ID: SPREC112018-12). Ethical considerations are fully outlined in the study protocol [35].

    Data Sharing

    Data will be made available upon request to the corresponding author.


    Results

    Recruitment and Retention

    Out of the estimated 17,000 students invited, 139 (0.82%) signed up to participate and were assessed for eligibility. A total of 83 (59.7%) met eligibility criteria and were included in the trial. Of these, 63 (76%) completed postintervention measures; the dropout rate was therefore 24% (20/83). A Pearson chi-square test revealed no differences between groups in terms of completion of outcome measures at postintervention. Participant flow through the trial is presented in Figure 1.

    Figure 1. CONSORT flow diagram. ITT: intention to treat. (a) Some participants met more than 1 reason for exclusion and are categorized as such. (b) Refers to participants who did not complete postintervention measures. (c) Refers to participants who did not start the intervention.

    Baseline Characteristics

    The median age of participants was 26 years (IQR 11). In terms of computer literacy, most participants were either confident or very confident in using computers and the internet (73/82, 89%). Baseline characteristics of the sample are provided in Table 2. Pearson chi-square tests and one-way ANOVAs demonstrated no significant differences between groups in terms of sociodemographic variables or scores on baseline measures. Reliability checks demonstrated satisfactory internal consistency for all outcome measures (α>.70).

    Table 2. Baseline characteristics of study sample.

    Preliminary Efficacy

    Descriptive statistics, within- and between-group effect sizes, and confidence intervals for ITT and per-protocol analyses are presented in Table 3.

    Table 3. Descriptive statistics, effect sizes, and confidence intervals for intention-to-treat and per-protocol analyses.
    ITT Analysis

    Mixed factorial ANOVAs demonstrated main effects of time for resilience (CD-RISC; F(1,80)=21.56, P<.001), well-being (PHI; F(1,80)=9.40, P=.003), resilience (BRS; F(1,80)=10.08, P=.002), depressive and anxiety symptoms (PHQ-4; F(1,80)=5.96, P=.02), self-esteem (RSES; F(1,80)=15.18, P<.001), and perceived stress (PSS-4; F(1,80)=5.48, P=.02). No interaction effects or main effects of group were observed for any outcome measure. For main effects of time, mean scores show an increase in resilience, well-being, and self-esteem and a decrease in depressive and anxiety symptoms and perceived stress for all participants.

    Per-Protocol Analysis

    Mixed factorial ANOVAs demonstrated main effects of time for resilience (CD-RISC; F(1,56)=9.16, P=.004), well-being (PHI; F(1,56)=4.20, P=.045), resilience (BRS; F(1,56)=4.26, P=.04), depressive and anxiety symptoms (PHQ-4; F(1,56)=5.34, P=.03), and self-esteem (RSES; F(1,56)=6.51, P=.01). No main effect of time was observed for perceived stress (PSS-4). No interaction effects or main effects of group were observed for any outcome measure. For main effects of time, mean scores demonstrate an increase in resilience, well-being, and self-esteem and a decrease in depressive and anxiety symptoms for all participants.

    Acceptability

    Engagement and Usage

    A total of 36/55 (65%) participants in the active intervention groups started the program. A Pearson chi-square test revealed no significant difference between groups in terms of whether or not participants started the intervention. The mean number of sessions was 8.50 (SD 3.65) and average session length was 20.38 minutes (SD 8.95). On average, participants spent a total of 171.55 minutes (SD 101.36) on the program and completed 46.46% (SD 27.80) of the program. Computers were the preferred device for accessing the program (64.30% of total use), followed by mobile phones (33.52%) and tablets (2.18%). Independent t tests demonstrated no significant differences in engagement and usage between active intervention groups. Descriptive statistics, between-group effect sizes, and confidence intervals for program engagement and usage are presented in Table 4, with effect sizes generally favoring the human support group.

    Table 4. Descriptive statistics for program engagement and usage.
    Satisfaction With the Intervention

    A total of 34/55 (62%) participants in the active intervention groups started the program and completed the SAT. Independent t tests demonstrated no significant differences between active intervention groups in terms of scores on the SAT. Descriptive statistics, between-group effect sizes, and confidence intervals for the SAT are displayed in Table 5, with effect sizes nearly all in favor of the human support group. Both groups liked the flexibility, user-friendliness, and positive psychology approach of the program. The human support group reported liking the anonymity and the supporter feedback, with one participant reporting that simply “knowing there was support” [participant #17, male] was helpful. Both groups disliked the lack of face-to-face interaction. Participants also reported that the program did not fully meet their individual needs and wants. One participant noted that it “was quite vague at times” [participant #7, female] while another reported that it “felt too prescriptive in how life should be” [participant #35, female]. The human support group disliked the infrequent timing of reviews, reporting that they felt discontinuous. One participant in the automated support group reported that receiving human support may have encouraged greater use of the program. Several participants noted time restrictions and lacking motivation as barriers to program completion.

    Table 5. Descriptive statistics for the SAT.

    Intervention Preference and Allocation

    With regard to intervention preference, 66% (55/83) of participants opted for human support. Main reasons for selecting human support included the belief that human contact cannot be replaced and perceptions that it would be more personalized and beneficial than automated support. Prominent reasons for opting for automated support included a desire for greater privacy and an interest in the user experience of receiving automated support.

    In terms of primary and secondary outcomes, mixed factorial ANOVAs demonstrated no significant differences between participants who were or were not assigned their preferred intervention, from baseline to postintervention, for ITT (n=55) or per-protocol (n=35) analyses. However, a Pearson chi-square test (n=55) revealed that participants who were allocated their preferred intervention were significantly more likely to complete postintervention measures than those who were not (χ2(1)=5.8, P=.02).

    Regarding engagement and usage, independent t tests (n=36) among participants who started the intervention demonstrated significant differences in length of program use (t(33.96)=3.45, P=.002) and number of logins (t(34)=2.15, P=.04). Participants who were allocated their preferred intervention spent more time on the program (n=23; mean 205.42 minutes [SD 106.13]) and logged in more frequently (mean 9.43 logins [SD 3.06]) than those who were not (n=13; mean 111.64 minutes [SD 56.82]; mean 6.85 logins [SD 4.14]). Participants did not differ significantly on any other engagement and usage or satisfaction (n=34) variable.

    Additional post hoc analyses revealed that participants who elected for and received human support spent significantly more time on the program (n=16; mean 213.72 [SD 111.70]) than participants who elected for human support but received automated support (n=9; mean 103.10 [SD 44.40]; t21.44=3.50, P=.002). Similarly, participants who elected for and received automated support spent significantly longer on the program (n=7; mean 186.44 [SD 97.52]) than participants who elected for automated support but received human support (n=9; mean 103.10 [SD 44.40]; t14=2.93, P=.04).


    Discussion

    Principal Findings

    This pilot study investigated the preliminary efficacy and acceptability of a web-based intervention for resilience, provided with human or automated support, in a sample of college students. All participants demonstrated significant improvements in resilience, well-being, and self-esteem, and reductions in symptoms of depression and anxiety and in perceived stress, supporting the beneficial effects of the web-based resilience intervention. With regard to the role of support, the results are preliminary in nature but suggest overall equivalence of outcomes between human and automated support.

    Effect sizes were generally moderate for resilience outcomes and small for well-being outcomes, in line with existing research on resilience and positive psychology interventions [9,12-18]. Similarly, for the secondary outcomes of self-esteem, depression and anxiety symptoms, and perceived stress, effects ranged from small to moderate. Notably, effects for resilience tended to be larger on the BRS (which measures a resilient outcome) in the human support group and larger on the CD-RISC (which measures the assets and resources that lead to a resilient outcome) in the automated support group [66]. It is possible that the personalized element of human support facilitated the application of the skills targeted by the intervention to participants’ specific life circumstances, increasing the likelihood of a resilient outcome. Conversely, while the automated support group likely developed these skills, they may have lacked the tailored support conducive to applying them, limiting the opportunity for a resilient outcome. However, these results are preliminary, and a larger-scale trial is needed to confirm the direction of the findings.
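The between-group effect sizes discussed here are conventionally computed as Cohen's d using the pooled standard deviation; the paper cites Cohen [63] but does not restate the formula, so the sketch below shows the standard definition with illustrative inputs only:

```python
from math import sqrt

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Between-group Cohen's d using the pooled standard deviation."""
    pooled_sd = sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Illustrative values (not the study's data): two groups of 20,
# unit SDs, group means one SD apart -> d = 1.0 (a "large" effect)
d = cohens_d(1.0, 1.0, 20, 0.0, 1.0, 20)
```

By Cohen's conventional benchmarks, d ≈ 0.2 is small, 0.5 moderate, and 0.8 large, which is the scale on which the outcomes above are described.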

    The general equivalence of outcomes across the 2 active intervention arms contrasts with research demonstrating more favorable outcomes when human support is provided [28,30]. However, it is consistent with some preliminary evidence of comparable outcomes between human and automated support [27,29]. This may reflect a greater sense of agency in the automated support group, as participants were not dependent on a therapist, as well as the quality of the automated support itself [27,29]. Nonetheless, effect sizes tended to be larger for the human support group, which may have been due to the personalization of feedback in this group. Therefore, adding persuasive technology features such as tailoring or personalization to automated support may bring it up to par with human support in terms of effect [27].

    Even more interesting was the finding that observed effects were likely influenced by user preference: participants who opted for the human or automated supported intervention and received it showed higher engagement. While intervention preference and allocation had no effect on intervention outcomes or satisfaction, participants who received their preferred intervention did use the program more and were more likely to complete postintervention measures. These findings support research showing higher levels of treatment adherence and retention when preference and allocation are matched [31,32], and may point to the clinical utility of a shared model of decision making when more than 1 intervention option is available [31].

    While our findings are, for the most part, in line with existing research, the nonsignificant difference between the active intervention groups and controls was unexpected. The effects observed in the active intervention groups cannot be attributed to the intervention with certainty, given the significant improvements and comparable effect sizes in the waiting list group. This improvement may have been due to a self-selection bias whereby students who were more motivated to change signed up to participate [67]. Accordingly, anticipation effects, that is, changes in outcome due to the expectation of future change, may have been present in the waiting list group [68]. As there was no follow-up, it cannot be determined whether these effects dissipated over time or whether effects in the active intervention groups were sustained. Lastly, given that the study was a pilot, the sample size was small; with greater variability in participant responses, the sample was likely not large enough to differentiate between groups [69].

    An initial recruitment rate of less than 1% may be partially attributed to the fact that participants were not reimbursed for participating in the study. However, retention in this study was high, with a dropout rate of only 24%. This compares favorably with other web-based intervention research, which has reported dropout rates of up to 83% [24]. In terms of program usage, 65% (36/55) of participants in the active intervention groups started the program and completed 46.46% of it on average. It is therefore possible that participants did not receive the full benefit of the intervention, underestimating its true efficacy and further contributing to the nonsignificant difference between intervention and control groups. Potential reasons for low engagement, dropout, or both include the limited time students had to spend on the program and insufficient support, which may have influenced motivation to use the intervention; both were noted in the satisfaction data collected. Given its significant effect on usage, it is also plausible that dropout rates were affected by user preference and allocation to intervention groups.

    Overall, participants were satisfied with the intervention, found it helpful and easy to use, and would recommend it to others. Participant satisfaction with the intervention did not vary based on the type of support provided, further indicating the equivalence of human and automated support. In line with previous research on web-based interventions [70], participants liked the flexibility and accessibility of the program and disliked not having enough time to complete it. That the human support group disliked the infrequent timing and discontinuation of reviews raises questions about the role of support in such interventions; as has been suggested previously, there may be different routes to treatment success depending on user characteristics and the type of support required [71]. Knowledge regarding the amount of support necessary in preventive web-based interventions is currently limited [17]. The decision to provide fortnightly reviews in this study was based on the fact that participants were not drawn from a clinical population and were deemed capable of completing the program with minimal support. However, in order to sufficiently motivate and support user engagement, the same degree of support may be necessary as in remedial programs, where support is typically implemented on a weekly basis [21,72].

    Limitations

    The main limitation of the study was the small sample size. This resulted in greater heterogeneity in the data, leading to an unforeseen change to the study protocol in terms of data analysis. However, as the study was a pilot, establishing the true efficacy of the intervention was not the primary goal. Second, as data on time spent on the program per week were not collected, it was not possible to determine whether participants adhered to the recommended dose of usage [47,48]. Further, reasons for not signing up to the intervention and for dropout were not collected, limiting related insights around recruitment and attrition. As there was no follow-up assessment, the long-term effects of the intervention could not be gauged. In addition, the study did not examine the occurrence of adverse events following the intervention. Change in resilience was based on self-report scales only; it therefore cannot be determined whether participants had the opportunity to apply the skills acquired through the intervention by the time of the postintervention assessment. Additionally, the sample included postgraduate students, who may not be representative of a high-risk student population given that they are likely to be better adjusted than undergraduate students [73].

    Implications and Future Research

    To the authors’ best knowledge, this is the first study to explore the role of human and automated support in a resilience intervention in a college sample. Primarily, the results of this pilot will inform the development and implementation of a full-scale trial. It is possible that more reviews should be provided to groups to increase engagement, with final reviews preparing the human support group for the discontinuation of support. As telephone calls and emails to participants from the research team constituted reminders about research procedures, their omission is not anticipated to affect outcomes or usage in routine application. Preliminary evidence for the equivalence of human and automated support must be replicated before related conclusions are drawn. Future research should further consider the effects of participant preference for support and the role of personalization in automated support, establish recommendations around intervention dose, and include follow-up assessment(s). Applications may then be considered, including the widespread implementation of the intervention at a universal level and for at-risk populations.

    Conclusion

    Web-based interventions aimed at promoting resilience demonstrate an important protective function in mitigating the effects of stress. They have the potential to reduce the occurrence of mental health problems in those who are at heightened risk and experience difficulties around accessing adequate treatment. Beyond prevention, an emphasis on resilience reflects a larger shift in focus away from pathology and toward psychological well-being and human strengths. These interventions therefore play an important role in mental health promotion by increasing not only the emphasis placed on successful, as opposed to stressful, life events but also their prevalence.

    Acknowledgments

    Thank you to Orla McLoughlin and the TCD Student Counselling Service for hosting the study and to all students who participated. The study was funded by SilverCloud Health and the TCD Student Counselling Service.

    Authors' Contributions

    AE and OM designed the study with DR. OM, AS, CTL, and SF coordinated data collection. AE and OM led data analysis and interpretation under the supervision of DR. The first draft of the manuscript was written by OM and revised by AE and DR who approved the final version.

    Conflicts of Interest

    AE, OM, AS, CTL, SF, and DR were employed by SilverCloud Health during the completion of the study. AE and DR are members of the TCD E-mental Health Research Group.

    Multimedia Appendix 1

    CONSORT-EHEALTH checklist.

    PDF File (Adobe PDF File), 358 KB

    Multimedia Appendix 2

    Space for Resilience: Description of module content.

    DOCX File , 14 KB

    Multimedia Appendix 3

    Space for Resilience: Screenshot of the intervention.

    PNG File , 30 KB

    References

    1. Aldiabat KM, Matani NA, Le Navenec CL. Mental Health among Undergraduate University Students: A Background Paper for Administrators, Educators and Healthcare Providers. Universal Journal of Public Health 2014;2(8):209-214. [CrossRef] [Medline]
    2. Auerbach RP, Alonso J, Axinn WG, Cuijpers P, Ebert DD, Green JG, et al. Mental disorders among college students in the World Health Organization World Mental Health Surveys. Psychol Med 2016 Oct;46(14):2955-2970 [FREE Full text] [CrossRef] [Medline]
    3. Conley CS, Durlak JA, Dickson DA. An evaluative review of outcome research on universal mental health promotion and prevention programs for higher education students. J Am Coll Health 2013;61(5):286-301. [CrossRef] [Medline]
    4. World Health Organization. The European Mental Health Action Plan 2013–2020. Copenhagen: Regional Office for Europe; 2015.   URL: http:/​/www.​euro.who.int/​__data/​assets/​pdf_file/​0020/​280604/​WHO-Europe-Mental-Health-Acion-Plan-2013-2020.​pdf [accessed 2020-03-27]
    5. Herrero R, Mira A, Corno G, Etchemendy E, Baños R, García-Palacios A, et al. An Internet based intervention for improving resilience and coping strategies in university students: Study protocol for a randomized controlled trial. Internet Interv 2019 Apr;16:43-51 [FREE Full text] [CrossRef] [Medline]
    6. Helmreich I, Kunzler A, Chmitorz A, König J, Binder H, Wessa M, et al. Psychological interventions for resilience enhancement in adults. Cochrane Database Syst Rev 2017 Feb;59:CD012527 [FREE Full text] [CrossRef]
    7. Hao S, Hong W, Xu H, Zhou L, Xie Z. Relationship between resilience, stress and burnout among civil servants in Beijing, China: Mediating and moderating effect analysis. Personality and Individual Differences 2015 Sep;83:65-71. [CrossRef]
    8. Lee JH, Nam SK, Kim AR, Kim B, Lee MY, Lee SM. Resilience: a meta‐analytic approach. J Couns Dev 2013 Jul;91(3):269-279. [CrossRef] [Medline]
    9. Leppin AL, Bora PR, Tilburt JC, Gionfriddo MR, Zeballos-Palacios C, Dulohery MM, et al. The efficacy of resiliency training programs: a systematic review and meta-analysis of randomized trials. PLoS One 2014;9(10):e111420 [FREE Full text] [CrossRef] [Medline]
    10. Macedo T, Wilheim L, Gonçalves R, Coutinho ESF, Vilete L, Figueira I, et al. Building resilience for future adversity: a systematic review of interventions in non-clinical samples of adults. BMC Psychiatry 2014 Aug 14;14:227 [FREE Full text] [CrossRef] [Medline]
    11. Baños RM, Etchemendy E, Mira A, Riva G, Gaggioli A, Botella C. Online Positive Interventions to Promote Well-being and Resilience in the Adolescent Population: A Narrative Review. Front Psychiatry 2017;8:10 [FREE Full text] [CrossRef] [Medline]
    12. Bolier L, Haverman M, Westerhof GJ, Riper H, Smit F, Bohlmeijer E. Positive psychology interventions: a meta-analysis of randomized controlled studies. BMC Public Health 2013;13:119 [FREE Full text] [CrossRef] [Medline]
    13. Chakhssi F, Kraiss JT, Sommers-Spijkerman M, Bohlmeijer ET. The effect of positive psychology interventions on well-being and distress in clinical samples with psychiatric or somatic disorders: a systematic review and meta-analysis. BMC Psychiatry 2018 Jun 27;18(1):211 [FREE Full text] [CrossRef] [Medline]
    14. Joyce S, Shand F, Tighe J, Laurent SJ, Bryant RA, Harvey SB. Road to resilience: a systematic review and meta-analysis of resilience training programmes and interventions. BMJ Open 2018 Jun 14;8(6):e017858 [FREE Full text] [CrossRef] [Medline]
    15. Vanhove AJ, Herian MN, Perez ALU, Harms PD, Lester PB. Can resilience be developed at work? A meta-analytic review of resilience-building programme effectiveness. J Occup Organ Psychol 2015 Apr 25;89(2):278-307. [CrossRef]
    16. Houston JB, First J, Spialek ML, Sorenson ME, Mills-Sandoval T, Lockett M, et al. Randomized controlled trial of the Resilience and Coping Intervention (RCI) with undergraduate university students. J Am Coll Health 2017 Jan;65(1):1-9. [CrossRef] [Medline]
    17. Peng L, Li M, Zuo X, Miao Y, Chen L, Yu Y, et al. Application of the Pennsylvania resilience training program on medical students. Personality and Individual Differences 2014 Apr;61-62:47-51. [CrossRef]
    18. Steinhardt M, Dolbier C. Evaluation of a resilience intervention to enhance coping strategies and protective factors and decrease symptomatology. J Am Coll Health 2008;56(4):445-453. [CrossRef] [Medline]
    19. Conley CS, Durlak JA, Shapiro JB, Kirsch AC, Zahniser E. A Meta-Analysis of the Impact of Universal and Indicated Preventive Technology-Delivered Interventions for Higher Education Students. Prev Sci 2016 Aug;17(6):659-678. [CrossRef] [Medline]
    20. Harrer M, Adam SH, Baumeister H, Cuijpers P, Karyotaki E, Auerbach RP, et al. Internet interventions for mental health in university students: A systematic review and meta-analysis. Int J Methods Psychiatr Res 2019 Jun;28(2):e1759. [CrossRef] [Medline]
    21. Richards D, Richardson T. Computer-based psychological treatments for depression: a systematic review and meta-analysis. Clin Psychol Rev 2012 Jun;32(4):329-342. [CrossRef] [Medline]
    22. Wright JH, Owen JJ, Richards D, Eells TD, Richardson T, Brown GK, et al. Computer-Assisted Cognitive-Behavior Therapy for Depression: A Systematic Review and Meta-Analysis. J Clin Psychiatry 2019 Mar 19;80(2) [FREE Full text] [CrossRef] [Medline]
    23. Rose RD, Buckey JC, Zbozinek TD, Motivala SJ, Glenn DE, Cartreine JA, et al. A randomized controlled trial of a self-guided, multimedia, stress management and resilience training program. Behav Res Ther 2013 Feb;51(2):106-112. [CrossRef] [Medline]
    24. Melville KM, Casey LM, Kavanagh DJ. Dropout from Internet-based treatment for psychological disorders. Br J Clin Psychol 2010 Nov;49(Pt 4):455-471. [CrossRef] [Medline]
    25. Andersson G, Cuijpers P. Internet-based and other computerized psychological treatments for adult depression: a meta-analysis. Cogn Behav Ther 2009;38(4):196-205. [CrossRef] [Medline]
    26. Barak A, Hen L, Boniel-Nissim M, Shapira N. A Comprehensive Review and a Meta-Analysis of the Effectiveness of Internet-Based Psychotherapeutic Interventions. Journal of Technology in Human Services 2008 Jul 03;26(2-4):109-160. [CrossRef]
    27. Kelders SM, Bohlmeijer ET, Pots WTM, van Gemert-Pijnen JE. Comparing human and automated support for depression: Fractional factorial randomized controlled trial. Behav Res Ther 2015 Sep;72:72-80. [CrossRef] [Medline]
    28. Titov N, Andrews G, Choi I, Schwencke G, Johnston L. Randomized controlled trial of web-based treatment of social phobia without clinician guidance. Aust NZ J Psychiatry 2009 Jan;43(10):913-919. [CrossRef]
    29. Mira A, Bretón-López J, García-Palacios A, Quero S, Baños RM, Botella C. An Internet-based program for depressive symptoms using human and automated support: a randomized controlled trial. Neuropsychiatr Dis Treat 2017 Mar;13:987-1006. [CrossRef]
    30. Furmark T, Carlbring P, Hedman E, Sonnenstein A, Clevberger P, Bohman B, et al. Guided and unguided self-help for social anxiety disorder: randomised controlled trial. Br J Psychiatry 2009 Nov;195(5):440-447. [CrossRef] [Medline]
    31. Swift JK, Callahan JL. The impact of client treatment preferences on outcome: a meta-analysis. J Clin Psychol 2009 Apr;65(4):368-381. [CrossRef] [Medline]
    32. Lindhiem O, Bennett CB, Trentacosta CJ, McLear C. Client preferences affect treatment satisfaction, completion, and clinical outcome: a meta-analysis. Clin Psychol Rev 2014 Aug;34(6):506-517 [FREE Full text] [CrossRef] [Medline]
    33. Saghaei M. Random Allocation Software, version 1.0. 2004 May.   URL: http://mahmoodsaghaei.tripod.com/Softwares/randalloc.html [accessed 2020-11-02]
    34. Eysenbach G, CONSORT- E. CONSORT-EHEALTH: improving and standardizing evaluation reports of Web-based and mobile health interventions. J Med Internet Res 2011;13(4):e126 [FREE Full text] [CrossRef] [Medline]
    35. Enrique A, Mooney O, Salamanca-Sanabria A, Lee CT, Farrell S, Richards D. Assessing the efficacy and acceptability of an internet-delivered intervention for resilience among college students: A pilot randomised control trial protocol. Internet Interv 2019 Sep;17:100254 [FREE Full text] [CrossRef] [Medline]
    36. Whitehead AL, Julious SA, Cooper CL, Campbell MJ. Estimating the sample size for a pilot randomised trial to minimise the overall trial sample size for the external pilot and main trial for a continuous outcome variable. Stat Methods Med Res 2016 Jun;25(3):1057-1073 [FREE Full text] [CrossRef] [Medline]
    37. Seligman MEP. Authentic Happiness: Using the New Positive Psychology to Realize Your Potential for Lasting Fulfillment. New York, NY: Simon and Schuster; 2004.
    38. Davis DM, Hayes JA. What are the benefits of mindfulness? A practice review of psychotherapy-related research. Psychotherapy (Chic) 2011 Jun;48(2):198-208. [CrossRef] [Medline]
    39. Ho MY, Cheung FM, Cheung SF. The role of meaning in life and optimism in promoting well-being. Personality and Individual Differences 2010 Apr;48(5):658-663. [CrossRef]
    40. Zessin U, Dickhäuser O, Garbade S. The Relationship Between Self-Compassion and Well-Being: A Meta-Analysis. Appl Psychol Health Well Being 2015 Nov;7(3):340-364. [CrossRef] [Medline]
    41. Afifi TD, Merrill AF, Davis S. The theory of resilience and relational load. Pers Relationship 2016 Oct 26;23(4):663-683. [CrossRef]
    42. Prati G, Pietrantoni L. Optimism, Social Support, and Coping Strategies As Factors Contributing to Posttraumatic Growth: A Meta-Analysis. Journal of Loss and Trauma 2009 Aug 26;14(5):364-388. [CrossRef]
    43. Steptoe A, Deaton A, Stone AA. Subjective wellbeing, health, and ageing. Lancet 2015 Feb 14;385(9968):640-648 [FREE Full text] [CrossRef] [Medline]
    44. Ekers D, Webster L, Van SA, Cuijpers P, Richards D, Gilbody S. Behavioural activation for depression; an update of meta-analysis of effectiveness and sub group analysis. PLoS One 2014;9(6):e100100 [FREE Full text] [CrossRef] [Medline]
    45. Wood AM, Froh JJ, Geraghty AWA. Gratitude and well-being: a review and theoretical integration. Clin Psychol Rev 2010 Nov;30(7):890-905. [CrossRef] [Medline]
    46. Lomas T, Ivtzan I. Second Wave Positive Psychology: Exploring the Positive–Negative Dialectics of Wellbeing. J Happiness Stud 2015 Aug 14;17(4):1753-1768. [CrossRef]
    47. Richards D, Duffy D, Blackburn B, Earley C, Enrique A, Palacios J, et al. BMC Psychiatry 2018 Mar 02;18(1):59 [FREE Full text] [CrossRef] [Medline]
    48. Richards D, Timulak L, Doherty G, Sharry J, Colla A, Joyce C, et al. Internet-delivered treatment: its potential as a low-intensity community intervention for adults with symptoms of depression: protocol for a randomized controlled trial. BMC Psychiatry 2014 May 21;14:147 [FREE Full text] [CrossRef] [Medline]
    49. Richards D, Timulak L, Hevey D. A comparison of two online cognitive-behavioural interventions for symptoms of depression in a student population: The role of therapist responsiveness. Counselling and Psychotherapy Research 2013 Sep;13(3):184-193. [CrossRef]
    50. Connor KM, Davidson JRT. Development of a new resilience scale: the Connor-Davidson Resilience Scale (CD-RISC). Depress Anxiety 2003;18(2):76-82. [CrossRef] [Medline]
    51. Singh K, Yu X. Psychometric Evaluation of the Connor-Davidson Resilience Scale (CD-RISC) in a Sample of Indian Students. Journal of Psychology 2017 Sep;1(1):23-30. [CrossRef]
    52. Hervás G, Vázquez C. Construction and validation of a measure of integrative well-being in seven languages: The Pemberton Happiness Index. Health Qual Life Outcomes 2013;11(1):66. [CrossRef]
    53. Smith BW, Dalen J, Wiggins K, Tooley E, Christopher P, Bernard J. The brief resilience scale: assessing the ability to bounce back. Int J Behav Med 2008;15(3):194-200. [CrossRef] [Medline]
    54. Kroenke K, Spitzer RL, Williams JBW, Löwe B. An ultra-brief screening scale for anxiety and depression: the PHQ-4. Psychosomatics 2009;50(6):613-621. [CrossRef] [Medline]
    55. Khubchandani J, Brey R, Kotecki J, Kleinfelder J, Anderson J. The Psychometric Properties of PHQ-4 Depression and Anxiety Screening Scale Among College Students. Archives of Psychiatric Nursing 2016 Aug;30(4):457-462. [CrossRef]
    56. Rosenberg M. Society and Adolescent Self-Image. Princeton, NJ: Princeton University Press; 1965.
    57. Martín-Albo J, Núñiez JL, Navarro JG, Grijalvo F. The Rosenberg Self-Esteem Scale: translation and validation in university students. Span J Psychol 2007 Nov;10(2):458-467. [Medline]
    58. Cohen S, Kamarck T, Mermelstein R. A global measure of perceived stress. J Health Soc Behav 1983 Dec;24(4):385-396. [CrossRef] [Medline]
    59. Enrique A, Palacios JE, Ryan H, Richards D. Exploring the Relationship Between Usage and Outcomes of an Internet-Based Intervention for Individuals With Depressive Symptoms: Secondary Analysis of Data From a Randomized Controlled Trial. J Med Internet Res 2019 Aug 01;21(8):e12775 [FREE Full text] [CrossRef] [Medline]
    60. Richards D, Timulak L. Satisfaction with therapist-delivered vs. self-administered online cognitive behavioural treatments for depression symptoms in college students. British Journal of Guidance & Counselling 2013 Apr;41(2):193-207. [CrossRef]
    61. IBM SPSS Statistics for Macintosh. Version 24. Armonk, NY: IBM Corp; 2016.
    62. Enrique A, Bretón-López J, Molinari G, Baños RM, Botella C. Efficacy of an adaptation of the Best Possible Self intervention implemented through positive technology: a randomized control trial. Applied Research Quality Life 2017 Aug 21;13(3):671-689. [CrossRef]
    63. Cohen J. Statistical power analysis for the behavioral sciences. New York: Routledge; 2013.
    64. Blanca MJ, Alarcón R, Arnau J, Bono R, Bendayan R. Effect of variance ratio on ANOVA robustness: Might 1.5 be the limit? Behav Res Methods 2018 Jun;50(3):937-962. [CrossRef] [Medline]
    65. Elliott R, Timulak L. Descriptive and interpretative approaches to qualitative research. In: Miles J, Gilbert P, editors. A Handbook of Research Methods for Clinical and Health Psychology. New York: Oxford University Press; 2005:147-159.
    66. Windle G, Bennett KM, Noyes J. A methodological review of resilience measurement scales. Health Qual Life Outcomes 2011 Feb 04;9:8 [FREE Full text] [CrossRef] [Medline]
    67. Hill LG, Rosenman R, Tennekoon V, Mandal B. Selection effects and prevention program outcomes. Prev Sci 2013 Dec;14(6):557-569 [FREE Full text] [CrossRef] [Medline]
    68. Malani A, Reif J. National Bureau of Economic Research working paper 16593. Accounting for Anticipation Effects: An Application to Medical Malpractice Tort Reform. 2010 Dec.   URL: https://www.nber.org/papers/w16593.pdf [accessed 2020-03-27]
    69. Biau DJ, Kernéis S, Porcher R. Statistics in brief: the importance of sample size in the planning and interpretation of medical research. Clin Orthop Relat Res 2008 Sep;466(9):2282-2288 [FREE Full text] [CrossRef] [Medline]
    70. Richards D, Murphy T, Viganó N, Timulak L, Doherty G, Sharry J, et al. Acceptability, satisfaction and perceived efficacy of “Space from Depression” an internet-delivered treatment for depression. Internet Interv 2016 Sep;5:12-22 [FREE Full text] [CrossRef] [Medline]
    71. Richards D, Dowling M, O'Brien E, Viganò N, Timulak L. Significant events in an internet-delivered (Space from Depression) intervention for depression. Couns. Psychother. Res 2017 Sep 13;18(1):35-48. [CrossRef]
    72. Richards D, Timulak L, O'Brien E, Hayes C, Vigano N, Sharry J, et al. A randomized controlled trial of an internet-delivered treatment: Its potential as a low-intensity community intervention for adults with symptoms of depression. Behav Res Ther 2015 Dec;75:20-31 [FREE Full text] [CrossRef] [Medline]
    73. Wyatt T, Oswalt SB. Comparing Mental Health Issues Among Undergraduate and Graduate Students. American Journal of Health Education 2013 Mar 12;44(2):96-107. [CrossRef]


    Abbreviations

    ANOVA: analysis of variance
    BRS: Brief Resilience Scale
    CD-RISC: Connor–Davidson Resilience Scale
    PHI: Pemberton Happiness Index
    PHQ-4: Patient Health Questionnaire—4 Items
    PSS-4: Perceived Stress Scale—4 Items
    RSES: Rosenberg Self-Esteem Scale
    SAT: Satisfaction With Treatment


    Edited by J Torous; submitted 20.05.20; peer-reviewed by T Hendriks, J Wentzel, J Coumans; comments to author 29.06.20; revised version received 19.08.20; accepted 22.09.20; published 11.11.20

    ©Angel Enrique Roig, Olwyn Mooney, Alicia Salamanca-Sanabria, Chi Tak Lee, Simon Farrell, Derek Richards. Originally published in JMIR Formative Research (http://formative.jmir.org), 11.11.2020.

    This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on http://formative.jmir.org, as well as this copyright and license information must be included.