Digital Attention Training: Improving the Mental Health and Well-Being of Adolescent Youth
1. Introduction
High school students face a wide array of daily stressors and distractions that relate to their mental health and well-being (Pascoe et al., 2020). Stressors can range from specific life events, such as parental divorce or changing schools, to common daily stressors, such as difficulties with friends, family, schoolwork, weight, and health problems (Low et al., 2012). Stress in adolescence is associated with negative outcomes such as decreased well-being and mental health disorders such as anxiety and depression (Troy & Mauss, 2011). In fact, mood disorders, such as depression, have been steadily rising among adolescents since the early 2000s (Twenge et al., 2019). School loneliness is also on the rise alongside decreases in well-being and life satisfaction (Twenge et al., 2021). Given these unfortunate trends, learning how to effectively promote teens’ mental health and well-being at scale is essential (Marikutty & Joseph, 2016; Denovan & Macaskill, 2017).
1.1. Impact of COVID-19 on Adolescent Mental Health Issues
During the peak of COVID-19, many high school students experienced a drastic decline in their well-being due to the uncertainty, daily disruptions, and stress brought on by the pandemic (Ma et al., 2021; Rao & Rao, 2021). This decline in well-being was exacerbated by factors such as impaired sleep, worries about the future, grief, and social isolation (Thakur, 2020). Adolescence is a developmental stage with an increased need for peer interaction, and the physical distancing measures mandated to contain the spread of COVID-19 were especially burdensome for this age group (Orben et al., 2020). And for many students, school is not only a place for socialization, but also a place of safety. According to a nationwide survey conducted by the Centers for Disease Control and Prevention, in the first half of 2021, 55% of teens reported suffering emotional abuse from an adult in their house in the preceding year, and 11% reported suffering physical abuse. From this same sample, 44% of teens reported persistent feelings of sadness or hopelessness that prevented them from participating in normal activities, and 9% reported a suicide attempt (New CDC Data, 2022). The peak of the pandemic was undeniably a challenging time for many adolescents.
However, despite substantial research showing the negative impacts of COVID-19 on the mental health of adolescents, a few studies have found mixed results. For example, one study of adolescents found either no difference or an improvement in mental health symptoms during the pandemic (Stewart et al., 2021). Some research suggested that previously “healthy” adolescents experienced a significant decrease in their mental health during the pandemic (Cohen et al., 2021), while other studies found the opposite (Hu & Qian, 2021). This conflicting research highlights the importance of fully understanding the impact of the pandemic on adolescents and the need for deeper investigation into how to best address their well-being post-pandemic.
Nonetheless, even in this post-pandemic era, teenagers still appear to be experiencing high levels of mental health-related issues (Puteikis et al., 2022). According to the recent White House Report on Mental Health Research Priorities (White House Report, 2023), trends that existed prior to the pandemic, such as increases in symptoms of depression and anxiety among youth, persist today, if not in even higher numbers. For example, as students adjusted back to in-person learning after the pandemic, many faced high anxiety about returning to school, despite their negative attitudes toward online learning (Assavanopakun et al., 2022; Widnall et al., 2022). Although some of the acute sources of stress dissipated as the pandemic became more globally managed, the psychological and emotional toll has lingered.
These residual psychological effects may continue. Research has even suggested that, of all age demographics, children and adolescents will likely face the largest burdens of the pandemic, leaving an entire generation more vulnerable to mental health difficulties (Clemens et al., 2020). Despite this high risk, youth are often left behind in ongoing research related to COVID-19 (Racine et al., 2022).
1.2. Promoting Mental Health and Well-Being
Addressing severe mental health issues is of critical importance, but all teens, even those without clinical-level struggles, have been affected by the pandemic. It is worthwhile to examine and support the psychological and emotional well-being of all teenagers. According to the World Health Organization (2023), “Mental health is more than the absence of mental disorders. Mental health is a state of well-being in which an individual realizes his or her own abilities, can cope with the normal stresses of life, can work productively and is able to make a contribution to his or her community.” The current work focuses on this state of well-being where individuals can manage their challenges and feel good despite them. Well-being is related to many constructs, including life satisfaction (Ojha & Kumar, 2017), stress management (Ponsoda, 2017), resilience (Dorado Barbé et al., 2021), emotion regulation (Balzarotti et al., 2016), and the absence of mood disorders such as depression and anxiety (Yüksel & Bahadir-Yilmaz, 2019).
1.3. A Promising Direction: Attention Training
One increasingly common intervention to promote well-being is mindfulness-based attention training (Laukkonen et al., 2020; Mrazek et al., 2021a; Mrazek et al., 2017). Research suggests that attention training can help children as young as 5 years old as well as elementary, middle, and high school students (Gould et al., 2016; Carsley et al., 2018). Some experts even propose that attention training may be especially helpful for older students due to the strengthened metacognitive and abstract thinking skills of adolescents (Zenner et al., 2014).
Mindfulness-based attention training typically involves both the development of attentional skills as well as instruction on how to apply these skills to relate effectively to thoughts and emotions (Mrazek et al., 2022). As such, numerous studies show that attention training can help to address the serious and escalating issues of not only distraction, but also stress, emotion dysregulation, and mental illness among adolescents (Carsley et al., 2018; Lin et al., 2019; Zoogman et al., 2015; Mrazek et al., 2020; Mrazek et al., 2019b).
The link between attention training and reduced distraction is obvious, especially in an era of ubiquitous smartphone use (Mrazek et al., 2021b). However, for many people the link between attention training and improved well-being is less clear. Influential models of emotion regulation emphasize that individuals can influence their emotional states by choosing where they direct their attention or by using attention to influence their cognitive appraisals (McRae & Gross, 2020). For example, a student can pay attention to her teacher lecturing or she can pay attention to the anxious thoughts popping up about an upcoming exam. Where she focuses in that moment will affect how she feels (and how much she learns).
Attention training can also help individuals overcome their attentional biases, such as the bias to direct one’s attention to negative information. This bias has been shown to predict later depression (Disner et al., 2017). Given rising rates of mental illness among adolescents, methods for helping high school students train their attention have merit not only for enhancing academic achievement but also for promoting mental health and well-being (Twenge, 2019).
1.4. The Limitations of Current Solutions
The need for reliable programs to promote adolescent well-being is high (O’Connor et al., 2018), and early results look promising (Barry et al., 2017). Current approaches to improving adolescents’ well-being include: i) community-based activities, such as targeted educational programs, as well as ii) individual interventions, such as cognitive-behavioral therapy (Das et al., 2016). Although it is encouraging that these approaches have shown early signs of efficacy (Salam et al., 2016), they both have their limitations. Many community-based programs fail to be standardized and sufficiently engaging for teenagers (Das et al., 2016; Baños et al., 2017). And most individual interventions fail to be scalable, accessible, and affordable for the many teens that could benefit from them.
In sum, current approaches have demonstrated the capacity to improve adolescents’ well-being, but there is much room for improvement (O’Connor et al., 2018). If we are indeed facing the risk of an entire generation more vulnerable to mental health difficulties, we need bold solutions. To effectively promote the mental health and well-being of teens at scale, it becomes crucial to design interventions that can be standardized for high fidelity of implementation across diverse school environments and can be shared with limited logistical or financial burdens.
1.5. A Scalable Solution: Digital Interventions
While training attention represents a promising approach for improving the well-being of adolescents, as described above, only a small fraction of high school students ever receives this training. Digital interventions in particular can circumvent many of the logistical and financial constraints involved in providing effective training to millions of high school students (Mrazek et al., 2019a). Digital interventions also allow for the standardization of key content, thereby ensuring all students receive the same high-quality instruction (Kenney et al., 2004; Puzziferro & Shelton, 2008), while simultaneously having the ability to provide content that is personalized to the abilities, interests, and values of individual students (Dixson, 2010; Wang, 2014).
One of the primary concerns regarding digital interventions is whether they can truly be as effective as in-person approaches. This is a legitimate concern given how challenging it is to maintain students’ engagement when relying solely on digital strategies. As just one example of this challenge, the average completion rate of Massive Open Online Courses (MOOCs) is below 5% (Onah et al., 2014; Kizilcec et al., 2013; Seaton et al., 2014). This 95% dropout rate highlights how often digital approaches fail to sustain engagement. Digital interventions thus have clear advantages, but only when they are designed to maintain the necessary engagement.
Despite this challenge, digital attention training interventions have been found to be effective. Previous work suggests that completing a 22-day online training program in school led to significant increases in emotion regulation and stress management among teens (Mrazek et al., 2022; Mrazek et al., 2020; Mrazek et al., 2019b). Another study found that both digital and face-to-face mindfulness interventions were equally effective in helping reduce perceived levels of depression, anxiety, and stress (Krusche et al., 2013). Even the White House Report (2023) on Mental Health Research Priorities highlighted the rise of digital interventions for mental health and the need for them to be effective, usable, accessible, and scalable. Therefore, digital attention training interventions have the potential to produce a scalable solution for addressing adolescents’ mental health and well-being, but they must be designed in a way to maximize engagement.
1.6. Obstacles to School-Based Interventions
Despite the extensive research on the effectiveness of digital attention training interventions, there are still multiple obstacles to implementing these interventions in the classroom. These obstacles are not limited to digital attention training, however. Across disciplines, difficulties with implementing interventions in schools have led to the emergence of a significant “research-to-practice” gap (Fixsen et al., 2013; McMahon & Cullinan, 2014). Interventions in schools are frequently implemented with low fidelity (Oliver et al., 2015), and only 25% - 50% of interventions are implemented with fidelity comparable to the original demonstrations published in the literature (Gottfredson & Gottfredson, 2002). A major barrier to effective implementation in schools is the transferability of interventions from controlled environments to real-world classroom settings (Kasari & Smith, 2013).
Since implementing interventions in the classroom can be demanding, it is important to consider which barriers may be most challenging to overcome. Across much of the research surrounding school-based interventions, lack of time is a consistent barrier (Barry et al., 2020; Forman et al., 2009; Pinkelman et al., 2015; Rasmussen et al., 2020). Other barriers include lack of resources, lack of funding, staff turnover, and sometimes even difficulties with the interventions themselves (Forman et al., 2009; Pinkelman et al., 2015; Rasmussen et al., 2020; Arnold et al., 2021). Nonetheless, there are various factors that can improve fidelity of implementation. Support and “buy-in” from both teachers and administrators are commonly cited as among the most important factors in successful implementation (Forman et al., 2009; Pinkelman et al., 2015). Consistency and strong communication between the school and researchers also help to enable interventions in schools (Pinkelman et al., 2015; Arnold et al., 2021). Thus, digital attention training interventions have the potential to be efficacious in school settings; however, bridging the gap from research to classroom settings remains a difficult challenge.
1.7. Overview of the Current Study
The current study investigated the effectiveness of a digital attention training intervention on high school students’ mental health and well-being. The study took place in Spring 2022, once students had resumed in-person learning and were continuing to adjust to the post-pandemic era. We predicted that students who participated in the online attention training intervention would experience higher well-being, as indexed by improved affect, life satisfaction, stress management, resilience, and emotion regulation, as well as improved mental health, as indexed by decreased depression and anxiety.
Given the numerous challenges of rolling out interventions in real school districts, the current study is a proof of concept: a strategic first step before implementing more rigorous assessments of the intervention’s efficacy on mental health and well-being measures. This methodological approach aims to demonstrate the feasibility of an intervention before it is fully implemented across schools, in order to elucidate key challenges the intervention may face in real-world classroom settings that can later be addressed (Simons et al., 2016).
2. Method
2.1. Research Design
As is common in proof of concept intervention research, the current study used a one-group pre-post design. The research was approved by the Human Subjects Committee at the host university. Informed consent was obtained from all students and their guardians, and a letter of approval for the research was obtained from the principal of each school.
2.2. Procedure
Students who would be completing the intervention were additionally invited to complete an anonymous online survey before and after the intervention. Students were allowed to complete the intervention, regardless of whether they wanted to complete the surveys. Teachers were encouraged to allow students to complete the two 15-minute surveys during class time. In addition, teachers were provided with a short script to read to their students detailing the importance of providing careful and honest answers to the survey questions. All self-report data were gathered in these surveys, while the digital learning platform objectively recorded intervention adherence.
Prior to data collection, we conducted a power analysis to determine the required sample size for our study. Based on the research question and prior literature, we aimed to detect a small effect size (Cohen’s d = 0.2) with 80% power and a significance level of 0.05. The effect size of 0.2 was chosen based on prior research that has shown small to medium effect sizes for similar interventions (Smith et al., 2018). The alpha level of 0.05 reflects the conventionally accepted level of statistical significance in the field. Using G*Power software (Faul et al., 2007), we determined that a minimum sample size of 199 participants was needed. To account for potential attrition or incomplete data due to failed attention checks, we aimed to recruit approximately 240 students.
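This calculation can be reproduced outside of G*Power with standard power-analysis routines. The sketch below uses Python's statsmodels as an illustrative alternative (not the software used in the study):

```python
# Sketch of the sample-size calculation for a paired (one-sample) t-test,
# reproducing the G*Power result reported above.
from math import ceil

from statsmodels.stats.power import TTestPower

# Detect Cohen's d = 0.2 with 80% power at a two-sided alpha of 0.05.
n_required = TTestPower().solve_power(effect_size=0.2, alpha=0.05,
                                      power=0.80, alternative="two-sided")
print(ceil(n_required))  # -> 199, the minimum sample size reported above
```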
2.3. Participants
The sample consisted of students enrolled in high schools (9th - 12th grade) across the United States and Costa Rica. Seven high schools volunteered to participate, with one teacher from each school facilitating the intervention during class time. Of these seven high schools, six were public high schools, and one was a private, non-profit institution in Costa Rica. In the United States, four schools were located in California, one in Maine, and one in Oregon. The percentage of students receiving free and reduced-price lunch ranged from 21% to 77%. All schools had returned to full in-person learning 4 - 6 months prior to the intervention, except one school that was still using a hybrid approach. In total, the intervention was shared with 317 students.
Teachers were encouraged to invite their students to participate in the research surveys; however, the research component was optional. A total of 239 students opted to complete the pre-test survey. In alignment with the “intention-to-treat” approach (Gupta, 2011), teachers were asked to share the post-test survey with every student who completed the pre-test survey, regardless of intervention adherence. A total of 200 students completed the post-test. Since student participation in the research surveys was voluntary and separate from participation in the intervention, this attrition was largely driven by students choosing to opt out of the second survey. Adherence data from these students were still available and are reported.
Pre-test and post-test surveys were linked using anonymous student ID codes assigned to each student. We only included entries that were clearly the correct match from pre-test to post-test, leaving a sample of 175 students. This attrition was driven by a number of students who completed the post-test, but did not originally complete the pre-test, leading to their student ID code being unmatchable.
Lastly, both the pre-test and post-test surveys included an embedded attention check to identify students who were not closely reading the survey items. The attention check was a randomly placed item stating, “For this question, please simply mark ‘strongly disagree.’ This will let us know you’re paying attention to the survey.” Answer choices were a 6-point Likert scale from Strongly Disagree to Strongly Agree, with Strongly Disagree being the only accepted answer. We only included data from students who passed both the pre-test and post-test attention checks, leaving a final sample of 122 students for data analysis.
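As an illustrative sketch of the matching and attention-check exclusions described above, with hypothetical file and column names (student_id, attn_check) rather than the study's actual data files:

```python
import pandas as pd

# Hypothetical exports of the two anonymous surveys.
pre = pd.read_csv("pretest.csv")    # includes student_id, attn_check, measures
post = pd.read_csv("posttest.csv")

# Link surveys via the anonymous student ID codes; unmatched rows drop out.
matched = pre.merge(post, on="student_id", suffixes=("_pre", "_post"))

# Retain only students who passed the attention check in BOTH surveys
# ("Strongly Disagree" was the only accepted answer).
final = matched[(matched["attn_check_pre"] == "Strongly Disagree") &
                (matched["attn_check_post"] == "Strongly Disagree")]
```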
Demographic information was collected at pre-test. There were 95 freshmen, 19 sophomores, four juniors, and four seniors. Participants were asked what gender they identified with, and 57 said male, 56 said female, two said nonbinary, and seven preferred not to say. The frequency of students identifying with specific races was as follows: Asian—6 (5%); Caucasian—59 (48%); Hispanic/Latino—40 (33%); Black—0 (0%); Native Hawaiian or Other Pacific Islander—2 (2%); Mix of two or more races—9 (7%). Three students selected “Other,” and three students selected “Prefer not to say.”
2.4. Intervention
Students received attention training via a free online intervention called Finding Focus. The intervention was delivered through a custom digital learning platform that allowed students to access it on computers, tablets, or phones. The intervention comprised 2.5 hours of content, including four 12-min lessons and daily 4-min exercises. Content unlocked over 22 days, with one lesson unlocking each week and an exercise unlocking each day. Teachers were encouraged to have students complete the lessons and daily exercises during class.
The intervention was designed to help students learn how to train their ability to focus and apply this skill to relate more effectively to their thoughts, evaluations, and emotions. The weekly lessons presented three fundamental skills: anchoring, focusing, and releasing (Mrazek et al., 2017). Anchoring was defined as deciding where to focus. Focusing was defined as directing your attention to a specific thing. Releasing was defined as letting something go by not giving it any more attention.
These three fundamental skills were trained through daily exercises. During each exercise, students were encouraged to deliberately anchor their attention on a specific aspect of their experience, such as the sensations of their breathing or the sounds of the music. Students focused on this anchor and released all other distracting thoughts and perceptions outside of their present experience. Students also learned how to use the skills of anchoring, focusing, and releasing in daily life by applying specific strategies such as re-focusing (releasing a counterproductive thought and choosing a more worthwhile anchor) and re-evaluating (releasing an unhelpful evaluation and focusing on a more empowering one). As such, the digital intervention had a strong emphasis on training students to reduce the impact of internal distractions.
The digital learning platform also provided content tailored to the interests of individual users. For example, students indicated their preferred music genre and then received daily exercises in this genre. Each student was encouraged to complete the intervention independently during class time; however, students were also able to complete content on their own outside of class. The platform additionally provided teachers with an interface to track student progress throughout the intervention.
2.5. Measures
Validated self-report instruments were used whenever possible. In cases where no validated instrument existed to address the specific research question of interest, researcher-developed measures were used. All of these measures were written to maximize face validity, using vocabulary appropriate for adolescents. The order of instruments in both surveys, as well as the order of questions within each instrument, was randomized to prevent carry-over effects. The exact wording of all instruments is posted on the Open Science Framework: https://doi.org/10.17605/OSF.IO/XS8RZ
Fidelity of implementation (FOI). The intervention’s FOI was objectively monitored through the digital learning platform, which automatically records whether and when every student completes each lesson and exercise of the intervention. The platform recorded completion of lessons and daily exercises for all students who created accounts regardless of whether they completed the optional pre-test or post-test surveys. Due to the anonymous nature of the survey, adherence data could not be linked to survey data.
Additionally, students were asked two yes-or-no questions in the post-test survey: whether their teacher set an expectation for completing the intervention, and whether they were given some sort of credit for completing the course.
Life satisfaction. One question, an adapted version of the Cantril Scale for adolescents, was used to determine a participant’s satisfaction with their current life (Mazur et al., 2018). Participants were presented with a picture of a ladder and told to imagine that the top of the ladder represented the best possible life for them and the bottom of the ladder represented the worst possible life. Participants responded to the question, “Where on the ladder do you feel you stand at the present time?” on a scale from 0 - 10 where higher values indicated higher life satisfaction.
Positive and negative affect. The Scale of Positive and Negative Experiences (SPANE) is a 12-item scale asking participants to rate how often they experience various emotional states (Diener et al., 2010). This scale produces a score for positive feelings (6 items; pre-test: α = 0.892; post-test: α = 0.879) and a score for negative feelings (6 items; pre-test: α = 0.844; post-test: α = 0.833), and the difference of the two can be calculated to create an affect ratio score. Participants responded on a scale of 1-Very Rarely to 5-Very Often based on what they had experienced over the previous four weeks. Scores were averaged across the positive items and the negative items separately, with higher scores indicating stronger emotional experience. The mean negative affect score was then subtracted from the mean positive affect score to obtain the overall affect ratio. Scores on each subscale could range from 1 to 5, while the affect ratio could range from -4 to 4, with more positive values indicating stronger positive affect relative to negative affect.
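A minimal scoring sketch for the SPANE, assuming hypothetical item columns pos_1 through pos_6 and neg_1 through neg_6, each rated 1 - 5:

```python
import pandas as pd

POS_ITEMS = [f"pos_{i}" for i in range(1, 7)]
NEG_ITEMS = [f"neg_{i}" for i in range(1, 7)]

def score_spane(df: pd.DataFrame) -> pd.DataFrame:
    """Return SPANE subscale means and the affect ratio for each respondent."""
    scores = pd.DataFrame(index=df.index)
    scores["positive_affect"] = df[POS_ITEMS].mean(axis=1)  # range 1 to 5
    scores["negative_affect"] = df[NEG_ITEMS].mean(axis=1)  # range 1 to 5
    # Affect ratio: positive minus negative, range -4 to 4.
    scores["affect_ratio"] = scores["positive_affect"] - scores["negative_affect"]
    return scores
```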
Stress management. A brief 4-item scale measured each participant’s ability to effectively manage their stress. Participants responded on a scale of 1-Strongly Disagree to 6-Strongly Agree (“I know how to manage my stress in healthy ways”; pre-test: α = 0.835; post-test: α = 0.867). Participant scores for this scale were averaged across the items, with higher scores indicating better stress management skills.
Resilience. The Brief Resilience Scale (BRS) is a 6-item scale assessing each participant’s ability to bounce back or recover from stress (Smith et al., 2008). Participants responded on a scale of 1-Strongly Disagree to 5-Strongly Agree (“I tend to bounce back quickly after hard times”; pre-test: α = 0.856; post-test: α = 0.850). Participant scores for this scale were averaged across the items, with higher scores indicating stronger resilience. Where necessary, items were reverse coded.
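Reverse coding on an agreement scale maps each raw response to (scale minimum + scale maximum) minus the raw value. For the BRS's 1 - 5 scale, a minimal sketch with hypothetical column names:

```python
import pandas as pd

# One hypothetical respondent's BRS answers (items rated 1 - 5).
brs = pd.DataFrame([{"brs_1": 4, "brs_2": 2, "brs_3": 5,
                     "brs_4": 1, "brs_5": 3, "brs_6": 2}])

# Negatively worded items are reverse coded: raw -> 6 - raw on a 1 - 5 scale.
rev_items = ["brs_2", "brs_4", "brs_6"]  # the negatively worded BRS items
brs[rev_items] = 6 - brs[rev_items]

# Resilience is the mean across all six items after reverse coding.
resilience = brs.mean(axis=1)
```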
Depression and anxiety. The Patient Health Questionnaire for Depression and Anxiety (PHQ-4) consists of two brief 2-item subscales assessing depression (“Feeling down, depressed, or hopeless”; pre-test: α = 0.689; post-test: α = 0.845) and anxiety (“Not being able to stop or control worrying”; pre-test: α = 0.852; post-test: α = 0.887). Participants responded to each item based on how often they had been bothered by these problems within the previous two weeks on a four-point scale from 0-Not at all to 3-Nearly every day. Participant scores for each subscale were summed, with higher scores indicating more severe symptoms of depression or anxiety. The total score for each subscale could range from 0 to 6 (Kroenke et al., 2009). Scale scores of ≥3 were suggested as cut-off points between the normal range and probable cases of clinical levels of depression or anxiety (Kroenke et al., 2003; Kroenke et al., 2007; Löwe et al., 2005).
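A corresponding scoring sketch for the PHQ-4, again with hypothetical item names (each item scored 0 - 3):

```python
import pandas as pd

# Hypothetical responses: two depression items and two anxiety items (0 - 3).
phq = pd.DataFrame([{"dep_1": 2, "dep_2": 1, "anx_1": 3, "anx_2": 1}])

# Each subscale is the sum of its two items (range 0 - 6).
phq["depression"] = phq[["dep_1", "dep_2"]].sum(axis=1)
phq["anxiety"] = phq[["anx_1", "anx_2"]].sum(axis=1)

# Subscale scores of 3 or more flag probable clinical-level symptoms
# (Kroenke et al., 2009).
phq["dep_clinical"] = phq["depression"] >= 3
phq["anx_clinical"] = phq["anxiety"] >= 3
```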
Emotion regulation. The Emotion Regulation Questionnaire for Children and Adolescents (ERQ-CA) is a version of the Emotion Regulation Questionnaire adapted to be more appropriate for ages 10 - 18 (Gullone & Taffe, 2012). This scale consists of two subscales assessing cognitive reappraisal (“I control my feelings about things by changing the way I think about them”) and expressive suppression (“When I’m feeling bad (e.g., sad, angry, or worried), I’m careful not to show it”). All questions were answered on a scale of 1-Strongly Disagree to 6-Strongly Agree (pre-test: α = 0.773; post-test: α = 0.853). Given ambiguity regarding the appropriateness of expressive suppression as a healthy strategy for emotion regulation, only the cognitive reappraisal subscale was included. Participant scores were averaged across the cognitive reappraisal items, with higher scores indicating stronger reappraisal skills.
2.6. Data Analysis Overview
Intervention adherence was assessed using the average percentage of lessons and exercises that students completed at each school. Student outcomes were assessed using the quantitative self-report measures from the surveys. For ease of interpretation, paired t-tests were used to examine changes in quantitative data from pre-test to post-test¹. Given that the study used a one-group design without a control condition, all changes from pre-test to post-test should be interpreted as preliminary.
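A minimal sketch of this analysis in Python; the effect sizes reported below are consistent with Cohen's d computed on the difference scores (i.e., d = t/√n), which is the convention assumed here:

```python
import numpy as np
from scipy import stats

def paired_change(pre: np.ndarray, post: np.ndarray):
    """Paired t-test with Cohen's d computed on the difference scores."""
    t, p = stats.ttest_rel(post, pre)      # paired-samples t-test
    diff = post - pre
    d = diff.mean() / diff.std(ddof=1)     # equivalent to t / sqrt(n)
    return t, p, d

# Example usage with simulated matched pre/post scores for 122 students.
rng = np.random.default_rng(0)
pre = rng.normal(6.2, 2.0, size=122)
post = pre + rng.normal(0.8, 1.7, size=122)
print(paired_change(pre, post))
```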
3. Results
3.1. Fidelity of Implementation (FOI)
Across the sample of 122 students, 63% reported that their teacher set a clear expectation that they should complete the lessons and daily exercises. Only 42% of students said that they were given some sort of credit for completing the course. On average, 88% of lessons were completed (96% for lesson 1; 90% for lesson 2; 87% for lesson 3; 79% for lesson 4). Students completed 81% of the daily exercises.²
3.2. Baseline Measures
The nine measures of mental health and well-being were highly associated with one another. Table 1 shows the correlation matrix for all measures at pre-test before students completed the intervention.
Table 1. Spearman correlation matrix of all measures at pre-test.
Variable              | 1       | 2       | 3        | 4       | 5       | 6       | 7        | 8         | 9
1. Life Satisfaction  | 1       |         |          |         |         |         |          |           |
2. Positive Affect    | 0.513*  | 1       |          |         |         |         |          |           |
3. Negative Affect    | −0.506* | −0.653* | 1        |         |         |         |          |           |
4. Affect Ratio       | 0.550*  | 0.896*  | −0.912*  | 1       |         |         |          |           |
5. Stress Management  | 0.465*  | 0.557*  | −0.560*  | 0.620*  | 1       |         |          |           |
6. Resilience         | 0.335*  | 0.518*  | −0.606*  | 0.620*  | 0.626*  | 1       |          |           |
7. Depression         | −0.560* | −0.634* | 0.579*   | −0.667* | −0.493* | −0.460* | 1        |           |
8. Anxiety            | −0.339* | −0.496* | 0.684*   | −0.658* | −0.581* | −0.579* | 0.448*   | 1         |
9. Emotion Regulation | 0.322*  | 0.457*  | −0.248** | 0.385*  | 0.390*  | 0.325*  | −0.266** | −0.213*** | 1
Note. *p < 0.0001; **p < 0.01; ***p < 0.05; Affect Ratio represents Positive Affect minus Negative Affect.
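A matrix like Table 1 can be generated from a wide-format score file; below is a minimal sketch with hypothetical file and column names:

```python
import pandas as pd

# Hypothetical wide-format file: one row per student, one column per measure.
scores = pd.read_csv("pretest_scores.csv")
measures = ["life_satisfaction", "positive_affect", "negative_affect",
            "affect_ratio", "stress_management", "resilience",
            "depression", "anxiety", "emotion_regulation"]

# Spearman rank correlations among all nine pre-test measures.
corr = scores[measures].corr(method="spearman")
print(corr.round(3))
```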
3.3. Assessment of Changes Over Time in Student Outcomes
Life satisfaction. Students reported a significant increase in life satisfaction from pre-test (M = 6.22, SD = 2.00) to post-test (M = 7.05, SD = 1.74), t(120) = 5.328, p < 0.001, d = 0.48. Results for changes in life satisfaction are shown in Figure 1(a).
Positive and negative affect. Scores for affect were categorized into positive affect, negative affect, and an affect ratio serving as the ratio of positive emotions relative to negative emotions. Students reported a significant increase in positive affect from pre-test (M = 3.53, SD = 0.74) to post-test (M = 3.68, SD = 0.73), t(121) = 2.838, p = 0.005, d = 0.26. The intervention did not have a significant effect on negative affect from pre-test (M = 2.63, SD = 0.82) to post-test (M = 2.57, SD = 0.76), t(121) = -1.100, p = 0.274, d = -0.10. However, driven by the increase in positive affect, students did report a significant increase in their affect ratio from pre-test (M = 0.90, SD = 1.41) to post-test (M = 1.11, SD = 1.31), t(121) = 2.355, p = 0.020, d = 0.21. Results for the change in affect ratio are shown in Figure 1(b).
Note. All error bars represent 95% confidence intervals.
Figure 1. Average Changes in Key Measures from Pre-test to Post-test.
Stress management. Students reported a significant increase in stress management from pre-test (M = 3.75, SD = 1.02) to post-test (M = 4.17, SD = 0.97), t(121) = 5.828, p < 0.001, d = 0.53, as shown in Figure 1(c).
Resilience. Students reported a significant increase in resilience from pre-test (M = 3.09, SD = 0.76) to post-test (M = 3.21, SD = 0.73), t(120) = 2.327, p = 0.022, d = 0.21, as shown in Figure 1(d).
Depression and anxiety. Scores were calculated separately for symptoms of depression and anxiety. Students reported a decrease in depression from pre-test (M = 1.92, SD = 1.71) to post-test (M = 1.53, SD = 1.73), t(121) = -2.833, p = 0.005, d = -0.26. Additionally, students reported a decrease in anxiety from pre-test (M = 2.44, SD = 1.90) to post-test (M = 2.07, SD = 1.69), t(121) = -2.715, p = 0.008, d = -0.25. Results for symptoms of depression and anxiety are shown in Figure 1(e) and Figure 1(f), respectively. Effects were also observed specifically among students experiencing clinical levels of depression and anxiety. At pre-test, 8.5% of students scored as only clinically depressed, 21.7% scored as only clinically anxious, and 21.7% scored as both clinically depressed and anxious. Students who scored as clinically depressed at pre-test reported a significant decrease in their levels of depression from pre-test (M = 4.09, SD = 1.14) to post-test (M = 2.82, SD = 1.66), t(10) = -2.51, p = 0.031, d = -0.76. Similarly, students who scored as clinically anxious at pre-test reported a significant decrease in their levels of anxiety from pre-test (M = 3.89, SD = 1.01) to post-test (M = 2.52, SD = 1.37), t(26) = -4.76, p < 0.001, d = -0.92.
Emotion regulation. Contrary to previous work (Mrazek et al., 2019b), students did not show a significant increase in levels of emotion regulation from pre-test (M = 3.89, SD = 0.78) to post-test (M = 3.99, SD = 0.82), t(121) = 1.382, p = 0.170, d = 0.13.
4. Discussion
How well students regulate their attention has important implications for their mental health and well-being. Although existing research indicates that other well-being interventions can lead to positive outcomes among adolescents, these interventions have yet to accomplish a standardized and scalable approach, while simultaneously engaging students (Das et al., 2016; Baños et al., 2017). The primary goal of this research was to investigate the preliminary effectiveness of a digital intervention for attention training on high school students’ well-being in the post-pandemic era.
The present research found that a 22-day attention training intervention could be delivered digitally in classrooms with strong fidelity of implementation and benefits to well-being. Although the one-group pre-post design precludes definitive conclusions about whether the intervention benefited students, there were significant increases in self-reported life satisfaction, positive affect, stress management, and resilience. Reductions in depression and anxiety were also reported, even among the small sample of those with clinical levels of symptoms.
Despite these changes, students did not show a significant increase in levels of emotion regulation. This is contrary to previous work (Mrazek et al., 2022; Mrazek et al., 2020; Mrazek et al., 2019b), and future research will be necessary to replicate this null effect and determine why students’ emotion regulation was not impacted. This null effect does not mean there is no true effect, just that one was not detected; for instance, with a bigger sample size and greater power, the effect might have been detected. However, this null effect also suggests that students’ responses may not have been driven by motivation effects or demand characteristics, which would predict improvements across all measures. In fact, the emotion regulation measure was the measure most explicitly linked to intervention content. In Lesson 3 of the intervention, students learned how to use the skill of re-evaluating (i.e., cognitive reappraisal) to alter their emotional experience, and the emotion regulation measure includes items specifically about their ability to use this skill. Therefore, it is surprising not to see changes on this measure, but it does provide some reason to believe that students were not simply responding in ways intended to please the researchers.
Finally, it is worth considering effect sizes in the context of educational interventions. Effect sizes are a critical metric for evaluating the success of educational interventions, but they must be interpreted carefully and in light of the specific context of the intervention. A standardized mean difference effect size of 0.20 is often considered a large effect in real-world education settings, and brief interventions often have effect sizes of 0.10 standard deviations or lower (Yeager et al., 2019). Researchers should be cautious about interpreting such small effect sizes as indicating a lack of meaningful impact. Educational interventions that show small or moderate short-term effects can still be valuable, particularly if they have the potential to produce larger, long-term effects (Yeager et al., 2018). In the present study, effect sizes were larger than hypothesized. The two largest effects were for stress management (d = 0.53) and life satisfaction (d = 0.48). If we were to speculate why these variables showed the greatest change, it may be because adolescence is a developmental stage with high stress yet low awareness of effective coping strategies. Students reported that, after completing the intervention, they felt more capable of managing their stress. Perhaps learning concrete tools to use on a moment-to-moment basis paves the way for students to feel more satisfied with their lives as a whole.
4.1. Further Insight into School-Based Interventions
As described previously, common barriers to implementing interventions in schools include: i) lack of time, ii) lack of resources, iii) lack of funding, iv) staff turnover, and v) sometimes even difficulties with the interventions themselves (Barry et al., 2020; Forman et al., 2009; Pinkelman et al., 2015; Rasmussen et al., 2020; Arnold et al., 2021). In order to preemptively address these barriers, this study utilized various approaches to make the intervention more accessible and feasible. To combat the barrier of time, we designed the intervention to take just 2.5 hours in total. To combat the barrier of resources, we checked in with each teacher to see what they might be missing. All schools were 1:1 with digital devices provided by the district, but not all schools had a sufficient supply of headphones. Given this close communication, our team was able to send extra headphones in advance. To combat the barrier of funding, we shared the intervention with all schools free of charge. To combat the barrier of staff turnover, we communicated with the principal at each school to find the right classroom in which to share the intervention, and we touched base regularly to outline our goals and expectations for greater transparency. Finally, to combat issues with the digital intervention itself, our team conducted regular quality assurance tests to identify and fix bugs in the software before students or teachers encountered them. We believe these efforts played a major role in the high completion rates by students (88% of lessons; 81% of daily exercises).
Although the implementation of the attention training intervention was successful, the real-world research assessment of the intervention posed its own logistical challenges. Many students who completed the pre-test survey did not complete the post-test survey, and vice versa. Despite providing each teacher with a real-time report of which students had finished the survey, collecting complete data from all students was difficult. Additionally, 30% of the students did not read the survey items carefully, as indexed by failing an attention check. Future efforts should aim to create more “buy-in” from students throughout the entire research process in order to obtain larger sample sizes with reliable data. One potential solution would be to design the research surveys to be more enjoyable, with a more engaging and rewarding user experience (UX). Callahan et al. (2017) recommend measuring the “social validity” of an intervention to assess individuals’ satisfaction with the goals and outcomes of an intervention. Such satisfaction could also be assessed regarding the research experience itself.
4.2. Limitations and Future Directions
Despite the promising outcomes of this intervention in high school settings, this study has several limitations. First, due to the lack of a control group, changes in student outcomes must be considered preliminary. Without a control condition, it cannot be determined whether the intervention or a variety of other confounds, such as student maturation, teacher influence, testing effects, or the passage of time, were responsible for the changes observed. Therefore, causal claims are minimized, as these potential confounding variables cannot be accounted for. Similarly, this work may be subject to regression to the mean: the tendency for extreme scores on a measure to move closer to the group mean upon retesting, even in the absence of any intervention. In the present study, this may have inflated the observed changes from pre- to post-intervention, as individuals with extreme baseline scores may have shown less extreme scores at follow-up for this reason rather than because of the intervention itself. Although we attempted to mitigate this by using multiple conceptually similar measures, we cannot rule out the possibility that regression to the mean affected our results. Future work should evaluate the efficacy of the intervention using a randomized controlled trial to rule out these alternative explanations.
As outlined by Simons and colleagues, an appropriate control group is necessary for intervention studies to draw any causal conclusions. However, it is common for preliminary research to use a limited research design that is later tested against more rigorous standards (Simons et al., 2016). Although a randomized controlled trial would unequivocally provide greater confidence in the causality of these results, we believe that scientific understanding is an incremental process in which small steps forward are often necessary to eventually achieve more rigorous and definitive studies. Such proof of concept studies are extremely common in clinical and education research. In fact, promising interventions for improving university students’ pandemic-related mental health issues have used this same one-group pre-post design (Gabrielli et al., 2021). Publishing simpler studies with limitations can provide a foundation for future research to build upon, helping to gradually refine and improve study designs. This is particularly important during the development phase of intervention design, when efficacy trials are not yet warranted. Particularly when conducting intervention research in school settings, a single-group pre-post design is much more feasible. Therefore, the present study design was a practical approach to collecting meaningful data on the intervention.
A second limitation is that there was significant attrition in the student sample from pre-test to post-test. For the assessment of this intervention’s ability to scale, it is important to note that this attrition was not among the intervention sample: 88% of students completed the lessons and 81% completed the daily exercises. Instead, the concerning attrition was among the research sample (i.e., the surveys administered before and after the intervention). This attrition was largely driven by different sets of students choosing to complete each survey, leading to unmatchable data. The participating school districts offered the research component to students as an optional “opt-in” experience; therefore, some students decided to complete only one of the two surveys. Future research should collaborate more closely with teachers, schools, and districts to emphasize the importance of students completing both surveys.
Third, given that the data in this study are self-reported, there is potential for various biases to impact the validity of the results. For example, improvements in the surveys could be attributed to participant bias, with students conforming their answers to what they think will please the researchers. Additionally, differences in students’ emotional states or cognitive biases at the time of each survey may have influenced responses. To mitigate this, the current study used mostly well-known, validated self-report instruments to address the research questions. In the two cases where no validated instrument existed for the specific research question of interest, researcher-developed measures were used instead. Future research could bolster these findings by including more objective measures, such as impacts on student grades or physical health, to minimize the impact of self-report biases.
Finally, additional studies with larger samples drawn from diverse schools will be necessary to determine to what extent these findings generalize across other student populations. The current sample came from six schools across the United States and one in Central America. In this sample, 33% of students identified as Hispanic, 5% identified as Asian, 7% identified as mixed race, and 0% identified as Black. The sample consisted of similar numbers of male and female students, as well as two students identifying as nonbinary and seven preferring not to say. Socioeconomic data were collected at the school level rather than the individual level. The limited diversity in this study’s sample, specifically with regard to race and socioeconomic status, narrows the generalizability of these results to broader populations of students. Therefore, researchers should consider the potential limitations of this sample when interpreting the findings and drawing conclusions. Although these findings may generalize to broader samples of high school students, larger samples will allow researchers to detect smaller effect sizes and conduct moderation analyses.
4.3. Conclusion
Attention plays an important role in the management of stress and emotions. In this post-pandemic era, with adolescent students’ mental health in a vulnerable state, the need for reliable and accessible mental health interventions that can be used in school settings is enormous. This research investigated the preliminary effectiveness of a digital attention training intervention on high school students’ well-being, finding improvements in life satisfaction, positive affect, stress management, and resilience, as well as decreases in depression and anxiety. The current study also emphasized the usefulness of proof of concept studies as a strategic first step before implementing more rigorous assessments in school settings. This study highlighted obstacles in school-based research that have contributed to a significant “research-to-practice” gap and offered solutions to many common challenges. Therefore, this research provides important contributions toward addressing the growing need for effective, usable, accessible, and scalable digital mental health interventions for high school students. Future research should continue to explore whether digital attention training interventions, like Finding Focus, can be a promising path to improving the mental health and well-being of adolescent students at scale, particularly in this new post-pandemic era.
NOTES
¹An additional analytic method was used to account for the effect of school: multilevel regression with testing occasions nested within students and school included as a fixed factor. Given that there were only seven schools in the dataset, including school as a fixed factor was the appropriate approach to control for any confounding influences (Maas & Hox, 2005). All effects remained significant using this analytic method (all ts > 2.35, all ps < 0.02).
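A sketch of such a model in Python's statsmodels, using simulated data and hypothetical column names (an illustrative translation, not the authors' original code):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: two testing occasions per student.
rng = np.random.default_rng(0)
n = 122
df_long = pd.DataFrame({
    "student_id": np.repeat(np.arange(n), 2),
    "time": ["pre", "post"] * n,
    "school": np.repeat(rng.integers(1, 8, size=n), 2),
    "score": rng.normal(4.0, 1.0, size=2 * n),
})

# Random intercepts for students (occasions nested within students),
# with time and school entered as fixed factors.
model = smf.mixedlm("score ~ C(time) + C(school)", data=df_long,
                    groups=df_long["student_id"])
result = model.fit()
print(result.summary())
```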
²These adherence data were collected for six of the seven participating teachers. The remaining teacher displayed the lessons and daily exercises on a collective screen, so accurate per-student completion rates are not available.