Burst versus continuous delivery design in digital mental health interventions: Evidence from a randomized clinical trial

Marta Anna Marciniak (1,2), Lilly Shanahan (1,3), Kenneth S L Yuen (4,5), Ilya Milos Veer (6,7), Henrik Walter (7), Oliver Tuescher (4,5,8), Dorota Kobylińska (9), Raffael Kalisch (4,5), Erno Hermans (10), Harald Binder (11,12) and Birgit Kleim (1,2)
Abstract
Objective: Digital mental health interventions delivered via smartphone-based apps effectively treat various conditions; however, optimizing their efficacy while minimizing participant burden remains a key challenge. In this study, we investigated the potential benefits of a burst delivery design (i.e. interventions delivered only in pre-defined time intervals) in comparison to the continuous delivery of interventions.
Methods: We randomly assigned 93 participants to the continuous delivery (CD) or burst delivery (BD) group. The CD group engaged with ReApp, a mobile app designed to increase positive cognitive reappraisal, on a consistent delivery schedule of five prompts per day throughout the 3-week study, while the BD group received the five daily prompts only in the first and third weeks of the study.
Results: No significant differences were found between the groups in terms of adherence, mental health outcomes (specifically depressive and anxiety symptoms), level of perceived stress, or perceived helpfulness of the intervention. The BD group showed a significant decrease in the perceived difficulty of the intervention over time.
Conclusions: The results suggest that burst delivery may be as suitable for digital mental health interventions as continuous delivery. The perceived difficulty of the intervention declined more steeply in the BD group, indicating that burst delivery improved the feasibility of the positive cognitive reappraisal intervention without compromising its efficacy. This outcome may inform the design of less burdensome interventions with improved outcomes in future research.
Keywords
Ecological momentary intervention, digital health, mental health, intervention delivery, burst delivery design, reappraisal
Submission date: 19 October 2023; Acceptance date: 8 April 2024
1 Department of Psychology, University of Zurich, Zurich, Switzerland
2 Department of Psychiatry, Psychotherapy and Psychosomatics, Psychiatric University Hospital (PUK), University of Zurich, Zurich, Switzerland
3 Jacobs Center for Productive Youth Development, University of Zurich, Zurich, Switzerland
4 Leibniz Institute for Resilience Research (LIR), Mainz, Germany
5 Neuroimaging Center (NIC), Focus Program Translational Neuroscience (FTN), Johannes Gutenberg University Medical Center, Mainz, Germany
6 Department of Developmental Psychology, University of Amsterdam, Amsterdam, The Netherlands
7 Research Division of Mind and Brain, Department of Psychiatry and Psychotherapy CCM, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
8 Institute for Molecular Biology (IMB), Mainz, Germany
9 Institute for Psychology, University of Warsaw, Warsaw, Poland
10 Radboud University Medical Center, Donders Institute for Brain, Cognition, and Behaviour, Nijmegen, The Netherlands
11 Institute of Medical Biometry and Statistics, Faculty of Medicine and Medical Center, University of Freiburg, Freiburg, Germany
12 Freiburg Center for Data Analysis and Modelling, University of Freiburg, Freiburg, Germany

Corresponding author:
Marta Anna Marciniak, Department of Psychology, University of Zurich, Lengstrasse 31, 8032 Zurich, Switzerland.
Email: marta.marciniak@uzh.ch
Introduction
Digital mental health interventions delivered via smartphones are widely used in both preventative and intervention contexts,1–4 demonstrating effectiveness both as standalone treatments5 and as blended therapies.6 They are usually administered as ecological momentary interventions (EMIs), that is, treatments delivered to a person in real-life settings,7 and typically tested in randomized trial designs. While they are readily available for dissemination, cost-effective, and easily accessible,8 the factors contributing to their effectiveness have not yet been widely investigated.8,9
Previous studies identified the components of cognitive–behavioral therapy (CBT), gamification, personalization, and social relationships as potential factors enhancing the efficacy and acceptability of EMIs.10–12 However, the impact of the EMI delivery design remains largely unexplored. Existing reviews and meta-analyses on EMI research have focused on the continuous delivery of interventions, wherein participants use the app consistently throughout the whole study,2,5,13–16 in contrast to the burst design, wherein participants engage only during pre-defined time intervals. The same can be said for Just-in-Time Adaptive Interventions (JITAIs), which are EMIs that adapt over time to a person's changing internal and contextual state.17 JITAIs are typically tested in adjustable, changing-over-time study designs18 (e.g. in terms of the active components of the intervention or the frequency of the intervention). However, to the best of our knowledge, no mental health-oriented JITAIs have been tested with different delivery designs. Thus, there is no empirical evidence that the continuous delivery design is actually the most effective in reducing mental health symptoms, increasing adherence, or enhancing participant engagement.
At the same time, the burst delivery design has been a longstanding practice in ecological momentary assessment (EMA) studies, where it is used to capture the dynamics of psychological processes.19 Applying the burst delivery design to EMIs holds promise for addressing a key challenge in the mobile health (mHealth) field, namely, reducing participants' burden and increasing their engagement, especially in trials with extended intervention periods lasting several months. Indeed, a new EMI study aims to employ the burst delivery design, using a 9-month design with intervention bursts every 3 weeks, alternating with "quiet" periods during which participants use the intervention at their own discretion without being reminded about its usage.20 Potentially, the burst delivery design could also increase the effectiveness of EMIs. Previous research suggests that interventions are most effective when participants apply previously learned cognitive strategies at the moments when they are needed most.21 However, such studies have so far not been conducted in digital mental health settings.
The current study aims to investigate the differences between two key delivery design features, both within- and between-conditions, namely, continuous and burst delivery designs of digital mental health interventions, with regard to the following outcomes:
• adherence to the EMI;
• changes in the perceived helpfulness and difficulty of the intervention throughout the study;
• changes in mental health indices (specifically depressive and anxiety symptoms) and perceived stress in baseline versus follow-up assessments; and
• changes in target engagement, that is, changes in the tendency to use the CBT component included in the EMI in baseline versus follow-up assessments.
To investigate the abovementioned differences, we implemented an EMI mobile app, called ReApp, for this randomized clinical trial.22 ReApp is based solely on the therapeutic component of positive cognitive reappraisal (PCR), which eliminates the confounding factors associated with using multiple therapeutic targets and strategies within one EMI. Moreover, PCR is a well-researched and central CBT component23,24 and a core resilience factor25,26 that encourages users to find positive reinterpretations of events they appraise as stressful or negative.27 Thus, findings pertaining to this therapeutic target can hold high relevance for both clinical and preventative practices.
Methods
Study design
This study was a three-armed randomized controlled trial, but the current manuscript focuses on the two arms involving 93 participants who received the intervention and were randomly allocated to either the continuous delivery (CD) or the burst delivery (BD) design group. The third arm, which represents the control condition, did not receive an intervention and is not included in the current work; please see Marciniak et al.22 for details. The study proposal received approval from the Ethics Committee for the Faculty of Arts and Social Sciences at the University of Zurich (approval #21.2.12). The ClinicalTrials.gov identifier was NCT05784831. We followed the CONSORT guidelines while preparing the manuscript.
Ecological momentary intervention
ReApp, the intervention used in this study, comprised two components: EMA and EMI. The EMA component, adapted from Vaessen et al. and Wackerhagen et al.,28,29 was employed to assess the participants' daily mood. The participants rated short sentences, such as "I feel sad" and "I feel peaceful," on a scale from 1 (not at all) to 7 (very much). EMA was delivered five times per day at pseudo-random times within 3-hour windows between 8:00 AM and 10:30 PM. Three times daily, EMA was combined with the EMI component, with participants recalling a stressful event and generating three positive reappraisals for this event.
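As an illustration of this sampling scheme, the sketch below draws five pseudo-random prompt times per day, one within each of five equal windows between 8:00 AM and 10:30 PM. This is a hypothetical reconstruction, not the actual ReApp scheduler; the function name, the equal-width windows, and the uniform sampling within each window are our assumptions.

```r
# Hypothetical sketch of the prompt schedule: five pseudo-random times per day,
# one within each of five equal windows spanning 08:00-22:30
# (14.5 h / 5 = 2.9 h per window). Not the actual ReApp scheduling code.
sample_prompt_times <- function(start_h = 8, end_h = 22.5, n_prompts = 5) {
  window_length <- (end_h - start_h) / n_prompts            # 2.9 hours per window
  window_starts <- start_h + window_length * (seq_len(n_prompts) - 1)
  prompt_hours  <- window_starts + runif(n_prompts, 0, window_length)
  hh <- as.integer(prompt_hours)                            # whole hours
  mm <- as.integer((prompt_hours - hh) * 60)                # remaining minutes
  sprintf("%02d:%02d", hh, mm)                              # formatted as "HH:MM"
}

set.seed(123)
sample_prompt_times()   # one prompt time per window, in chronological order
```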
Measures
We calculated adherence (i.e. the number of surveys completed by each participant) to investigate the differences between the groups in terms of engagement with ReApp. To investigate the perceived helpfulness and difficulty of the EMI, we extracted two questions from the EMI protocol, namely, "I feel better after reappraising this experience" and "It was hard for me to reappraise this experience," which were rated on a scale from 1 (not at all) to 7 (very much) after each PCR exercise performed within the app during the 21 days of the study. To compare the effects of the EMI on mental health, we assessed depressive symptoms, anxiety, and perceived stress at the baseline and follow-up meetings (Figure 1) using the German translations of the Beck Depression Inventory-II (BDI-II),30 the State–Trait Anxiety Inventory (STAI)31 and the Perceived Stress Scale (PSS).32 The self-reported tendency to use PCR was indexed by the Positive Reappraisal subscale of the Cognitive Emotion Regulation Questionnaire (CERQ)33 and the Cognitive Reappraisal scale of the Emotion Regulation Questionnaire (ERQ).27

Figure 1. Study design.
Study procedure
The enrolment of participants began in April 2021 and concluded in May 2022. Individuals interested in the study completed a short online screening form. The inclusion criteria were as follows: (1) being a student of a higher education institution; (2) being 18–29 years old; (3) having sufficient knowledge of the German language; (4) being a smartphone user; and (5) obtaining 12 points or less on the 20-point Positive Reappraisal subscale of the CERQ. This cutoff score was set based on the mean score obtained from a sample mirroring the target population (i.e. healthy Swiss students of a higher education institution). The exclusion criteria were self-reported mental illness or ongoing psychotherapy. Individuals who fit the inclusion criteria were eligible to participate in the study, and all participants provided written informed consent. Randomization into the groups was conducted with an n = 3 block algorithm generated by an independent researcher, meaning that three participants were grouped together for randomization to the study conditions. At the baseline meeting, which was conducted online due to restrictions related to the COVID-19 pandemic, the participants were familiarized with the definition and examples of PCR and were instructed on how to use the EMI, including the information that they could self-initiate it whenever they felt they could benefit from it. After the online baseline meeting, the participants used ReApp for 3 weeks, with the CD group receiving five prompts reminding them to use ReApp every day. The BD participants did not receive these prompts in the second week of the study but were encouraged to use the app freely during this time depending on their needs. After 3 weeks, the participants completed follow-up questionnaires (Figure 1). They received charts showing their mood fluctuations during the period of app usage and were remunerated with up to 105 Swiss francs or six university credit points to compensate for their time.
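The block randomization described above can be sketched as follows. This is a hypothetical example that assumes permuted blocks of three with one slot per arm of the full three-arm trial; the actual allocation list was generated by an independent researcher and its exact algorithm is not reported here.

```r
# Hypothetical sketch of permuted-block randomization with blocks of n = 3,
# assumed here to contain one slot per arm of the three-arm trial.
block_randomize <- function(n_participants,
                            arms = c("continuous", "burst", "control")) {
  n_blocks <- ceiling(n_participants / length(arms))
  # each block is a random permutation of the arms, keeping group sizes balanced
  allocation <- unlist(lapply(seq_len(n_blocks), function(i) sample(arms)))
  allocation[seq_len(n_participants)]
}

set.seed(2021)
table(block_randomize(147))   # roughly equal allocation across the three arms
```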
Analysis
All analyses were performed in R (version 4.0.4) using RStudio (version 1.4.1).
The sample size calculation was conducted with the sjstats package.34 There were no publications comparing the burst and continuous delivery designs; hence, we estimated the expected effect size based on previous work on the effects of EMIs on mental health indices that compared intervention effects to active control conditions.5,16 For an effect size of d = 0.6, an alpha of 0.05, and power of 80%, the required sample size was 94 participants. However, due to a prolonged recruitment process, we stopped recruitment at 93 participants.
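For reference, an equivalent a priori power analysis can be run with base R's power.t.test; the study used the sjstats package, so the exact rounding behind the reported total of 94 may differ slightly from this sketch.

```r
# A priori sample size for an independent two-group comparison with a
# standardized effect of d = 0.6, alpha = .05, and power = .80.
# The paper used the sjstats package; this base-R call is an equivalent sketch,
# and rounding conventions may yield a slightly different total than 94.
pwr <- power.t.test(delta = 0.6, sd = 1, sig.level = 0.05, power = 0.80,
                    type = "two.sample", alternative = "two.sided")
ceiling(pwr$n)       # participants required per group
2 * ceiling(pwr$n)   # total sample size across both groups
```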
Adherence was calculated as the sum of completed surveys. Additionally, we separately indexed the number of completed automatically delivered surveys and the number of user-initiated surveys for each condition. The statistical difference in adherence was evaluated with Student's t-tests. The EMI data were nested within day (21 days), further within participant (93 participants), and within the two conditions. Missing data were imputed with the k-nearest neighbor method (i.e. the mean of the neighbors nearest to the missing value).35,36 With this method, we imputed 9% of the EMI data and none of the questionnaire data. For both the EMI data and the questionnaires, we performed linear mixed-model (LMM) analyses with the nlme package.37 Group and Time were included in the models as fixed effects and interaction terms. We also separately assessed the effect of Time for both groups. We then calculated Nakagawa's marginal R²m values for the LMMs (variance explained by fixed effects) and included these numbers in Supplement 1. When needed, data transformation was performed as described in the Results section. The selection of the most suitable transformation method was based on visual inspection of the data.
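The core model family described above can be summarized in a brief sketch. The data frame, variable names, and the simulated values below are our own stand-ins for the unavailable study data; the kNN imputation step and the square-root or log transformations reported in the Results are omitted for brevity.

```r
library(nlme)   # linear mixed models, as used in the paper

# Simulated stand-in for the EMI data: one row per completed prompt, with
# participant, study day (1-21), delivery group, and prompt-level ratings.
set.seed(1)
emi <- expand.grid(participant = factor(1:93), day = 1:21)
emi$group       <- ifelse(as.integer(emi$participant) <= 49, "CD", "BD")
emi$helpfulness <- round(runif(nrow(emi), 1, 7))
emi$difficulty  <- round(runif(nrow(emi), 1, 7))

# Group x Time interaction model with a random intercept per participant
m_interaction <- lme(helpfulness ~ day * group,
                     random = ~ 1 | participant, data = emi)
summary(m_interaction)

# Effect of Time assessed separately within each delivery group
m_cd <- lme(helpfulness ~ day, random = ~ 1 | participant,
            data = subset(emi, group == "CD"))
m_bd <- lme(helpfulness ~ day, random = ~ 1 | participant,
            data = subset(emi, group == "BD"))

# Nakagawa's marginal R2 (variance explained by the fixed effects) can then be
# obtained, for example with MuMIn::r.squaredGLMM(m_interaction).
# For the questionnaire outcomes, day is replaced by a two-level Time factor
# (baseline vs. follow-up) with the same model structure.
```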
Results
Sample
Out of 756 individuals who completed the online screening, 147 were eligible to participate in the study and were randomly assigned to one of the three groups. Fifty participants were assigned to the control condition; they did not receive an intervention, only a self-monitoring tool, and were not included in the analyses of the current manuscript (please see Marciniak et al.22 for details). Three participants dropped out due to technical problems (i.e. the app did not work on their phones). One dropped out without providing a reason. Ninety-three participants from the BD and CD groups completed the procedure (Figure 2).

Figure 2. CONSORT flow diagram.
Of the 93 participants, 89 were female and four were male. No participants identified as another gender. The mean age of the participants was 22.01 years (SD = 2.42, range 18–29). All participants were German-speaking students of higher education institutions. No significant differences in these characteristics were found between the groups at baseline (Table 1).
Adherence
Each participant from the CD group received a total of 105
prompts, of which 63 included the EMI component. Each
participant from the BD group received 70 prompts, of
which 42 included the EMI component. Both groups had
the option to initiate more EMIs whenever they felt they
could benefit from them.
As expected, there was a significant difference (t = −2.86, p = 0.005) in the completion of the automatically delivered surveys, with the CD group filling in more surveys (M = 49.22, SD = 24.82, 47% of planned surveys vs. M = 35.09, SD = 19.86, 50% of planned surveys in BD), as they were prompted for seven more days. On average, the BD group participants self-initiated more surveys (M = 40.16, SD = 25.98) than the CD group participants (M = 35.74, SD = 29.38), but this difference was not significant (t = 0.59, p = 0.555). Overall, the CD participants completed on average 84.96 surveys (81% of the surveys planned for this group), while the BD participants completed 75.25 surveys (104% of the planned surveys for this group). The difference between the groups in overall adherence (i.e. both prompted and self-initiated surveys) was not significant (t = −1.65, p = 0.103) (Table 2).
Perceived helpfulness of intervention
The intraclass correlation coefficient was significant, ICC(1) = 0.59, p < 0.001, ICC(2) = 0.97. There were no significant effects of Time in either the CD or the BD group (i.e. β = −0.002, p = 0.503, Cohen's f = 0.02, 90% CI [0.00, 0.07] and β = 0.003, p = 0.590, Cohen's f = 0.02, 90% CI [0.00, 0.07], respectively), meaning that both groups showed no differences in the perceived helpfulness of the EMI over the course of the study. However, the CD participants consistently assessed the intervention as more helpful, irrespective of the study time (i.e. β = −0.506, p = 0.031, Cohen's f = 0.21, 90% CI [0.02, 0.39]) (Tables 3, 4 and 5 and Figure 3). The Group × Time (21 days) interaction was not significant (i.e. β = −0.056, p = 0.340, Cohen's f = 0.02, 90% CI [0.00, 0.06]) and did not explain more variance compared to the fixed effect of Group (R²m = 0.026 for both the Group × Time interaction and Group, see Supplement 1).
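For reference, ICC(1) and ICC(2) of the prompt-level ratings can be derived from the variance components of an intercept-only mixed model, as sketched below; the paper does not state which implementation was used, and the data frame, variable names, and the choice of 21 (the number of study days) as the number of ratings per participant are our assumptions.

```r
library(nlme)

# Intercept-only model with a random intercept per participant, fitted to the
# prompt-level helpfulness ratings (data frame `emi` as in the Analysis sketch).
m0 <- lme(helpfulness ~ 1, random = ~ 1 | participant, data = emi)

vc      <- as.numeric(VarCorr(m0)[, "Variance"])
between <- vc[1]   # between-participant variance (random intercept)
within  <- vc[2]   # within-participant (residual) variance

icc1 <- between / (between + within)         # ICC(1): share of between-person variance
k    <- 21                                   # assumed ratings per participant
icc2 <- (k * icc1) / (1 + (k - 1) * icc1)    # ICC(2): reliability of participant means
c(ICC1 = icc1, ICC2 = icc2)
```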
Perceived difficulty of intervention
The intraclass correlation coefficient was significant, ICC(1) = 0.41, p < 0.001, ICC(2) = 0.94. Due to the non-normal distribution of residuals, a data transformation was performed, and visual inspection revealed that the best fit was the model with the square root-transformed data. The Group × Time (21 days) interaction was significant (i.e. β = 0.007, p = 0.001, Cohen's f = 0.08, 90% CI [0.05, 0.12]), and there was a significant effect of Time in the BD group (i.e. β = −0.007, p < 0.001, Cohen's f = 0.13, 90% CI [0.07, 0.18]), but not in the CD group (i.e. β = 0.001, p = 0.343, Cohen's f = 0.03, 90% CI [0.00, 0.08]), indicating that the perceived difficulty of the intervention significantly decreased only in the BD group, as shown in Tables 3, 4 and 5 and Figure 4.

Figure 4. Perceived difficulty of intervention.
Correlations between perceived helpfulness and
difficulty of the intervention
There was a modest but statistically significant negative correlation between the helpfulness and difficulty of the EMI in the full sample (i.e. r = −0.15, p < 0.001, 95% CI [−0.19, −0.11], t(1951) = −6.67). This correlation was, however, more negative in the BD group (i.e. r = −0.22, p < 0.001, 95% CI [−0.28, −0.16], t(943) = −7.06) than in the CD group (i.e. r = −0.06, p < 0.05, 95% CI [−0.12, 0.00], t(1006) = −1.99). These results suggest that reappraisals that were easier for the participants to generate were also perceived as somewhat more helpful, especially in the BD group.
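These pooled, prompt-level correlations can be reproduced with cor.test, again assuming the data frame and variable names introduced in the Analysis sketch; note that, as in the reported values, the ratings are pooled across prompts and the nesting within participants is not modeled.

```r
# Pearson correlation between prompt-level helpfulness and difficulty ratings,
# pooled across all completed prompts (data frame `emi` as in the Analysis sketch).
cor.test(emi$helpfulness, emi$difficulty)    # returns r, t, df, p, and the 95% CI

# The same correlation computed separately within each delivery group
by(emi, emi$group, function(d) cor.test(d$helpfulness, d$difficulty))
```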
Perceived stress scale
The intraclass correlation coefficient was not significant, ICC(1) = 0.13, p = 0.105, ICC(2) = 0.23, indicating a low heterogeneity of the scores. Due to the non-normal distribution of residuals, a data transformation was performed, and visual inspection revealed that the best fit was the model with the square root-transformed data. The Group × Time (baseline vs. follow-up) interaction was not significant (i.e. β = 0.075, p = 0.692, Cohen's f = 0.04, 90% CI [0.00, 0.21]) and did not explain more variance than the fixed effect of Time (R²m = 0.065 for both, see Supplement 1). There was a significant effect of Time in both the CD and BD groups (i.e. β = −0.339, p = 0.015, Cohen's f = 0.36, 90% CI [0.11, 0.61] and β = −3.818, p = 0.002, Cohen's f = 0.51, 90% CI [0.25, 0.78], respectively). This result indicates that both groups decreased their level of perceived stress, as shown in Tables 3, 4 and 5 and Figure 5A.
State–Trait Anxiety Inventory
The intraclass correlation coefficient was significant (i.e. ICC(1) = 0.51, p < 0.001, ICC(2) = 0.67). Due to the non-normal distribution of residuals, a data transformation was performed, and visual inspection revealed that the best fit was the model with the log-transformed data. The Group × Time (baseline vs. follow-up) interaction was not significant (i.e. β = −0.048, p = 0.440, Cohen's f = 0.08, 90% CI [0.00, 0.25]). However, there was a significant effect of Time in the CD group (i.e. β = −0.085, p = 0.009, Cohen's f = 0.39, 90% CI [0.14, 0.64]), indicating a decrease in anxiety symptoms over the course of the study. In the BD group, the effect of Time was not significant (i.e. β = −0.037, p = 0.505, Cohen's f = 0.10, 90% CI [0.00, 0.35]), as shown in Tables 3, 4 and 5 and Figure 5B.
Table 1. Demographic characteristics of the sample.

                                     Continuous delivery      Burst delivery           Statistical difference
Mean age [SD]                        22.02 [2.55]             22.00 [2.30]             t = −0.04, p = 0.968
Percentage of students               100%                     100%                     –
Gender ratio (female/male/diverse)   47/2/0 (95.9% female)    42/2/0 (95.4% female)    t = 0.109, p = 0.914
Mean baseline PSS score [SD]         21.80 [6.82]             22.46 [6.69]             t = 0.50, p = 0.640
Mean baseline STAI score [SD]        43.61 [9.39]             41.30 [9.41]             t = −1.19, p = 0.238
Mean baseline BDI-II score [SD]      13.59 [8.38]             13.96 [9.00]             t = 0.20, p = 0.841
Mean baseline CERQ score [SD]        9.51 [2.04]              9.09 [2.03]              t = −0.99, p = 0.324
Mean baseline ERQ score [SD]         21.46 [4.55]             19.55 [4.89]             t = −1.95, p = 0.054
Mean follow-up PSS score [SD]        18.65 [5.79]             18.64 [5.81]             see Tables 3, 4 and 5
Mean follow-up STAI score [SD]       41.06 [8.97]             40.14 [8.47]             see Tables 3, 4 and 5
Mean follow-up BDI-II score [SD]     10.04 [7.86]             8.87 [7.82]              see Tables 3, 4 and 5
Mean follow-up CERQ score [SD]       12.84 [3.30]             12.30 [3.30]             see Tables 3, 4 and 5
Mean follow-up ERQ score [SD]        23.35 [4.07]             20.17 [5.04]             see Tables 3, 4 and 5
Table 2. Adherence rates.

                                                 Continuous delivery        Burst delivery            Statistical difference
Completed automatically delivered surveys [SD]   49.22 [24.82] (out of 105) 35.09 [19.86] (out of 70) t = −2.86, p = 0.005
Completed user-initiated surveys [SD]            35.74 [29.38]              40.16 [25.98]             t = 0.59, p = 0.555
All completed surveys [SD]                       84.96 [30.69]              75.25 [31.92]             t = −1.65, p = 0.103
Figure 3. Perceived helpfulness of intervention.
Table 3. Results of interaction analyses.

Outcome                        Value (β)   SE     DF     t-value   p-value   Cohen's f [90% CI]

Perceived helpfulness
  Intercept                    3.626       0.17   1858   21.72     <0.001
  Time                         0.003       0.00   1858   0.55      0.581     0.00 [0.00, 0.02]
  Group                        0.506       0.23   91     2.20      0.031     0.21 [0.02, 0.39]
  Time × Group                 −0.056      0.00   1858   −0.85     0.340     0.02 [0.00, 0.06]

Perceived difficulty
  Intercept                    1.191       0.04   1858   44.74     <0.001
  Time                         −0.006      0.00   1858   −4.19     <0.001    0.05 [0.01, 0.09]
  Group                        −0.067      0.06   91     −1.14     0.258     0.04 [0.00, 0.19]
  Time × Group                 0.007       0.00   1858   3.61      0.001     0.08 [0.05, 0.12]

Perceived stress (PSS)
  Intercept                    4.680       0.11   91     43.35     <0.001
  Time                         −0.414      0.14   91     −3.02     0.003     0.42 [0.24, 0.60]
  Group                        −0.073      0.15   91     −0.49     0.623     0.03 [0.00, 0.19]
  Time × Group                 0.075       0.19   91     0.40      0.692     0.04 [0.00, 0.21]

Anxiety symptoms (STAI)
  Intercept                    5.33        0.05   91     117.34    <0.001
  Time                         −0.037      0.04   91     −0.83     0.411     0.21 [0.02, 0.39]
  Group                        0.080       0.06   91     1.28      0.205     0.11 [0.00, 0.28]
  Time × Group                 −0.048      0.06   91     −0.78     0.440     0.08 [0.00, 0.25]

Depressive symptoms (BDI-II)
  Intercept                    3.539       0.20   91     18.00     <0.001
  Time                         −0.866      0.19   91     −4.60     <0.001    0.59 [0.40, 0.78]
  Group                        −0.074      0.27   91     −0.27     0.786     0.02 [0.00, 0.17]
  Time × Group                 0.256       0.26   91     0.99      0.325     0.10 [0.00, 0.28]

Reappraisal (CERQ)
  Intercept                    2.996       0.06   91     48.20     <0.001
  Time                         0.479       0.07   91     6.53      <0.001    1.01 [0.79, 1.22]
  Group                        0.069       0.09   91     0.80      0.424     0.11 [0.00, 0.28]
  Time × Group                 0.010       0.10   91     0.10      0.920     0.01 [0.00, 0.10]

Reappraisal (ERQ)
  Intercept                    19.549      0.70   91     27.98     <0.001
  Time                         0.621       0.78   91     0.80      0.427     0.25 [0.08, 0.43]
  Group                        1.913       0.96   91     1.99      0.050     0.33 [0.16, 0.51]
  Time × Group                 1.267       1.07   91     1.18      0.241     0.12 [0.00, 0.30]
Table 4. Main effect of time for continuous delivery.

Outcome                        Value (β)   SE     DF    t-value   p-value   Cohen's f [90% CI]

Perceived helpfulness                                                       0.02 [0.00, 0.07]
  Intercept                    4.131       0.14   958   30.38     <0.001
  Time                         −0.002      0.00   958   −0.67     0.503

Perceived difficulty                                                        0.03 [0.00, 0.08]
  Intercept                    1.851       0.04   958   47.37     <0.001
  Time                         0.001       0.00   958   0.95      0.343

Perceived stress (PSS)                                                      0.36 [0.11, 0.61]
  Intercept                    4.607       0.11   48    44.81     <0.001
  Time                         −0.339      0.13   48    −2.51     0.015

Anxiety symptoms (STAI)                                                     0.39 [0.14, 0.64]
  Intercept                    5.414       0.04   48    125.74    <0.001
  Time                         −0.085      0.03   48    −2.71     0.009

Depressive symptoms (BDI-II)                                                0.51 [0.25, 0.76]
  Intercept                    3.465       0.19   48    18.23     <0.001
  Time                         −0.609      0.17   48    −3.51     0.001

Reappraisal (CERQ)                                                          0.97 [0.68, 1.26]
  Intercept                    3.064       0.06   48    52.46     <0.001
  Time                         0.489       0.07   48    6.75      <0.001

Reappraisal (ERQ)                                                           0.37 [0.12, 0.61]
  Intercept                    21.462      0.62   48    34.80     <0.001
  Time                         1.888       0.73   48    2.57      0.013
Beck Depression Inventory II
The intraclass correlation coefficient was significant (i.e. ICC(1) = 0.40, p < 0.001, ICC(2) = 0.57). Due to the non-normal distribution of residuals, a data transformation was performed, and visual inspection revealed that the best fit was the model with the square root-transformed data. The Group × Time (baseline vs. follow-up) interaction was not significant (i.e. β = 0.256, p = 0.325, Cohen's f = 0.10, 90% CI [0.00, 0.28]). There was a significant effect of Time in both the CD and BD groups (i.e. β = −0.609, p < 0.001, Cohen's f = 0.51, 90% CI [0.25, 0.76] and β = −0.866, p < 0.001, Cohen's f = 0.68, 90% CI [0.40, 0.96], respectively), meaning that both groups experienced a decrease in depressive symptoms over the course of the study, as shown in Tables 3, 4 and 5 and Figure 5C.
Positive Reappraisal scale, Cognitive Emotion Regulation Questionnaire
The intraclass correlation coefficient was not significant (i.e. ICC(1) = 0.04, p = 0.658, ICC(2) = −0.08), indicating a low heterogeneity of the scores. Due to the non-normal distribution of residuals, a data transformation was performed, and visual inspection revealed that the best fit was the model with the square root-transformed data. The Group × Time (baseline vs. follow-up) interaction was not significant (i.e. β = 0.010, p = 0.920, Cohen's f = 0.01, 90% CI [0.00, 0.10]). There was a significant effect of Time in both the CD and BD groups (i.e. β = 0.489, p < 0.001, Cohen's f = 0.97, 90% CI [0.68, 1.26] and β = 0.291, p < 0.001, Cohen's f = 1.03, 90% CI [0.71, 1.33], respectively), meaning that both groups experienced an increase in the tendency to use PCR over the course of the study, as shown in Tables 3, 4 and 5 and Figure 5D.

Table 5. Main effect of time for burst delivery.

Outcome                        Value (β)   SE     DF    t-value   p-value   Cohen's f [90% CI]

Perceived helpfulness                                                       0.02 [0.00, 0.07]
  Intercept                    3.626       0.19   900   19.06     <0.001
  Time                         0.003       0.00   900   0.54      0.590

Perceived difficulty                                                        0.13 [0.07, 0.18]
  Intercept                    1.919       0.04   900   42.75     <0.001
  Time                         −0.007      0.00   900   −3.86     <0.001

Perceived stress (PSS)                                                      0.51 [0.25, 0.78]
  Intercept                    22.455      0.94   43    23.77     <0.001
  Time                         −3.818      1.13   43    −3.38     0.002

Anxiety symptoms (STAI)                                                     0.10 [0.00, 0.35]
  Intercept                    5.334       0.05   43    117.28    <0.001
  Time                         −0.037      0.06   43    −0.67     0.505

Depressive symptoms (BDI-II)                                                0.68 [0.40, 0.96]
  Intercept                    3.539       0.19   43    18.43     <0.001
  Time                         −0.866      0.19   43    −4.47     <0.001

Reappraisal (CERQ)                                                          1.03 [0.71, 1.33]
  Intercept                    2.181       0.04   43    55.54     <0.001
  Time                         0.291       0.04   43    6.72      <0.001

Reappraisal (ERQ)                                                           0.12 [0.00, 0.37]
  Intercept                    19.550      0.75   43    26.11     <0.001
  Time                         0.621       0.78   43    0.79      0.432
Cognitive Reappraisal scale, Emotion Regulation Questionnaire
The intraclass correlation coefficient was significant (i.e. ICC(1) = 0.40, p < 0.001, ICC(2) = 0.57). The Group × Time (baseline vs. follow-up) interaction was again not significant (i.e. β = 1.267, p = 0.241, Cohen's f = 0.12, 90% CI [0.00, 0.30]). There was a significant effect of Time in the CD group (i.e. β = 1.888, p = 0.013, Cohen's f = 0.37, 90% CI [0.12, 0.61]), indicating an increase in the tendency to use reappraisal. By contrast, there was no significant effect of Time in the BD group (i.e. β = 0.621, p = 0.432, Cohen's f = 0.12, 90% CI [0.00, 0.37]), as presented in Tables 3, 4 and 5 and Figure 5E.

Figure 5. Mental health outcomes of intervention.
Discussion
In this study, we investigated differences in the effectiveness of the continuous versus the burst delivery design of digital mental health interventions. To the best of our knowledge, this is one of the first studies comparing the following outcomes between burst and continuous intervention delivery designs of a mobile app employing a core CBT technique and an important resilience factor, positive cognitive reappraisal: (a) adherence to the EMI; (b) changes in the perceived helpfulness and difficulty of the intervention over the course of the study; (c) changes in mental health indices (i.e. depressive and anxiety symptoms as well as perceived stress in baseline versus follow-up assessments); and (d) self-reported target engagement (i.e. changes in the tendency to use PCR in baseline versus follow-up assessments).
Although the BD group did not receive reminders to use the app for 1 week during the study, there was no statistically significant difference in the overall adherence rate (i.e. the number of completed surveys) between the groups. There were also no significant differences between the groups in any of the mental health outcomes (i.e. perceived stress, anxiety symptoms, and depressive symptoms) or in the two scales measuring the tendency to use PCR. There was a significant group difference in the perceived difficulty of the intervention: generating PCRs became significantly easier over time for the BD group, whereas no such change was observed in the CD group. Compared to the BD group, the CD group consistently assessed the intervention as more helpful, irrespective of time. The correlation analysis revealed a modest association whereby reappraisals that were easier for participants to generate were also perceived as more helpful, especially in the BD group.
These findings suggest that the burst design may be just as effective as the continuous design for digital mental health interventions supporting mental well-being, at least for reappraisal-based EMIs. Across most of the measured outcomes, there were either no significant differences between the groups or only marginal disparities in effect sizes, but with reduced participant burden in the burst delivery group. The sole significant difference observed over the course of the study was in the perceived difficulty of the intervention, which showed a notable decrease only in the BD group.
One of the putative mechanisms induced by the burst delivery of digital interventions may be the sense of agency in the intervention process. The sense of agency refers to the awareness that individuals have control over their actions and thoughts38 and over the consequences of these actions.39 Allowing the BD group participants to self-initiate the intervention without reminders in the second week of the study may have enhanced their sense of agency. Enhancing the client's agency is indeed a key concept in psychotherapy, and many psychotherapeutic protocols define the improvement in the mental well-being of the client as a result of mobilizing their agency and using interventions to heal themselves.40 Empirical evidence suggests that increases in the sense of agency during the therapeutic process are related to improvements in mental health outcomes.41 However, little is known about the role of the sense of agency in digital mental health interventions. Existing studies suggest that mHealth EMIs can indeed provide a sense of agency that supports an active role in managing depression, anxiety, and somatoform disorders in clinical populations15 and increase the sense of agency after discharge from the hospital.42 However, research on preventative EMIs is scarce; hence, we cannot directly attribute the changes in the BD outcomes to this mechanism.
Another explanation may be that participants who were less burdened by the app were more internally motivated to be involved in the process, thereby performing better in the PCR component and achieving somewhat better outcomes in terms of the perceived difficulty of the intervention. In contrast, the continuous reminders in the CD group may have fostered external motivation, possibly leading to a habitual completion of the PCR component without genuine engagement. Previous reports suggested that low treatment motivation predicts dropout and therapeutic success in anxiety disorders.43 Participants with anxiety symptoms may thus respond better to continuous reminders about the treatment than to the burst delivery of interventions, whose use depends strongly on the participants' motivation to continue the process. Accordingly, future research could investigate individual differences in the response patterns to the two delivery designs. This could be key to finding the most suitable delivery approach tailored to individuals facing distinct mental health challenges.
This study has a few limitations, including a homogeneous, mostly female sample with a low tendency to use PCR, warranting replications in diverse general populations and clinical samples. We observed not only a greater interest in the interventional study among female participants, which is common in psychological research, but also a higher tendency to use PCR, as indexed with the CERQ, among men. As a result, many male participants did not fulfil the inclusion criteria of this study. The study design could be improved by introducing longer intervention periods with alternating burst and "quiet" periods to assess the stability of the burst delivery effects. Additionally, more research is needed to validate the use of burst delivery in digital interventions targeting alternative domains and therapeutic techniques.
Conclusion
The current study showed that the burst delivery design holds promise for maintaining the effectiveness of digital mental health interventions while alleviating participant burden, thereby offering a potential improvement over the continuous delivery method. Although this addresses one of the major challenges in the mHealth field, more studies are needed to identify the mechanisms underlying these differences. Nonetheless, our findings can inform future research on the design of digital mental health interventions, as well as feasibility studies and randomized clinical trials, with less participant effort.
Acknowledgments: The authors would like to thank students and
interns Letizia Dassie, Mattia Mantovani, Gesine Schrade, and
Clemens von Wulffen for their involvement in the data
collection process. Thank you to Zuzana Kasanova and Judith
van Leeuwen for their contribution to the development of ReApp.
Contributorship: MM, LS, RK, EH, and BK were involved in the
development of ReApp. MM and BK conceptualized the study.
MM was responsible for data collection and, in consultation
with HB and BK, had a primary role in the statistical
conceptualization and analysis. MM wrote the first version of
the manuscript. All authors contributed to and approved the final
version of the manuscript.
Declaration of conflicting interests: The authors declared no
potential conflicts of interest with respect to the research,
authorship, and/or publication of this article.
Ethical approval: This study was performed in line with the
principles of the Declaration of Helsinki. Approval was granted
by the Ethics Committee of the Faculty of Arts and Social
Sciences of the University of Zurich (#20.6.11). Informed
consent was obtained from all individual participants included in
the study. ClinicalTrials.gov Identifier NCT05623826.
Funding: The authors disclosed receipt of the following financial
support for the research, authorship, and/or publication of this
article: European Union’s Horizon 2020 research and innovation
program under grant agreement No. 777084 (DynaMORE
project). MM also received support from the UZH Foundation
through its Digital Entrepreneur Fellowship.
Data availability: The self-report data used to support the findings of this study are restricted by the Ethics Committee of the Faculty of Arts and Social Sciences of the University of Zurich in order to protect participants' privacy. Data are available from the corresponding author upon reasonable request.
Guarantor: MM
ORCID iD: Marta Anna Marciniak https://orcid.org/0000-0003-4301-3269
Supplemental material: Supplemental material for this article is
available online.
References
1. Ang WHD, Chew HSJ, Dong J, et al. Digital training for building resilience: systematic review, meta-analysis, and meta-regression. Stress Health 2022; 38(5): 848–869. doi:10.1002/smi.3154
2. Balaskas A, Schueller SM, Cox AL, et al. Ecological momentary interventions for mental health: a scoping review. PLOS ONE 2021; 16: e0248152.
3. Lecomte T, et al. Mobile apps for mental health issues: meta-review of meta-analyses. JMIR MHealth UHealth 2020; 8: e17458.
4. Coppersmith DDL, et al. Just-in-time adaptive interventions for suicide prevention: promise, challenges, and future directions. Psychiatry 2022; 85: 317–333.
5. Marciniak MA, et al. Standalone smartphone cognitive behavioral therapy-based ecological momentary interventions to increase mental health: narrative review. JMIR MHealth UHealth 2020; 8: e19836.
6. Mathiasen K, Andersen T, Riper H, et al. Blended CBT versus face-to-face CBT: a randomised non-inferiority trial. BMC Psychiatry 2016; 16: 432.
7. Heron KE and Smyth JM. Ecological momentary interventions: incorporating mobile technology into psychosocial and health behaviour treatments. Br J Health Psychol 2010; 15: 1–39.
8. Gutierrez LJ, Castro LA and Banos O. Challenges and opportunities for designing technology-based ecological momentary interventions (EMIs) in mental health. In: Bravo J, Ochoa S and Favela J (eds) Proceedings of the International Conference on Ubiquitous Computing & Ambient Intelligence (UCAmI 2022). Cham: Springer International Publishing, 2023, pp. 888–899. doi:10.1007/978-3-031-21333-5_88
9. Paganini S, et al. Stress management apps: systematic search and multidimensional assessment of quality and characteristics. JMIR MHealth UHealth 2023; 11: e42415.
10. Jakob R, et al. Factors influencing adherence to mHealth apps for prevention or management of noncommunicable diseases: systematic review. J Med Internet Res 2022; 24: e35371.
11. Wu A, et al. Smartphone apps for depression and anxiety: a systematic review and meta-analysis of techniques to increase engagement. NPJ Digit Med 2021; 4: 20.
12. Yang Q and Van Stee SK. The comparative effectiveness of mobile phone interventions in improving health outcomes: meta-analytic review. JMIR MHealth UHealth 2019; 7: e11244.
13. Anastasiadou D, Folkvord F and Lupiañez-Villanueva F. A systematic review of mHealth interventions for the support of eating disorders. Eur Eat Disord Rev 2018; 26: 394–416.
14. Loo Gee B, Griffiths KM and Gulliver A. Effectiveness of mobile technologies delivering ecological momentary interventions for stress and anxiety: a systematic review. J Am Med Inform Assoc 2016; 23: 221–229.
15. Patel S, et al. The acceptability and usability of digital health interventions for adults with depression, anxiety, and somatoform disorders: qualitative systematic review and meta-synthesis. J Med Internet Res 2020; 22: e16228.
16. Versluis A, Verkuil B, Spinhoven P, et al. Changing mental health and positive psychological well-being using ecological momentary interventions: a systematic review and meta-analysis. J Med Internet Res 2016; 18(6): e152. doi:10.2196/jmir.5642
17. Nahum-Shani I, Smith SN, Spring BJ, et al. Just-in-time adaptive interventions (JITAIs) in mobile health: key components and design principles for ongoing health behavior support. Ann Behav Med 2016; 52(6): 446–462. doi:10.1007/s12160-016-9830-8
18. Lei H, Nahum-Shani I, Lynch K, et al. A 'SMART' design for building individualized treatment sequences. Annu Rev Clin Psychol 2012; 8: 21–48.
19. Sliwinski MJ. Measurement-burst designs for social health research. Soc Personal Psychol Compass 2008; 2: 245–261.
20. Bögemann SA, et al. Investigating two mobile just-in-time adaptive interventions to foster psychological resilience: research protocol of the DynaM-INT study. BMC Psychol 2023; 11: 245.
21. Papp KV, Walsh SJ and Snyder PJ. Immediate and delayed effects of cognitive interventions in healthy elderly: a review of current literature and future directions. Alzheimers Dement 2009; 5: 50–60.
22. Marciniak MA, et al. ReApp – an mHealth app increasing reappraisal: results from two randomized controlled trials. Preprint at https://doi.org/10.31234/osf.io/u4f5e (2023).
23. Kalisch R. The functional neuroanatomy of reappraisal: time matters. Neurosci Biobehav Rev 2009; 33: 1215–1226.
24. Nowlan JS, Wuthrich VM and Rapee RM. Positive reappraisal in older adults: a systematic literature review. Aging Ment Health 2015; 19: 475–484.
25. Kalisch R, Müller MB and Tüscher O. A conceptual framework for the neurobiological study of resilience. Behav Brain Sci 2015; 38: e92.
26. Riepenhausen A, et al. Positive cognitive reappraisal in stress resilience, mental health, and well-being: a comprehensive systematic review. Emot Rev 2022; 14: 310–331.
27. Gross JJ and John OP. Individual differences in two emotion regulation processes: implications for affect, relationships, and well-being. J Pers Soc Psychol 2003; 85: 348–362.
28. Vaessen T, et al. ACT in daily life in early psychosis: an ecological momentary intervention approach. Psychosis 2019; 11: 93–104.
29. Wackerhagen C, et al. Dynamic modelling of mental resilience in young adults: protocol for a longitudinal observational study (DynaM-OBS). JMIR Res Protoc 2023; 12: e39817.
30. Kühner C, Bürger C, Keller F, et al. [Reliability and validity of the Revised Beck Depression Inventory (BDI-II). Results from German samples]. Nervenarzt 2007; 78: 651–656.
31. Laux L, Glanzmann P, Schaffner P, et al. Das State–Trait-Angstinventar. Theoretische Grundlagen und Handanweisung. Weinheim, 1981.
32. Klein EM, et al. The German version of the Perceived Stress Scale: psychometric characteristics in a representative German community sample. BMC Psychiatry 2016; 16: 159.
33. Garnefski N and Kraaij V. The Cognitive Emotion Regulation Questionnaire: psychometric features and prospective relationships with depression and anxiety in adults. Eur J Psychol Assess 2007; 23: 141–149.
34. Lüdecke D. sjstats: Collection of convenient functions for common statistical computations. (2021).
35. Gad AM and Abdelkhalek RHM. Imputation methods for longitudinal data: a comparative study. Int J Stat Distrib Appl 2017; 3: 72.
36. Moeur M and Stage AR. Most similar neighbor: an improved sampling inference procedure for natural resource planning. For Sci 1995; 41: 337–359.
37. Pinheiro J, et al. nlme: Linear and nonlinear mixed effects models. (2021).
38. Gallagher S. Philosophical conceptions of the self: implications for cognitive science. Trends Cogn Sci 2000; 4: 14–21.
39. Moore JW. What is the sense of agency and why does it matter? Front Psychol 2016; 7. doi:10.3389/fpsyg.2016.01272
40. Williams DC and Levitt HM. Principles for facilitating agency in psychotherapy. Psychother Res 2007; 17: 66–82.
41. Adler JM. Living into the story: agency and coherence in a longitudinal study of narrative identity development and mental health over the course of psychotherapy. J Pers Soc Psychol 2012; 102: 367–389.
42. Duffy A, Christie GJ and Moreno S. The challenges toward real-world implementation of digital health design approaches: narrative review. JMIR Hum Factors 2022; 9: e35693.
43. Taylor S, Abramowitz JS and McKay D. Non-adherence and non-response in the treatment of anxiety disorders. J Anxiety Disord 2012; 26: 583–589.