Original Research Article
A flexible micro-randomized trial design and sample size considerations
Jing Xu1,2 , Xiaoxi Yan1, Caroline Figueroa3,4,
Joseph Jay Williams5,6,7,8,9,10 and Bibhas Chakraborty1,2,11,12
Statistical Methods in Medical Research
2023, Vol. 32(9) 1766–1783
© The Author(s) 2023
Article reuse guidelines:
sagepub.com/journals-permissions
DOI: 10.1177/09622802231188513
journals.sagepub.com/home/smm
Abstract
Technological advancements have made it possible to deliver mobile health interventions to individuals. A novel framework that has emerged from such advancements is the just-in-time adaptive intervention, which aims to suggest the right support to individuals when their needs arise. The micro-randomized trial design has been proposed recently to test the proximal effects of the components of these just-in-time adaptive interventions. However, the extant micro-randomized trial framework only considers components with a fixed number of categories, all added at the beginning of the study. We propose a more flexible micro-randomized trial design which allows the addition of more categories to the components during the study; note that the number and timing of the categories added during the study need to be fixed in advance. The proposed design is motivated by collaboration on the Diabetes and Mental Health Adaptive Notification Tracking and Evaluation study, which learns to deliver effective text messages to encourage physical activity among patients with diabetes and depression. We developed a new test statistic and the corresponding sample size calculator for the flexible micro-randomized trial using an approach similar to the generalized estimating equation for longitudinal data. Simulation studies were conducted to evaluate the sample size calculators, and an R Shiny application for the calculators was developed.
Keywords
mHealth, just-in-time adaptive intervention, micro-randomized trial, generalized estimating equation, longitudinal data
1 Introduction
Mobile health (mHealth) is a term used to refer to the practice of medicine and health supported by mobile or wearable devices1 that are increasingly indispensable in our daily lives. It provides convenient support to various health domains
1Centre for Quantitative Medicine, Duke-NUS Medical School, Singapore
2Programme in Health Services and Systems Research, Duke-NUS Medical School, Singapore, Singapore
3Faculty of Technology, Policy and Management, Delft University of Technology, The Netherlands
4School of Social Welfare, University of California, Berkeley, USA
5Department of Computer Science, University of Toronto, ON, Canada
6Department of Statistical Sciences, University of Toronto, ON, Canada
7Department of Psychology, University of Toronto, ON, Canada
8Vector Institute for Artificial Intelligence Faculty Affiliate, University of Toronto, ON, Canada
9Department of Mechanical and Industrial Engineering, University of Toronto, ON, Canada
10Department of Economics, University of Toronto, ON, Canada
11Department of Statistics and Data Science, National University of Singapore, Singapore
12Department of Biostatistics and Bioinformatics, Duke University, Durham, NC, USA
Corresponding author:
Jing Xu, Centre for Quantitative Medicine, Duke-NUS Medical School, 8 College Road Singapore 169857, Singapore.
Email: kenny.xu@duke-nus.edu.sg
... Cohn et al. (27) developed a sample size formula for MRTs with binary outcomes that can also be extended to count outcomes. In addition, Dempsey et al. (34) developed a stratified MRT and provided a sample size formula, and Xu et al. (138) proposed a flexible MRT design, allowing for the addition of intervention options during the study, and derived corresponding sample size formulas. Available software for sample size calculation includes an R Shiny app for MRTs with continuous outcomes, MRT-SS-Continuous (111); an R package for MRTs with binary outcomes, MRTSampleSizeBinary (136); and an R Shiny app for flexible MRTs, FlexiMRT-SS (138). Together, these tools can help researchers determine the appropriate number of participants needed for their specific MRT design, ensuring that the study is neither underpowered (too few participants to detect effects) nor wasteful of resources (too many participants). ...
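As a rough illustration of the kind of computation such sample size calculators perform, the sketch below solves for the number of participants needed to detect a constant proximal effect on a continuous outcome using a simple normal-approximation argument under working independence. This is a hypothetical back-of-envelope version, not the GEE-based formula derived in the paper; the function name `mrt_sample_size` and all inputs are illustrative.

```python
from math import ceil
from statistics import NormalDist

def mrt_sample_size(delta, sigma, n_points, p, alpha=0.05, power=0.80):
    """Back-of-envelope MRT sample size (illustrative, not the paper's formula).

    Assumes a constant proximal effect `delta` on a continuous outcome with
    residual SD `sigma`, tested two-sided at level `alpha`, with `n_points`
    decision points per person, each randomized with probability `p`.
    Treating observations as independent, the variance of the per-person
    effect estimate is roughly sigma^2 / (n_points * p * (1 - p)), which
    yields the classical two-sample-style formula below.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)       # critical value for the test
    z_beta = z.inv_cdf(power)                # quantile for the target power
    var_per_person = sigma**2 / (n_points * p * (1 - p))
    return ceil((z_alpha + z_beta) ** 2 * var_per_person / delta**2)

# Example: standardized effect 0.1, 210 decision points (e.g., 5 per day
# for 42 days), randomization probability 0.5.
n = mrt_sample_size(delta=0.1, sigma=1.0, n_points=210, p=0.5)
```

Because every decision point contributes information, even a small standardized effect can be detectable with modest participant numbers; the real calculators additionally handle time-varying effects, availability, and within-person correlation.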
Article
This review explores the transformative potential of just-in-time adaptive interventions (JITAIs) as a scalable solution for addressing health disparities in underserved populations. JITAIs, delivered via mobile health technologies, could provide context-aware personalized interventions based on real-time data to address public health challenges such as addiction, chronic disease, and mental health management. JITAIs can dynamically adjust intervention strategies, enhancing accessibility and engagement for marginalized communities. We highlight the utility of JITAIs in reducing opportunity costs associated with traditional in-person health interventions. Examples from various health domains demonstrate the adaptability of JITAIs in tailoring interventions to meet diverse needs. The review also emphasizes the need for community involvement, robust evaluation frameworks, and ethical considerations in implementing JITAIs, particularly in low- and middle-income countries. Sustainable funding models and technological innovations are necessary to ensure equitable access and effectively scale these interventions. By bridging the gap between research and practice, JITAIs could improve health outcomes and reduce disparities in vulnerable populations.
Article
Given the increasing demand for online learning at the tertiary level, there currently exists a need to modify or develop instructional design (ID) models/approaches that can effectively facilitate the collaboration between learning designers and teachers, as well as to research the effectiveness of these models/approaches. Against this backdrop, adopting a design-based research approach, we tested a practical ID approach built on two prior models: rapid prototyping and collaborative course development. Accordingly, a 2-week rapid development studio—an agile, intensive, iterative ID process—was arranged. Data from multiple sources were gleaned during the study to generate a comprehensive and in-depth understanding of the proposed approach. Overall, results suggest that the approach is effective for developing online courses within a limited time frame and was positively perceived by both course instructors and learning designers. Moreover, practical tips for replicating the process in other contexts are also shared. It is our hope that the study will stimulate further exploration of alternative ID models/approaches to improve online course design efficacy in other higher education institutions.
Article
Background Low physical activity is an important risk factor for common physical and mental disorders. Physical activity interventions delivered via smartphones can help users maintain and increase physical activity, but outcomes have been mixed. Purpose Here we assessed the effects of sending daily motivational and feedback text messages in a microrandomized clinical trial on changes in physical activity from one day to the next in a student population. Methods We included 93 participants who used a physical activity app, “DIAMANTE” for a period of 6 weeks. Every day, their phone pedometer passively tracked participants’ steps. They were microrandomized to receive different types of motivational messages, based on a cognitive-behavioral framework, and feedback on their steps. We used generalized estimating equation models to test the effectiveness of feedback and motivational messages on changes in steps from one day to the next. Results Sending any versus no text message initially resulted in an increase in daily steps (729 steps, p = .012), but this effect decreased over time. A multivariate analysis evaluating each text message category separately showed that the initial positive effect was driven by the motivational messages though the effect was small and trend-wise significant (717 steps; p = .083), but not the feedback messages (−276 steps, p = .4). Conclusion Sending motivational physical activity text messages based on a cognitive-behavioral framework may have a positive effect on increasing steps, but this decreases with time. Further work is needed to examine using personalization and contextualization to improve the efficacy of text-messaging interventions on physical activity outcomes. ClinicalTrials.gov Identifier NCT04440553.
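The GEE analysis described above can be sketched in miniature. For a Gaussian outcome with a working-independence correlation structure, the GEE point estimate reduces to ordinary least squares, paired with a cluster-robust (sandwich) variance that accounts for repeated measures within each participant. The snippet below is a minimal sketch on simulated data (hypothetical, not the DIAMANTE data or analysis code); the sample sizes, effect size, and variable names are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated MRT-style data: 500 participants, 20 decision points each,
# message randomized with probability 0.5, true proximal effect +0.5.
n_id, n_t = 500, 20
a = rng.integers(0, 2, size=(n_id, n_t))        # treatment indicator
y = 0.5 * a + rng.normal(size=(n_id, n_t))      # outcome (e.g., scaled step change)

# Design matrix: intercept + treatment, stacked over persons and times.
X = np.column_stack([np.ones(n_id * n_t), a.ravel()])
Y = y.ravel()
ids = np.repeat(np.arange(n_id), n_t)

# Working-independence GEE for a Gaussian outcome reduces to OLS...
beta = np.linalg.solve(X.T @ X, X.T @ Y)

# ...with a cluster-robust sandwich variance: sum the score contributions
# within each participant, then wrap in the inverse "bread" matrix.
resid = Y - X @ beta
bread_inv = np.linalg.inv(X.T @ X)
meat = np.zeros((2, 2))
for i in range(n_id):
    idx = ids == i
    u = X[idx].T @ resid[idx]
    meat += np.outer(u, u)
se = np.sqrt(np.diag(bread_inv @ meat @ bread_inv))
```

In practice one would use a GEE implementation directly (e.g., an R or statsmodels GEE routine), but the sandwich construction above is the reason the resulting inference is valid despite within-person correlation.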
Article
Background: Social distancing is a crucial intervention to slow down person-to-person transmission of COVID-19. However, social distancing has negative consequences, including increases in depression and anxiety. Digital interventions, such as text messaging, can provide accessible support on a population-wide scale. We developed text messages in English and Spanish to help individuals manage their depressive mood and anxiety during the COVID-19 pandemic. Objective: In a two-arm randomized controlled trial, we aim to examine the effect of our 60-day text messaging intervention. Additionally, we aim to assess whether the use of machine learning to adapt the messaging frequency and content improves the effectiveness of the intervention. Finally, we will examine the differences in daily mood ratings between the message categories and time windows. Methods: The messages were designed within two different categories: behavioral activation and coping skills. Participants will be randomized into (1) a random messaging arm, where message category and timing will be chosen with equal probabilities, and (2) a reinforcement learning arm, with a learned decision mechanism for choosing the messages. Participants in both arms will receive one message per day within three different time windows and will be asked to provide their mood rating 3 hours later. We will compare self-reported daily mood ratings; self-reported depression, using the 8-item Patient Health Questionnaire; and self-reported anxiety, using the 7-item Generalized Anxiety Disorder scale at baseline and at intervention completion. Results: The Committee for the Protection of Human Subjects at the University of California Berkeley approved this study in April 2020 (No. 2020-04-13162). Data collection began in April 2020 and will run to April 2021. As of August 24, 2020, we have enrolled 229 participants. 
We plan to submit manuscripts describing the main results of the trial and results from the microrandomized trial for publication in peer-reviewed journals and for presentations at national and international scientific meetings. Conclusions: Results will contribute to our knowledge of effective psychological tools to alleviate the negative effects of social distancing and the benefit of using machine learning to personalize digital mental health interventions. Trial Registration: ClinicalTrials.gov NCT04473599; https://clinicaltrials.gov/ct2/show/NCT04473599
Article
Introduction Depression and diabetes are highly disabling diseases with a high prevalence and high rate of comorbidity, particularly in low-income ethnic minority patients. Though comorbidity increases the risk of adverse outcomes and mortality, most clinical interventions target these diseases separately. Increasing physical activity might be effective to simultaneously lower depressive symptoms and improve glycaemic control. Self-management apps are a cost-effective, scalable and easy access treatment to increase physical activity. However, cutting-edge technological applications often do not reach vulnerable populations and are not tailored to an individual’s behaviour and characteristics. Tailoring of interventions using machine learning methods likely increases the effectiveness of the intervention. Methods and analysis In a three-arm randomised controlled trial, we will examine the effect of a text-messaging smartphone application to encourage physical activity in low-income ethnic minority patients with comorbid diabetes and depression. The adaptive intervention group receives messages chosen from different messaging banks by a reinforcement learning algorithm. The uniform random intervention group receives the same messages, but chosen from the messaging banks with equal probabilities. The control group receives a weekly mood message. We aim to recruit 276 adults from primary care clinics aged 18–75 years who have been diagnosed with current diabetes and show elevated depressive symptoms (Patient Health Questionnaire depression scale-8 (PHQ-8) >5). We will compare passively collected daily step counts, self-report PHQ-8 and most recent haemoglobin A1c from medical records at baseline and at intervention completion at 6-month follow-up. Ethics and dissemination The Institutional Review Board at the University of California San Francisco approved this study (IRB: 17-22608). 
We plan to submit manuscripts describing our user-designed methods and testing of the adaptive learning algorithm and will submit the results of the trial for publication in peer-reviewed journals and presentations at (inter)-national scientific meetings. Trial registration number NCT03490253 ; pre-results.
Article
Multiarm clinical trials, which compare several experimental treatments against control, are frequently recommended due to their efficiency gain. In practice, all potential treatments may not be ready to be tested in a phase II/III trial at the same time. It has become appealing to allow new treatment arms to be added into ongoing clinical trials using a “platform” trial approach. To the best of our knowledge, many aspects of when to add arms to an existing trial have not been explored in the literature. Most works on adding arm(s) assume that a new arm is opened whenever a new treatment becomes available. This strategy may prolong the overall duration of a study or cause reduction in marginal power for each hypothesis if the adaptation is not well accommodated. Within a two‐stage trial setting, we propose a decision‐theoretic framework to investigate when to add or not to add a new treatment arm based on the observed stage one treatment responses. To account for the different prospects of multiarm studies, we define utility in two different ways: one for a trial that aims to maximise the number of rejected hypotheses; the other for a trial that would declare a success when at least one hypothesis is rejected from the study. Our framework shows that it is not always optimal to add a new treatment arm to an existing trial. We illustrate the framework with a case study based on a completed trial on knee osteoarthritis.
Preprint
There is a growing interest in leveraging the prevalence of mobile technology to improve health by delivering momentary, contextualized interventions to individuals' smartphones. A just-in-time adaptive intervention (JITAI) adjusts to an individual's changing state and/or context to provide the right treatment, at the right time, in the right place. Micro-randomized trials (MRTs) allow for the collection of data which aid in the construction of an optimized JITAI by sequentially randomizing participants to different treatment options at each of many decision points throughout the study. Often, this data is collected passively using a mobile phone. To assess the causal effect of treatment on a near-term outcome, care must be taken when designing the data collection system to ensure it is of appropriately high quality. Here, we make several recommendations for collecting and managing data from an MRT. We provide advice on selecting which features to collect and when, choosing between "agents" to implement randomization, identifying sources of missing data, and overcoming other novel challenges. The recommendations are informed by our experience with HeartSteps, an MRT designed to test the effects of an intervention aimed at increasing physical activity in sedentary adults. We also provide a checklist which can be used in designing a data collection system so that scientists can focus more on their questions of interest, and less on cleaning data.
Article
Recent scholarship argues that experimentation should be the organizing principle for entrepreneurial strategy. Experimentation leads to organizational learning, which drives improvements in firm performance. We investigate this proposition by exploiting the time-varying adoption of A/B testing technology, which has drastically reduced the cost of testing business ideas. Our results provide the first evidence on how digital experimentation affects a large sample of high-technology start-ups using data that tracks their growth, technology use, and products. We find that, although relatively few firms adopt A/B testing, among those that do, performance improves by 30%–100% after a year of use. We then argue that this substantial effect and relatively low adoption rate arises because start-ups do not only test one-off incremental changes, but also use A/B testing as part of a broader strategy of experimentation. Qualitative insights and additional quantitative analyses show that experimentation improves organizational learning, which helps start-ups develop more new products, identify and scale promising ideas, and fail faster when they receive negative signals. These findings inform the literatures on entrepreneurial strategy, organizational learning, and data-driven decision making. This paper was accepted by Toby Stuart, entrepreneurship and innovation.
Article
Advances in wearables and digital technology now make it possible to deliver behavioral mobile health interventions to individuals in their everyday life. The micro-randomized trial is increasingly used to provide data to inform the construction of these interventions. In a micro-randomized trial, each individual is repeatedly randomized among multiple intervention options, often hundreds or even thousands of times, over the course of the trial. This work is motivated by multiple micro-randomized trials that have been conducted or are currently in the field, in which the primary outcome is a longitudinal binary outcome. The primary aim of such micro-randomized trials is to examine whether a particular time-varying intervention has an effect on the longitudinal binary outcome, often marginally over all but a small subset of the individual’s data. We propose the definition of causal excursion effect that can be used in such primary aim analysis for micro-randomized trials with binary outcomes. Under rather restrictive assumptions one can, based on existing literature, derive a semiparametric, locally efficient estimator of the causal effect. Starting from this estimator, we develop an estimator that can be used as the basis of a primary aim analysis under more plausible assumptions. Simulation studies are conducted to compare the estimators. We illustrate the developed methods using data from the micro-randomized trial, BariFit. In BariFit, the goal is to support weight maintenance for individuals who received bariatric surgery.
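The core idea behind the causal excursion effect can be sketched with a toy estimator. Because treatment in an MRT is randomized with a known probability at each decision point, the marginal effect on the log-relative-risk scale can be estimated by inverse-probability-weighted means. This is only an illustrative moment-based sketch under simplifying assumptions (constant randomization probability, full availability, no moderators), not the semiparametric, locally efficient estimator developed in the article; the simulation parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical MRT with a binary proximal outcome: 2000 participants,
# 10 decision points, treatment randomized with known probability p = 0.3.
n_id, n_t, p = 2000, 10, 0.3
A = rng.binomial(1, p, size=(n_id, n_t))
# True model: P(Y = 1 | A) = 0.2 * exp(0.4 * A), so the excursion effect
# on the log-relative-risk scale is 0.4.
Y = rng.binomial(1, 0.2 * np.exp(0.4 * A))

# Inverse-probability-weighted estimates of E[Y(1)] and E[Y(0)]...
mean_treated = np.mean(A * Y / p)
mean_control = np.mean((1 - A) * Y / (1 - p))
# ...and the excursion effect as their log ratio.
beta_hat = np.log(mean_treated / mean_control)
```

The weighting removes the imbalance created by unequal randomization probabilities; the article's estimator additionally allows time-varying probabilities, availability indicators, and nuisance models for efficiency.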
Article
Mobile health is a rapidly developing field in which behavioral treatments are delivered to individuals via wearables or smartphones to facilitate health-related behavior change. Micro-randomized trials (MRT) are an experimental design for developing mobile health interventions. In an MRT the treatments are randomized numerous times for each individual over the course of the trial. Along with assessing treatment effects, behavioral scientists aim to understand between-person heterogeneity in the treatment effect. A natural approach is the familiar linear mixed model. However, directly applying linear mixed models is problematic because potential moderators of the treatment effect are frequently endogenous; that is, they may depend on prior treatment. We discuss model interpretation and biases that arise in the absence of additional assumptions when endogenous covariates are included in a linear mixed model. In particular, when there are endogenous covariates, the coefficients no longer have the customary marginal interpretation. However, these coefficients still have a conditional-on-the-random-effect interpretation. We provide an additional assumption that, if true, allows scientists to use standard software to fit linear mixed models with endogenous covariates, and person-specific predictions of effects can be provided. As an illustration, we assess the effect of activity suggestion in the HeartSteps MRT and analyze the between-person treatment effect heterogeneity.
Article
The sequential multiple assignment randomized trial (SMART) is a design used to develop dynamic treatment regimes (DTRs). Given that DTRs are generally less well researched, pilot SMART studies are often necessary. One challenge in pilot SMART is to determine the sample size such that it is small yet meaningfully informative for future full‐fledged SMART. Here, we develop a precision‐based approach, where the calculated sample size confines the marginal mean outcome of a DTR within a prespecified margin of error. The sample size calculations will be presented for two‐stage SMARTs, and for various common outcome types.
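The precision-based idea above can be illustrated with a simple sketch: choose n so that the confidence interval for the mean outcome of one embedded DTR has half-width at most a prespecified margin, after accounting for the fact that only a fraction of participants, determined by the two stages of randomization, follow that DTR's treatment sequence. This is a hedged, hypothetical simplification (one DTR, known outcome SD, no stage-2 re-randomization of responders vs. non-responders), not the formula derived in the article; `smart_pilot_n` and its inputs are illustrative.

```python
from math import ceil
from statistics import NormalDist

def smart_pilot_n(sd, margin, p1, p2, conf=0.95):
    """Rough precision-based pilot SMART sample size (illustrative only).

    Chooses n so that a (conf)-level confidence interval for the mean
    outcome of one embedded DTR has half-width at most `margin`, when that
    DTR's treatment sequence is followed with probability p1 * p2 under the
    two randomization stages (so only about n * p1 * p2 participants are
    consistent with the DTR).
    """
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    n_per_dtr = (z * sd / margin) ** 2       # n needed among consistent participants
    return ceil(n_per_dtr / (p1 * p2))       # inflate for the randomization dilution

# Example: SD 1.0, margin of error 0.3, 1:1 randomization at both stages.
n = smart_pilot_n(sd=1.0, margin=0.3, p1=0.5, p2=0.5)
```

The inflation factor 1 / (p1 * p2) is what distinguishes SMART precision calculations from a single-arm pilot: halving either randomization probability doubles the required enrollment.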