Noninferiority and equivalence tests in sequential, multiple assignment, randomized trials (SMARTs)

Adaptive interventions (AIs) are increasingly popular in the behavioral sciences. An AI is a sequence of decision rules that specify for whom and under what conditions different intervention options should be offered, in order to address the changing needs of individuals as they progress over time. The sequential, multiple assignment, randomized trial (SMART) is a novel trial design that was developed to aid in empirically constructing effective AIs. The sequential randomizations in a SMART often yield multiple AIs that are embedded in the trial by design, and many SMARTs are motivated by scientific questions pertaining to the comparison of such embedded AIs. Existing data-analytic methods and sample-size planning resources for SMARTs are suitable only for superiority testing, namely for testing whether one embedded AI yields better primary outcomes on average than another. However, noninferiority/equivalence testing methods are also needed, because AIs are often motivated by the need to deliver support/care in a less costly or less burdensome manner while still yielding benefits that are equivalent or noninferior to those produced by a more costly/burdensome standard of care. Here, we develop data-analytic methods and sample-size formulas for SMARTs testing the noninferiority or equivalence of one AI relative to another. Sample size and power considerations are discussed with supporting simulations, and online resources for sample-size planning are provided. A simulated data analysis shows how to test noninferiority and equivalence hypotheses with SMART data. For illustration, we use an example from a SMART in the area of health psychology aiming to develop an AI for promoting weight loss among overweight/obese adults. (PsycINFO Database Record (c) 2019 APA, all rights reserved)
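To make the hypothesis-testing setup concrete, the following is a minimal sketch of a generic one-sided noninferiority z-test for a difference in mean outcomes between two interventions. It is not the authors' SMART-specific estimator (which must account for the embedded AIs and sequential randomization); the function name, margin, and all numbers below are hypothetical illustrations, assuming higher outcomes are better.

```python
# Generic one-sided noninferiority z-test for a difference in means.
# Hypothetical illustration only; not the SMART-specific method from the paper.
from statistics import NormalDist
import math

def noninferiority_z_test(mean_new, mean_std, var_new, var_std,
                          n_new, n_std, margin, alpha=0.05):
    """Test H0: mu_new - mu_std <= -margin (new AI is inferior)
    vs  H1: mu_new - mu_std >  -margin (new AI is noninferior).
    Higher outcomes are assumed better; `margin` > 0 is the
    prespecified noninferiority margin."""
    se = math.sqrt(var_new / n_new + var_std / n_std)
    z = (mean_new - mean_std + margin) / se
    p = 1.0 - NormalDist().cdf(z)      # one-sided p-value
    return z, p, p < alpha             # reject H0 -> conclude noninferiority

# Hypothetical example: the less burdensome AI nearly matches standard care.
z, p, noninferior = noninferiority_z_test(
    mean_new=4.8, mean_std=5.0, var_new=4.0, var_std=4.0,
    n_new=100, n_std=100, margin=1.0)
```

An equivalence test follows the same pattern by running two such one-sided tests (one against each side of the margin) and concluding equivalence only if both reject.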