The most common finding of any study of an intervention in education is "treatment effect heterogeneity". Schools use programs in different ways, for different purposes, and to different results. Understanding these differences is important for the practical goals of helping schools understand where and when specific actions will be beneficial and developing additional supports to help schools be successful. It is also important for the scientific goal of validating a program's theory of action. Typical studies are too small in scale to truly explore these different results across schools. The recent release of national school-level test data by the Stanford Education Data Archive, along with the vast scale of Leader in Me (LiM), provides an opportunity to explore the different ways that schools use LiM and how this leads to different outcomes. Underpinning this exploration is a nationwide quasi-experiment that estimates the impact of LiM in every school in the USA that has adopted it. Using these estimates, we can then explore at least three questions of interest. First, working with FranklinCovey Education's research team, we can identify specific changes to the LiM process or specific actions taken by schools and explore the impact of these changes and actions on schools' academic growth. Second, given the role of the Measurable Results Assessment (MRA) as a tool for schools to understand the impact of LiM, we can validate the MRA by examining whether schools that make gains on MRA measures also show greater academic growth than expected. Last, we can identify schools that appear to be effectively engaging in the LiM process but are not making academic gains. Qualitatively exploring these schools will help illuminate ways that FranklinCovey can design new supports for schools.
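The MRA validation step described above can be sketched as a simple statistical check: does a school's gain on the MRA predict academic growth beyond what would otherwise be expected? The following is a minimal illustration using simulated data; all variable names, effect sizes, and the simulated dataset itself are hypothetical stand-ins, since the real analysis would use SEDA growth estimates and schools' MRA scores.

```python
import numpy as np

# Simulated data standing in for the real school-level dataset
# (hypothetical effect sizes; not FranklinCovey or SEDA data).
rng = np.random.default_rng(0)
n_schools = 500

# Hypothetical standardized MRA gain for each school.
mra_gain = rng.normal(0.0, 1.0, n_schools)

# Growth a school would be expected to show absent any MRA-related effect
# (in practice, from the quasi-experimental counterfactual estimates).
expected_growth = rng.normal(0.0, 1.0, n_schools)

# Simulated observed growth: expected growth plus an assumed MRA-linked
# component (0.5 SD of growth per SD of MRA gain) plus noise.
observed_growth = expected_growth + 0.5 * mra_gain + rng.normal(0.0, 1.0, n_schools)

# Validation check: do MRA gains predict growth beyond expectation?
residual_growth = observed_growth - expected_growth
corr = np.corrcoef(mra_gain, residual_growth)[0, 1]
print(f"correlation between MRA gain and residual growth: {corr:.2f}")
```

A positive correlation in the real data would support the MRA's validity as a leading indicator; a near-zero correlation among schools with strong MRA gains would flag the third group of interest, schools engaging well with the process but not yet showing academic gains.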