Using Population Reach as a Proxy Metric for Intervention Impact to Prioritize Selection of Obesity Prevention Strategies in Los Angeles County, 2010-2012
ABSTRACT Recent federal initiatives have used estimates of population reach as a proxy metric for intervention impact, in part to inform resource allocation and programmatic decisions about competing priorities in the community. However, in spite of its utility, population reach as a singular metric of intervention impact may be insufficient for guiding multifaceted program decisions. A more comprehensive, validated approach to measure or forecast dose may complement reach estimates to inform decision makers about optimal ways to use limited resources. (Am J Public Health. Published online ahead of print May 15, 2014: e1-e6. doi:10.2105/AJPH.2014.301979).
ABSTRACT: In a well-designed experiment, random assignment of participants to treatments makes causal inference straightforward. However, if participants are not randomized (as in observational studies, quasi-experiments, or nonequivalent control-group designs), group comparisons may be biased by confounders that influence both the outcome and the alleged cause. Traditional analysis of covariance, which includes confounders as predictors in a regression model, often fails to eliminate this bias. In this article, the authors review Rubin's definition of an average causal effect (ACE) as the average difference between potential outcomes under different treatments, and distinguish between an ACE and a regression coefficient. They review 9 strategies for estimating ACEs on the basis of regression, propensity scores, and doubly robust methods, providing formulas for standard errors not given elsewhere. To illustrate the methods, the authors simulate an observational study to assess the effects of dieting on emotional distress. Drawing repeated samples from a simulated population of adolescent girls, they assess each method in terms of bias, efficiency, and interval coverage. Throughout the article, the authors offer insights and practical guidance for researchers who attempt causal inference with observational data. (Psychological Methods 01/2009; 13(4):279-313. DOI:10.1037/a0014268)
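The core idea reviewed in this abstract — that a naive group comparison is biased by a confounder, while an adjusted estimator can recover the ACE — can be sketched with simulated data. This is a minimal illustration, not the article's own simulation: the variable names, effect sizes, and the stratification (back-door adjustment) estimator are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Binary confounder (e.g., baseline distress) influencing both treatment and outcome.
z = rng.binomial(1, 0.4, n)
# Treatment (e.g., dieting) is more likely when z = 1, so assignment is nonrandom.
t = rng.binomial(1, np.where(z == 1, 0.7, 0.3))
# Outcome: the true average causal effect of t is 2.0; the confounder adds 3.0.
y = 2.0 * t + 3.0 * z + rng.normal(0.0, 1.0, n)

# Naive comparison of group means is biased upward by the confounder.
naive = y[t == 1].mean() - y[t == 0].mean()

# Stratify on z and average stratum-specific effects weighted by P(z = v)
# (equivalent to inverse-probability weighting with a saturated propensity model).
ace = sum(
    (y[(t == 1) & (z == v)].mean() - y[(t == 0) & (z == v)].mean()) * (z == v).mean()
    for v in (0, 1)
)
print(f"naive = {naive:.2f}, adjusted ACE = {ace:.2f}")
```

With these parameters the naive estimate overshoots the true effect of 2.0 by roughly 1.2, while the stratified estimate recovers it; the article's propensity-score and doubly robust estimators generalize this adjustment to continuous and high-dimensional confounders.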
ABSTRACT: We provide an overview of the Kaiser Permanente Community Health Initiative -- created in 2003 to promote obesity-prevention policy and environmental change in communities served by Kaiser Permanente -- and describe the design for evaluating the initiative. The Initiative focuses on 3 ethnically diverse northern California communities that range in size from 37,000 to 52,000 residents. The evaluation assesses impact by measuring intermediate outcomes and conducting pre- and posttracking of population-level measures of physical activity, nutrition, and overweight. (American Journal of Public Health 11/2010; 100(11):2111-3. DOI:10.2105/AJPH.2010.300001)
ABSTRACT: Despite growing support among public health researchers and practitioners for environmental approaches to obesity prevention, there is a lack of empirical evidence from intervention studies showing a favorable impact of either increased healthy food availability on healthy eating or changes in the built environment on physical activity. It is therefore critical that we carefully evaluate initiatives targeting the community environment to expand the evidence base for environmental interventions. We describe the approaches used to measure the extent and impact of environmental change in 3 community-level obesity-prevention initiatives in California. We focus on measuring changes in the community environment and assessing the impact of those changes on residents most directly exposed to the interventions. (American Journal of Public Health 11/2010; 100(11):2129-36. DOI:10.2105/AJPH.2010.300002)