Who supports peace with the FARC?
A sensitivity-based approach under imperfect identification
Chad Hazlett Francesca Parente
Abstract
What causes some civilians to support peace while others do not after violent conflict? The
2016 referendum for a peace agreement with the FARC in Colombia has propelled a growing body
of work on the determinants of support for peace, focusing principally in this case on the effects of
(i) prior exposure to violence and (ii) political affiliation with the deal’s champion. However, as
with many substantively important questions regarding real world effects, observational studies
are unable to rule out confounding, leaving causal claims difficult to defend. We demonstrate
what progress can be made in these circumstances by a sensitivity-based approach, which shifts
away from arguing whether an effect “is identified” (i.e. that confounding bias is exactly zero)
to instead evaluate and discuss precisely how strong confounding would need to be to alter the
study’s conclusions. Employing newly available sensitivity analysis tools for linear regression, we
find that the relationship between exposure to violence and support for peace can be overturned
by even very weak confounders. By contrast, the relationship between prior political affiliation
with the deal’s champion and support for peace would require powerful confounding to explain
away. We also show how sensitivity analyses can be conducted using published regression results of prior studies, yielding similar conclusions. Beyond this case, we argue that wider adoption of a
sensitivity-based approach would facilitate greater transparency, improve productive scrutiny for
both readers and reviewers, and facilitate valid investigation of important questions for which
assurances of zero-confounding remain out of reach.
Chad Hazlett (chazlett@ucla.edu, corresponding) is Assistant Professor of Statistics and Political Science at the
University of California, Los Angeles. Francesca Parente (fparente@ucla.edu) is a Post-Doctoral Fellow at the Niehaus
Center for Globalization and Governance at Princeton University. We thank Neal Beck, Matt Blackwell, Darin
Christensen, Carlos Cinelli, Jeff Gill, Jens Hainmueller, Erin Hartman, Leslie Johns, Jeff Lewis, Aila Matanock,
Matto Mildenberger, Lauren Peritz, Michael Ross, Leah Stokes, Teppei Yamamoto, and participants from political
methodology seminars at UC Santa Barbara and MIT. Thanks to Fernando Mello for the literature review citing how
infrequently sensitivity analyses are employed. All errors are our own.
1 Introduction
What influences support for peace to end protracted civil conflicts? This question has been studied in the context of numerous recent and ongoing conflicts, including Bosnia (Hadzic, Carlson and Tavits, 2017), Syria (Fabbe, Hazlett and Sınmazdemir, 2019), Sudan (Hazlett, 2019), and Israel-Palestine (Grossman, Manekin and Miodownik, 2015; Hirsch-Hoefler et al., 2016; Canetti et al., 2017). More recently, the 2016 nationwide referendum seeking to end 50 years of conflict with the Revolutionary Armed Forces of Colombia (FARC) has spurred a burst in academic activity on the question of support for peace in Colombia (e.g. Krause, 2017; Matanock and García-Sánchez, 2017; Dávalos et al., 2018; Liendo and Braithwaite, 2018; Matanock and Garbiras-Díaz, 2018; Matanock, Garbiras-Díaz and García-Sánchez, 2018; Branton et al., 2019; Gallego et al., 2019; Pechenkina and Gamboa, 2019; Tellez, 2019a,b), with scholars emphasizing two major influences on voting behavior: (1) exposure to violence and (2) political affiliation with the deal's champion, President Santos.
However, making credible inferences has remained a challenge in this area. While Matanock and Garbiras-Díaz (2018) and Matanock, Garbiras-Díaz and García-Sánchez (2018) employ survey/endorsement experiments to find evidence consistent with the political affiliation explanation, the remainder focus on observational data. Observational approaches are essential to learning about real events with real outcomes or attempting to understand what actually happened in important historical moments. This realism, however, comes at a hefty cost: the inability to credibly estimate the causal effect of violence, political affiliation, or any other factor on behavior. As researchers are not able to rule out unobserved confounding as the source of the associations found in these studies, the results may be labeled as "suggestive" or "consistent with theory." But what can we really learn from such observational studies given that unobserved confounding can falsely wash out or generate effect estimates in either direction?1
1 While we may occasionally find compelling identification opportunities that can be used with observational data from real world events, we have yet to find a convincing research design to rule out confounding in the case of the FARC referendum.

In this article we describe and illustrate the value of sensitivity-based approaches for making inferences about determinants of support for the FARC peace deal. Conventional analyses produce an estimated effect as if we believe there is zero confounding bias, though we often know this is unlikely. By contrast, a sensitivity-based approach reveals what degree of confounding
would be required to substantively alter our conclusions. Often of particular interest is the
question of whether the results are so fragile in the face of unobserved confounding as to require
heroic assumptions to defend. These analyses can also aid the discipline by providing standardized, precise, and informative measures of robustness against unobserved confounding, which improves upon the range of ad hoc procedures investigators sometimes employ in an effort to address robustness concerns. While numerous useful sensitivity analyses exist, we employ a recent
proposal by Cinelli and Hazlett (2020) because it studies the sensitivity of regression results (as
used in several of these studies and our own analyses), places no additional assumptions on the
distribution of confounders or specification of the treatment assignment (as some methods do),
provides useful summary statistics for transparent reporting, and corrects methodological issues
with several existing methods. Once investigators determine the degree of confounding required
to substantively alter their conclusions, this may suggest (i) that the initial estimate can be
overturned by far weaker confounding than can be credibly ruled out; (ii) that it would take a
confounder stronger than what is arguably plausible to alter the conclusion; or (iii) something
in between. In every case, however, sensitivity analyses are tools that reveal and communicate
what assumptions must be believed in order to sustain a causal claim.
We find that regressions can easily be constructed in which the coefficient on exposure to
violence is statistically and substantively significant—passing the usual bar for publication—but
the result is fragile to unobserved confounding. One output of the sensitivity-based approach
is a quantity called the robustness value, which provides a meaningful way to evaluate and
report a result’s fragility to unobserved confounders by offering a single number that is easily
computed and can be added to a regression table. The robustness value for the estimated
effect of exposure to violence indicates that confounding explaining just 0.4% of the residual
variance in violence and in support for peace implies that the (de-biased) effect would fall below
conventional statistical significance at the 5% level. Such confounding is very difficult to rule
out in a context where the treatment was far from randomized. By contrast, the estimated
effect of political affiliation (with President Santos, the deal’s champion) has a robustness value
of over 60%, meaning confounding explaining any less than 60% of the residual variation in
political affiliation and support for the referendum would not alter our conclusions. Further, we
see that even if unobserved confounding many times stronger than GDP per capita, for example,
were present, this would barely change the estimate. Importantly, such analyses do not aim to
permanently settle the issue of causality, but rather to raise the bar for debate by transparently
reporting what is required of proposed confounding if it is to alter our conclusion.
In what follows, we first describe existing literature on the Colombian case, including the
conclusions others have drawn and how they accounted for potential confounding (Section 2).
In Section 3, we provide the essential methodological elements of the sensitivity analysis we will
employ in Section 4. Section 5 discusses results from this case as well as broader implications of
such a sensitivity-based approach for how research is conducted, communicated, and criticized
under imperfect identification.
2 The FARC referendum: What we (do not) know
In the numerous papers published on what influenced voting in the 2016 FARC referendum,
scholars have emphasized two possible causes of support: exposure to FARC-related violence
and political affiliation with President Santos. Both explanations are consistent with a vast
theoretical and empirical literature on the effects of violence on attitudes and political behavior
(Weintraub, Vargas and Flores, 2015; Bauer et al., 2016, among others) and the power of elites in directing voting behavior or influencing opinions on policies (e.g. Zaller, 1992; Lupia, 1994; Levendusky, 2010; Nicholson, 2012; Druckman, Peterson and Slothuus, 2013; Guisinger and Saunders, 2017). We briefly review the principal empirical findings of these papers before
discussing how they deal with potential confounding concerns.
First, scholars have argued that exposure to FARC violence increased support for peace.2 For example, Tellez (2019b) finds that citizens in municipalities labelled "conflict zones" by the government were more likely to report that they supported the peace process and concessions to FARC in AmericasBarometer surveys. Using a measure of violence per capita at the municipal level, Branton et al. (2019) find that municipalities exposed to more violence were more in favor of the peace deal. Dávalos et al. (2018) find a similar result using cumulative counts of victims of FARC violence and internally displaced persons.
2We note – as did some journalists – that the relationship could also have gone in the opposite direction. For
example, according to the BBC, “In ... Casanare... 71.1% voted against the deal. It is an area where farmers
and landowners have for years been extorted by the FARC and other illegal groups" (BBC News, 2016). Again, a
theoretical literature would be consistent with this result. For example, Petersen and Daly (2010) stress the role of
anger and emotions in determining attitudes toward peace, with exposure to violence making victims less likely to
support reconciliation. However, the published results on this topic have focused on the positive relationship between
exposure to violence and support for peace.
Second, scholars have found that greater support for the peace deal is associated with greater
support for the deal’s champion, President Santos. Krause (2017), Branton et al. (2019) and
Dávalos et al. (2018) all find that the municipal vote share for President Santos in the 2014
presidential election is a strong predictor of how the municipality voted on the peace deal in
2016.3
This group of observational studies on the FARC referendum would likely be characterized
by scholars as at least “suggestive of” or “consistent with” claims that exposure to violence
and political affiliation are causes of support for the peace deal. Yet – however qualified – such
claims remain problematic. Without further arguments, confounding may influence findings
in either direction and with unknown magnitude, making them consistent not only with the
claimed directional effects but with null effects or effects in the opposite direction. Concerns
over confounding have been noted by authors of these studies, who have attempted to address
them by adding control variables.4 In none of these cases, however, has controlling for these
observables positioned authors to argue for the absence of remaining, unobserved confounders.
One troubling example of a potential confounder we cannot observe is “latent sympathy for the
FARC.” Suppose that sympathetic areas are those that (1) tend to have a particular political
leaning; (2) have populations more supportive of the deal (because it is seen as favorable to the
FARC); and (3) are areas where the FARC refrains from committing violence against civilians.
Such an arrangement would confound both the violence and political affiliation accounts in the
observed directions. Thus while covariate-adjustment approaches (be they regression, matching,
weighting, or any other) are sensible starting points, they have not allowed us to rule out
confounding as the source of the observed relationships in the studies above. Moreover, neither
the statistical significance of these results nor their consistency across multiple studies tells us
how sensitive they are in the face of potential confounders.

3 Going beyond observational work on the FARC vote as it took place, these results are corroborated by survey experiments that test the power of elite endorsements on attitudes toward peace (Matanock and García-Sánchez, 2017; Matanock and Garbiras-Díaz, 2018; Matanock, Garbiras-Díaz and García-Sánchez, 2018) as well as a 2014 survey in the field that found people's attitudes toward peace were shaped more by political preference than experience with violence (Liendo and Braithwaite, 2018).

4 For example, Branton et al., 2019 (pg. 6) write: "In addition to the primary variables of interest, the model also includes several potentially confounding municipal-level social and economic demographic factors", including the percentages of rural population, adults between the ages of 20 and 39, white voters, and female voters in each municipality, a measure of government spending per municipality, and a measure of infant mortality per municipality. Liendo and Braithwaite (2018) note the importance of accounting for certain confounding traits because "the conflict has not affected civilians equally across the lines of ethnicity, gender, age, and socioeconomic conditions" (pg. 629). Tellez (2019b) includes "a number of controls in the base models that might confound inference", which include respondent age, gender, monthly household income, education level, and level of trust in the national government, as well as municipal-level controls for support for the opposition party in 2010 (pgs. 1063–1065).
3 Sensitivity to unobserved confounding
The assumptions required of a causal identification strategy that asserts zero bias are often
indefensible, as in the case of determining support for the FARC agreement. Investigators in
these circumstances can still take identification concerns seriously by exploring the estimate they
would obtain under confounding of varying postulated degrees using sensitivity analyses. To this
end, sensitivity analyses have been proposed and employed since at least Cornfield et al. (1959),
with more recent work including Rosenbaum and Rubin (1983); Heckman et al. (1998); Robins
(1999); Frank (2000); Rosenbaum (2002); Imbens (2003); Brumback et al. (2004); Altonji, Elder
and Taber (2005); Hosman, Hansen and Holland (2010); Imai et al. (2010); Vanderweele and
Arah (2011); Blackwell (2013); Frank et al. (2013); Dorie et al. (2016); Middleton et al. (2016);
VanderWeele and Ding (2017); Oster (2017); Franks, D’Amour and Feller (2019) and Cinelli
and Hazlett (2020). Yet, they are rarely used.5 In an important exception, Chaudoin, Hays and
Hicks (2018) also advocate for sensitivity analysis in political science, showing that over a third
of the regression studies they replicate produce an (almost certainly) false estimated effect of
membership in the World Trade Organization on cancer. They also demonstrate, central to our
discussion here, that theoretical or domain knowledge is required to move from the mechanical
results of sensitivity analyses to meaningful claims of robustness.
5 Out of 164 quantitative papers published in 2017 in the top three general interest political science journals (American Political Science Review, American Journal of Political Science, and Journal of Politics), 64 papers explicitly described a causal identification strategy other than a randomized experiment, of which only 4 (6%) formally examined sensitivity to unobserved confounding.

We employ a framework for sensitivity analyses recently developed by Cinelli and Hazlett (2020), which elaborates on the familiar concept of omitted variable bias. Suppose the investigator wishes to see estimates from regressing an outcome (Y) on a treatment (D), covariates (X), and an additional covariate (Z), as in

    Y = \hat{\tau} D + X\hat{\beta} + \hat{\gamma} Z + \hat{\epsilon}_{full}    (1)

However, the variable Z is unobserved, so the "restricted" regression actually estimated is

    Y = \hat{\tau}_{res} D + X\hat{\beta}_{res} + \hat{\epsilon}_{res}.    (2)

The central question is how the observed estimate (τ̂_res) differs from the desired one, τ̂. We thus define \widehat{bias} := \hat{\tau}_{res} - \hat{\tau}, the difference between the estimate actually obtained and what would have been obtained in the same sample had the missing covariate Z been included. The use of \hat{\cdot} here reminds us that this is the difference between two sample quantities, and not the difference between a sample quantity and an expectation.
As derived in Cinelli and Hazlett (2020), the bias due to omission of Z can be written as

    |\widehat{bias}| = se(\hat{\tau}_{res}) \sqrt{ \frac{ R^2_{Y \sim Z | X,D} \, R^2_{D \sim Z | X} }{ 1 - R^2_{D \sim Z | X} } } \sqrt{df},    (3)

where R^2_{Y~Z|X,D} is the share of variance in the outcome explained by Z, after accounting for both X and D. R^2_{D~Z|X} is the variance in the treatment status explained by confounding, after accounting for the observed covariates. Further, the standard error of the coefficient one would have estimated had Z been included is given by

    \widehat{se}(\hat{\tau}) = se(\hat{\tau}_{res}) \sqrt{ \frac{ 1 - R^2_{Y \sim Z | X,D} }{ 1 - R^2_{D \sim Z | X} } } \sqrt{ \frac{df}{df - 1} }.    (4)

Thus, the two parameters R^2_{Y~Z|X,D} and R^2_{D~Z|X} jointly characterize all that we need to know about confounding in order to determine the point estimate, standard error, t-statistic, or p-value we would obtain had such confounding been present. To make claims on these two terms is to invoke knowledge about the treatment assignment and outcome-determining processes, as we demonstrate in our application.6 As detailed in Cinelli and Hazlett (2020), the confounding described by these parameters may be the combined result of many confounding variables. We emphasize that sensitivity tools can only speak to how the coefficient on D changes due to the inclusion of some hypothesized Z. Whether the investigator should be interested in the value of τ̂, i.e. the regression that includes Z, is up to the investigator and to the hypothesized variable Z.7

6 In principle, one may consider biases that either inflate or reduce the estimated effect relative to the target estimate. Here we are concerned that a research conclusion may falsely support a proposed effect, and are thus interested in protecting against confounding that could falsely inflate the magnitude of the estimated effect in the observed direction.
One way to use these adjustments is to employ contour plots explicitly showing how estimated coefficients or t-statistics would look under varying levels of postulated confounding. We also call upon two summary statistics to quickly characterize the fragility of a result in the face of unobserved confounding. The first is the partial R^2 of the treatment with the outcome, having accounted for control variables, R^2_{Y~D|X}. Beyond quantifying the explanatory power of the treatment over the outcome, this value has a sensitivity interpretation as an "extreme scenario" analysis: if we assume that confounders explain 100% of the residual variance of the outcome, the R^2_{Y~D|X} tells us how much of the residual variance in the treatment such confounders would need to explain to bring the estimated effect down to zero. The R^2_{Y~D|X} can also be computed for already-published OLS results, because it requires only the t-statistic for the treatment coefficient and the degrees of freedom from a regression: R^2_{Y~D|X} = t_D^2 / (t_D^2 + dof).

The second summary quantity is the robustness value (RV). Confounding that explains at least RV% of the residual variance in the treatment and in the outcome would reduce the implied estimate to zero. That is, if both R^2_{Y~Z|X,D} and R^2_{D~Z|X} exceed the RV, then the effect would be reduced to zero or beyond. If both R^2_{Y~Z|X,D} and R^2_{D~Z|X} are less than the RV, then we know confounding is not sufficient to eliminate the effect. This makes the RV a single-dimensional summary of overall sensitivity. This quantity can also be computed from standard regression statistics.8 Similarly, we may wish to summarize the amount of confounding such that the 1 − α confidence interval would no longer exclude a particular null value. For example, if confounding explains RV_{α=0.05}% of both the treatment and outcome, it reduces the adjusted effect to the point where the 95% confidence interval would just include zero.9
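To make these summaries concrete, the short R sketch below implements the partial R^2 formula above and the RV formula given in footnote 8 directly from a t-statistic and residual degrees of freedom. This is our own illustrative code rather than part of the original analysis, and the example inputs (t = 2.1 on 1,000 degrees of freedom) are hypothetical.

```r
# Compute R^2_{Y~D|X} and the robustness value (RV) from a treatment
# t-statistic (t_D) and residual degrees of freedom (dof), following the
# formulas in the text and in footnote 8.
partial_r2_from_t <- function(t_D, dof) {
  t_D^2 / (t_D^2 + dof)
}

robustness_value_from_t <- function(t_D, dof) {
  f_D <- abs(t_D) / sqrt(dof)              # Cohen's partial f for the treatment
  0.5 * (sqrt(f_D^4 + 4 * f_D^2) - f_D^2)  # RV, on the partial R^2 scale
}

# Hypothetical example: a coefficient with t = 2.1 and 1,000 residual df
partial_r2_from_t(2.1, 1000)        # ~0.0044, i.e. about 0.44%
robustness_value_from_t(2.1, 1000)  # ~0.064,  i.e. about 6.4%
```

In this hypothetical case, confounding would have to explain roughly 6.4% of the residual variance of both the treatment and the outcome to bring the estimate to zero.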
Finally, benchmarking tools allow us to find bounds on the amount of confounding bias that is possible, based on assumptions about how unobserved confounding compares to one or more observed covariates. For example, consider the assumption that whatever confounding may exist, it could not explain more of the residual variation in treatment (e.g. political affiliation) and in the outcome (support for the peace deal) than does GDP per capita. More generally, we could argue that confounding explains no more than k times as much of the treatment and outcome residual variances as does GDP per capita. Such assumptions mathematically imply bounds on the degree of confounding that can remain, and thus on bias. This aids, first, in understanding the magnitude of confounding required to change an answer by restating it in terms of observed covariates, for which we have stronger intuitions regarding the strength of relationship with treatment and outcome. Further, if users are able to employ their domain knowledge and information about treatment assignment to construct a defensible assumption of this type, and the result "holds," this can be compelling evidence for the credibility of the research conclusion. At the other extreme, if one makes an assumption that risks being optimistic, yet the resulting bound still does not "protect" the estimate against confounding that would alter the main conclusions, a study's result will be difficult to defend with confidence.

7 The conditions by which including Z identifies a causal quantity are well established, albeit with different terminology in different traditions. In the language of structural causal models (SCMs) or directed acyclic graphs (DAGs), we commonly require that {X, Z} blocks all backdoor paths between D and Y, without opening new paths (by conditioning on colliders) or including post-treatment variables (Pearl, 2009). In the language of potential outcomes, we require that the potential outcomes at all levels of treatment are independent of the realized treatment assignment D_i conditionally on {X_i, Z_i}, i.e. Y_i(d) ⊥ D_i | {X_i, Z_i}, often called (conditional) ignorability or selection on observables.

8 Let f_D be Cohen's partial f for the treatment variable, which can be obtained as f_D = t_D / \sqrt{dof}. Then RV = \frac{1}{2}\left(\sqrt{f_D^4 + 4 f_D^2} - f_D^2\right).

9 More generally, the RV_{q,α} gives the amount of confounding required such that an effect estimate reduced by the fraction q (e.g. 50%) would fall just within the confidence interval.
4 Analyses
4.1 Data
To test proposed explanations of support for the peace deal, we combine municipality-level
support for the FARC referendum with measures of exposure to violence and political affiliation.
We measure FARC-related violence using incidents recorded in the Global Terrorism Database, which has widespread coverage of events worldwide beginning in 1970. We chose this source because the database attributes attacks to particular groups, which is especially important in the Colombian context, in which multiple guerrilla groups were operating at the same time. We counted all FARC-related fatalities in each municipality per year, from any kind of attack, including, among others, murders, forced disappearances, and landmines. The exposure to violence measure is the cumulative number of FARC-related fatalities, grouped into five-year periods.
For the political affiliation hypothesis, we follow other studies and measure political affiliation
as support for President Santos in the 2014 election, using results from the second round of the
presidential elections. In some specifications, we use the 2010 vote share for President Santos
(second round), since his position on negotiating with the FARC changed in 2012. All of our
election results, including the results for the 2016 FARC referendum, are from the Registraduría Nacional de Colombia.
4.2 Spatial distribution of key variables
We examine our data visually before turning to quantitative analyses. Figure 1 illustrates the distribution of exposure to violence (Panel A), votes for President Santos in the 2014 election (Panel B), and votes in favor of the referendum for peace (Panel C) by municipality.10 Several municipalities in the southwestern part of the country (in the departments of Nariño and Cauca) were high on all three measures. In contrast, departments in the Andes Mountains are low on all three measures. Taken as a whole, exposure to violence (Panel A) seems to be somewhat similar in distribution to support for the FARC referendum (Panel C). The relationship between support for Santos (Panel B) and for the FARC referendum (Panel C) appears to be stronger still. We note that this strong visual relationship reflects that a large portion of the variance in the outcome is explained by support for Santos, which in the models below appears as an R^2_{Y~D|X} of nearly 60%. Recalling that this quantity is itself a useful sensitivity diagnostic, the strength of this relationship, even as seen graphically here, presages the robustness of the relationship between support for Santos and for the referendum.

10 Maps were generated using the 'colmaps' package (Moreno, 2015) for creating maps of Colombia in the R statistical language (R Core Team, 2019).

Figure 1: Spatial Distribution of Key Variables. (A) Exposure to violence; (B) Santos vote; (C) FARC referendum.
Note: Maps visualizing the municipal-level distribution of: (A) FARC-caused deaths, (B) vote share for Santos in the 2014 election, and (C) vote share in support of the FARC peace deal in the 2016 referendum.

4.3 Is the violence hypothesis defensible?

In addressing exposure to violence as an explanation for support, we consider two models. The first is a naive, direct comparison based on the simple model:

    Model 1:  Y_i = \beta_0 + \alpha(\text{Deaths}_{i,2011-2015}) + \epsilon_i    (5)

where Y_i is the proportion voting "Yes" in municipality i, and Deaths_{i,2011-2015} is the number of deaths in municipality i committed by the FARC between 2011 and 2015. The coefficient α describes how the expected support for peace differs when we observe one additional death. The second model takes the traditional approach of accounting for potentially worrying observed confounders by including them in the model as controls,
    Model 2:  Y_i = \beta_0 + \alpha(\text{Deaths}_{i,2011-2015}) + \beta_1(\text{Deaths}_{i,2006-2010}) + \beta_2(\text{Deaths}_{i,2001-2005}) + \beta_3(\text{Population}_i) + \beta_4(\text{GDPpc}_i) + \beta_5(\text{Santos 2010}_i) + \epsilon_i    (6)
where Deaths_{i,2006-2010} and Deaths_{i,2001-2005} are the number of deaths in municipality i in the corresponding time periods, Population_i is the total number of eligible voters, GDPpc_i is GDP per capita, and Santos 2010_i is the vote share for President Santos in the 2010 election. We include the two lagged measures of violence, Deaths_{i,2006-2010} and Deaths_{i,2001-2005}, as a means of accounting for areas that, for time-invariant reasons, routinely have higher or lower levels of violence. Indeed, we find that the "effect" of violence appears to fade over time, with only the most recent five years having a significant effect.11 Finally, we note that there are various additional ways of formulating these models – adding covariates or removing the lagged violence variables, for example – that can reduce the estimated effect of violence well below significance. The poor robustness of the model according to our analyses (below) makes it unsurprising that we can so easily "ruin the result" by including different covariates. The sensitivity analysis would serve as a useful warning of the model's fragility to alternative covariates, had we not been in a position to include and test them ourselves, as is the case for most readers of most papers. We proceed with the models favorable to the violence hypothesis for illustrative purposes. If even these models prove unable to withstand small degrees of confounding, alternative weaker models would generate even less persuasive results.
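For concreteness, the following R sketch shows how Models 1 and 2 might be fit. The data frame muni and its column names are hypothetical stand-ins for the municipality-level data described in Section 4.1; this is not the replication code behind the results reported below.

```r
# Hypothetical municipality-level data frame `muni`, one row per municipality,
# with the outcome (share voting "Yes") and the covariates of Models 1 and 2.
model1 <- lm(yes_share ~ deaths_2011_2015, data = muni)

model2 <- lm(yes_share ~ deaths_2011_2015 + deaths_2006_2010 + deaths_2001_2005 +
               population + gdp_pc + santos_2010, data = muni)

summary(model2)  # coefficient, SE, and t-statistic on deaths_2011_2015
```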
Table 1 presents results for these regressions together with the sensitivity quantities.12 Conventionally speaking, results for Model 1 suggest a marginally significant relationship between exposure to violence and support for peace: the coefficient of 0.20 (p = 0.06) on violence in 2011-2015 suggests that each additional observed death increases the expected support for the FARC peace deal by 0.20 percentage points. Through a conventional lens, Model 2 appears to find even stronger evidence for an effect of violence, with a t-statistic reaching 2.11 (p = 0.035) and a substantively large effect.

11 In both models, for simplicity, we focus on violence in the 2011-2015 period as the treatment, because more recent violence more plausibly impacts attitudes. We note that GDP per capita is measured in 2013, which is post-treatment relative to violence occurring in 2011 and 2012; however, our assumption is that the effect of additional deaths at this level on GDP per capita is too small to be problematic.

12 All sensitivity quantities, as well as contour plots, were generated using the sensemakr package (Cinelli and Hazlett, 2019) for R. These and other analyses can also be performed using an online Shiny app, available at https://carloscinelli.shinyapps.io/robustness_value/.
Table 1: Augmented regression results for violence

Outcome: Vote for peace deal
Treatment               Est.   SE     t-stat   R^2_{Y~D|X}   RV     RV_{α=0.05}   df
1. Deaths 2011-2015     0.20   0.11   1.89     0.32%         5.5%   0.0%          1121
2. Deaths 2011-2015     0.61   0.29   2.11     0.40%         6.1%   0.4%          1115

Note: Regression table for estimated effects of exposure to violence, augmented by simple sensitivity statistics (R^2_{Y~D|X}, RV, and RV_{α=0.05}). Descriptions of these quantities are given in Section 3.
However, the sensitivity quantities reported alongside regression coefficients and standard errors in Table 1 reveal that these estimates are extremely fragile in the face of even small opportunities for unobserved confounding. In Model 1, the observed relationship between deaths and support for peace was already only marginally significant at p = 0.06. The robustness value (RV) tells us that a confounder explaining just 5.5% of the residual variance in violence and in support for peace would be enough to eliminate this effect entirely (i.e. if such a confounder existed, the observed result would be due entirely to bias).13 The RV_{α=0.05} tells us what strength of confounding would be required to reduce the estimated effect to the boundary of statistical significance at the α = 0.05 level. Here, no confounder is required to do this since the p-value is already above 0.05. Finally, the value of R^2_{Y~D|X} is equivalent to an "extreme scenario" analysis: if an unobserved confounder explains 100% of the remaining outcome variation, such a confounder would have to explain only 0.32% of the residual variation in the violence treatment in order to reduce the estimated effect to zero.14

In Model 2, having taken the commonly employed approach of adding several control covariates, the effect estimate is now slightly larger (0.61) and more statistically significant in conventional terms, with a t-statistic of 2.11. Still, a confounder explaining only 6.1% of the residual variation in both violence and support for peace would eliminate the effect; a confounder explaining only 0.4% of both would reduce the effect to the boundary of insignificance at the α = 0.05 level. We conclude that even for models that are favorable to the violence hypothesis, very small confounders would alter our conclusions regarding the role of violence in support for peace. We believe readers would not have trouble coming up with confounders of this magnitude or larger – such as sympathy for the FARC – and it is certainly hard to argue why no such confounder should exist.

13 Confounders explaining more than 5.5% of either exposure to violence or of support for peace, but less of the other, can be considered using contour plots such as those below.

14 Deciding whether a given partial R^2 value is "large" or "small" is the subject of additional analyses below, given context-specific knowledge. However, to be clear on the meaning of these quantities, it is useful to recall that these partial R^2 values correspond literally to squared correlations. Thus, taking the square root of any R^2 allows interpretation on the usual correlation scale. That is, if a confounder is said to explain 0.32% of the residual variance in D conditional on X, for example, it means that cor(D^{⊥X}, Z^{⊥X}) = √0.0032 ≈ 0.057. In a context like ours where there is ample scope for unknown variables to influence treatment and outcome, this is a weak correlation indeed. To aid in providing intuition for correlations of this size: one would observe a finite-sample correlation of this size or larger by chance alone when drawing 200 observations from two standard normal variables that are actually independent.
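Footnote 12 notes that these quantities were produced with the sensemakr package for R. A minimal sketch of how such an analysis might be run for Model 2 appears below; it reuses the hypothetical model2 object and column names from the earlier sketch, so the exact call and names are assumptions rather than the original replication code.

```r
library(sensemakr)

# Sensitivity of the Model 2 coefficient on recent violence, benchmarking
# hypothesized confounding against the observed 2010 Santos vote share
# (the "1x santos10" bound discussed below).
violence_sens <- sensemakr(model = model2,
                           treatment = "deaths_2011_2015",
                           benchmark_covariates = "santos_2010",
                           kd = 1)

summary(violence_sens)  # partial R^2, RV, and RV_{alpha = 0.05} for the treatment
plot(violence_sens)     # contour plot of adjusted estimates, as in Figure 2
```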
To extend this analysis, we can visualize the bias as we separately vary the strength of the
confounding in terms of the treatment and outcome associations (Figure 2). On this plot, we
also demonstrate the ability to bound confounding, subject to a specified assumption. Here, let
us assume that political affiliation is “worse” than confounding, in the sense that it explains
a greater share of both the outcome and treatment than can true confounding. We proxy for
this using vote share for Santos in 2010, rather than 2014, to ensure it is pre-treatment with
respect to 2011–2015 violence. Such an assumption may be reasonable with respect to the
outcome variance explained, but it is hard to argue that political affiliation explains more of
the remaining variation in exposure to violence than all other confounders could. Nevertheless,
even under this relatively optimistic assumption, the results show that it permits a degree of
confounding that could still alter our conclusion. The point in Figure 2 marked "1x santos10" indicates what the adjusted estimate would be had confounding of this strength been present. The adjusted estimate, at -0.24, shows that this level of confounding would imply an estimate with the opposite sign from the original estimate (0.61).
Figure 2: Contour plot showing sensitivity to hypothesized confounding. [Axes: hypothetical partial R^2 of unobserved confounder(s) with the treatment (horizontal) and with the outcome (vertical); marked points: Unadjusted (0.61), 1x santos10 (-0.244).]
Note: Contours showing adjusted regression coefficient on recent violence, at levels of hypothesized confounders parameterized by the strength of relationship to the treatment (FARC-caused deaths in the municipality) and the outcome (municipal vote for the FARC peace deal). The bound ("1x santos10") shows the worst confounding that can exist, were we to assume that confounding is "no worse" than the vote share for Santos in 2010 in terms of the residual variance of the treatment and outcome they explain. A confounder this bad would easily change the sign of the estimate.

Finally, we argue that an important role for sensitivity analysis is to aid readers and reviewers in assessing sensitivity even when authors may not have provided these analyses. To demonstrate this and compare our results to existing estimates, we consider the robustness of the results reported in Tellez (2019b), which estimates the effect of being in a (government-classified) "conflict zone" on reported attitudes toward components of the peace deal in AmericasBarometer surveys.15 Using the formulas given above, the RV and R^2_{Y~D|X} can both be easily reproduced from any regression table using a t-statistic and degrees of freedom. We use the models reported in Online Appendix Table A5, which are the main regression models. We estimate the degrees of freedom to be approximately 4,200, as there are "roughly 4,200 observations" (Tellez, 2019b, pg. 13). Across the three models shown there, the most favorable from a robustness standpoint was Model (2) in Table A5 of that paper, which had an effect estimate of 0.22 and a standard error of 0.07, for a t-statistic of 3.14. This translates to an RV of 4.7%, indicating that a confounder explaining 4.7% of the residual variation in who is assigned to a conflict zone and in attitudes toward peace would be sufficient to eliminate the estimated effect. The effect would lose statistical significance at conventional levels with a confounder explaining just RV_{α=0.05} = 1.8% of these two residual variances. Finally, a confounder explaining all the remaining variation in the outcome need only explain 0.2% of who lives in a conflict zone in order to explain away the effect. Even a confounder explaining only 25% of the residual variation in the outcome would eliminate the estimated effect if it explains just 1% of residual variation in who is in a conflict zone. Thus, we conclude that existing estimates for the effect of violence on votes for the peace deal are similarly fragile to our own.

15 Conflict zones are determined by the Colombian government as part of the Espada de Honor campaign to defeat FARC and other criminal groups.
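As a check on the arithmetic, the Tellez (2019b) quantities just discussed can be reproduced from the published coefficient and standard error, together with the approximate degrees of freedom quoted in the text, using the formulas from Section 3. The code below is our own illustration, not taken from that paper.

```r
# Sensitivity statistics for Tellez (2019b), Table A5, Model (2), computed
# from the published estimate (0.22), standard error (0.07), and roughly
# 4,200 residual degrees of freedom.
t_D <- 0.22 / 0.07            # t-statistic, about 3.14
dof <- 4200                   # approximate residual degrees of freedom

f_D <- abs(t_D) / sqrt(dof)
rv  <- 0.5 * (sqrt(f_D^4 + 4 * f_D^2) - f_D^2)  # robustness value
r2  <- t_D^2 / (t_D^2 + dof)                    # partial R^2 of treatment with outcome

round(100 * rv, 1)  # ~4.7 (percent)
round(100 * r2, 1)  # ~0.2 (percent)
```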
4.4 Is the political affiliation hypothesis defensible?
Next we examine the political affiliation hypothesis. We use the term “political affiliation” for
this treatment but note that it is a shorthand: The key feature is not an individual’s support for
a given leader, but whether the leader whom an individual supports publicly endorses the peace
deal. In the present case, this simplifies to the question of whether a person supports President
Santos. There are two ways to imagine the counterfactual outcome defining the treatment effect
of interest. First, we can imagine how an individual might have voted in the referendum had
they been loyal to a different leader but were otherwise unchanged. Though an individual’s
loyalties are associated with a variety of background factors, there is certainly enough room for
variation that one can imagine this counterfactual. Alternatively, we can imagine an individual’s
vote in the FARC referendum, had their leader taken the opposite position. This is an easy
possibility to entertain as well, since Santos was in fact against a peace deal with the FARC
until 2012.
We estimate the coefficients in the model,

    Model 3:  Y_i = \beta_0 + \beta_1(\text{Santos 2014}_i) + \beta_2(\text{Deaths}_{i,2010-2013}) + \beta_3(\text{Elevation}_i) + \beta_4(\text{GDPpc}_i) + \beta_5(\text{Population}_i) + \epsilon_i,    (7)

where Y_i is the proportion voting "Yes" in municipality i, Santos 2014_i is the municipality vote share for Santos in the 2014 presidential election, and Deaths_{i,2010-2013} is the total number of deaths due to FARC violence between 2010 and 2013 in municipality i.16 In this model we also control for the total number of eligible voters (Population_i), as well as the mean elevation above sea level (Elevation_i) and GDP per capita of the municipality (GDPpc_i). These three control variables are chosen because they are unlikely to be affected by the treatment and because they are troubling potential confounders inasmuch as they could arguably be related to both political preferences and support for the FARC peace deal.
We show regression results augmented with sensitivity statistics in Table 2. The estimated effect of Santos 2014 vote share on support for peace (0.67) is positive and statistically significant. Vote share for Santos in 2014 explains 59% (R^2_{Y~D|X}) of the residual variation in support for peace, meaning that even confounding that explains 100% of the residual variation in the outcome would need to explain 59% of the residual variation in vote share for Santos in order to eliminate the estimated effect. Confounding that explains equal portions of vote share for Santos and support for peace would have to explain 68% of both (RV) to eliminate the effect. Recall that this implies confounding whose correlation with the treatment and outcome exceeds √68% ≈ 82% after accounting for the other covariates, an extremely high correlation by any standard. Finally, for the 95% confidence interval to just include zero, confounding would have to explain 66% of residual variance in both treatment and outcome (RV_{α=0.05}).

16 Note that we use deaths between 2010–2013 as the pre-treatment covariate here, as the presidential election occurred in 2014.
Table 2: Augmented regression results for political affiliation

Outcome: Vote for peace deal
Treatment                   Est.   SE     t-stat   R^2_{Y~D|X}   RV    RV_{α=0.05}   df
3. Santos 2014 vote share   0.67   0.02   37.5     59%           68%   66%           983
Here again, where other studies employed OLS we can determine how sensitive their results
would be as well. In Krause (2017), the coefficient for Santos 2014 vote share in a similar model
is 0.62, close to our estimate of 0.67. The t-statistic of 45 together with 1,069 residual degrees
of freedom would produce an RV of 72%, also similar to our own estimate of 68%.17
Recall that large values such as these tell us only that large confounders would be required to alter our conclusions — they say nothing of whether such confounders exist. One important potential confounder is attitudes toward the FARC. In particular, we could imagine that Colombians decided how to vote in the referendum based on how they felt about FARC and that attitudes toward FARC influenced their presidential vote in 2014. One particular version of this confounder comes through campaigning on the FARC issue. Santos announced negotiations with FARC in 2012, and this was a salient issue in the 2014 election. Thus, attitudes toward the FARC deal could have influenced vote choice in 2014 for at least some Colombians. Ideally, we could control for such a confounder with some (pre-treatment) measure of FARC attitudes, but we have not been able to find such a variable.18 We note, however, that we can replace the 2014 vote share as our treatment with the 2010 vote share, in which peace with the FARC was not a particularly salient issue. Doing so, we still find that the relationship between political affiliation (as measured in 2010) and support for the 2016 peace deal would take large confounding to overturn (R^2_{Y~D|X} = 23%, RV = 42%).
17These values were taken from Model (2) of Table III, pg. 32. Note that this RV is an approximation because
Krause (2017) reports Huber-White standard errors, whereas the formula for computing the RV from regression
results calls for the conventional (spherical) standard error. If the conventional standard error were 20% larger than
the Huber-White standard errors, for example, the RV would instead be 66%.
18While AmericasBarometer does ask respondents about their attitudes toward FARC and has for years, those data
cover less than 6% of the municipalities in our data.
The inability to “find” every confounder of interest is, of course, common in these studies.
Yet, further progress can be made using the bounding approach described above, transforming
assumptions or statements about how confounding compares to observables into implied bounds
on that confounding. One potential confounder is GDP per capita, which we expect might affect
both treatment (political affiliation) and the outcome (support for the deal). Figure 3 shows the
contour plot, to which we add a bound based on an assumption that “confounding is no more
than three times ‘worse’ than GDP per capita,” in terms of the residual variation the confounder
would need to explain in whom the voter supports and in support for peace (“3x gdppc”).19 The
dashed line indicates the bound at which the result would be eliminated. A confounder three
times as strong as that of GDP per capita would hardly reduce the estimate – from 0.67 to 0.64.
Similarly, Elevation can also be used to formulate such a bounding assumption, as this relates
to a wide variety of factors that may in turn relate to both political affiliation and preferences
for peace with the FARC. Let us therefore assume that confounding is no more than three times
“worse” than Elevation (“3x elev”). Again, the worst confounding that is possible under such
an assumption would still hardly change the result, from 0.67 to 0.64. We emphasize that these
bounds are linked to assumptions. In this case, while it is hard to imagine confounders more
than three times “worse” than GDP per capita, we do not have sufficient knowledge (about
what influences treatment or outcome) to ensure that no such confounder exists. We therefore
do not regard these bounds as proof that our results are robust to confounding. Rather, they
are “if-then” statements describing how strong confounding would have to be relative to these
covariates in order to be problematic.
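A sketch of how bounds like those in Figure 3 might be produced with sensemakr appears below. The model object, data frame, and column names (model3, muni, santos_2014, gdp_pc, elevation) are hypothetical, carried over from the earlier sketches, and the call itself is an illustration rather than the original replication code.

```r
library(sensemakr)

# Hypothetical fit of Model 3 (equation 7) on the municipality data frame.
model3 <- lm(yes_share ~ santos_2014 + deaths_2010_2013 + elevation +
               gdp_pc + population, data = muni)

# Bound confounding at up to three times the strength of GDP per capita and
# elevation, as in the "3x gdppc" and "3x elev" points of Figure 3.
affiliation_sens <- sensemakr(model = model3,
                              treatment = "santos_2014",
                              benchmark_covariates = c("gdp_pc", "elevation"),
                              kd = 3)

summary(affiliation_sens)  # reports the bounds implied by each benchmark
plot(affiliation_sens)     # contour plot with the benchmark bounds marked
```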
Figure 3: Effect of unobserved confounding on estimate for political affiliation. [Axes: hypothetical partial R^2 of unobserved confounder(s) with the treatment (horizontal) and with the outcome (vertical); marked points: Unadjusted (0.67), 3x gdppc (0.637), 3x elev (0.638).]
Note: Contours showing adjusted regression coefficient on political affiliation, at levels of hypothesized confounders parameterized by the strength of relationship to the treatment (municipal vote for Santos in 2014) and the outcome (municipal vote for the FARC peace deal). The two bounds ("3x gdppc" and "3x elev") show the worst confounding that can exist, were we to assume that confounding is "no more than 3-times 'worse'" than either GDP per capita or elevation. The dashed line indicates where the result would be eliminated.

19 Note that we may wish to consider a k value for GDP per capita even higher than 3 to determine how far this robustness extends. As it turns out, the maximum possible k value on this variable is 3.88. Such a proposed confounder would explain all of the residual variance of either the treatment or the outcome, and so a proposed confounder higher than this cannot exist. At k = 3.88, the point estimate still barely changes, to 0.63.

Finally, we may be willing to make or probe assumptions — even pessimistic ones — about how much of the unexplained variance in the outcome could possibly be linked to confounding. We already know from the R^2_{Y~D|X} value in Table 2 that a confounder explaining 100% of the residual variance of the outcome would need to explain 59% of the residual variance in political affiliation in order to overturn the result. Figure 4 provides an "extreme scenario" analysis that extends this reasoning. Each line shows what the adjusted effect estimate would be if confounding explains a given proportion of the residual outcome variance (100%, 50%, or 30%) while explaining the proportion of the residual treatment variance indicated by the horizontal axis. We see that less conservative confounding that explains 50% or 30% of the residual outcome variation would have to explain over 70% and 80% of the residual variation in political affiliation, respectively, to reduce the estimate to zero.

Figure 4: Extreme scenario analysis. [Horizontal axis: hypothetical partial R^2 of unobserved confounder(s) with the treatment; vertical axis: adjusted effect estimate; lines: confounding explaining 100%, 50%, or 30% of the residual variance in the outcome.]
Note: Plot showing extreme scenarios in which hypothesized confounding explains 30%, 50%, or 100% of the residual variance in the outcome (municipal vote for the FARC peace deal), while the horizontal axis indicates the hypothesized proportion of residual variance explained in the treatment (municipal vote for Santos in 2014). A confounder explaining 100% of the residual variation in the outcome would need to explain 59% of the residual variance in the treatment (equivalent to R^2_{Y~D|X}), while a confounder explaining 50% of the residual variation in the outcome would need to explain about 75% of residual variation in the treatment. A confounder explaining 30% of the residual variation in the outcome would need to explain over 80% of residual variation in the treatment to overturn the result.
5 Discussion
What can we conclude in the case of the FARC referendum with the aid of sensitivity analyses? Conventional regression analyses (can) find results consistent with media accounts and prior academic work: both exposure to violence and political affiliation show "statistically and substantively significant" relationships with support for the referendum. Sensitivity results reveal a great deal more. Augmenting the regression results with the RV immediately shows that the estimated effect of exposure to violence on support for the peace deal is extremely fragile in the face of potential confounding. Even under the most favorable model shown here, confounding that explains just 6.1% of the residual variance of exposure to violence and support for peace would eliminate the result entirely, and a confounder explaining just 0.4% would reduce it below conventional levels of statistical significance. Estimates from prior work (Tellez, 2019b) are found to be similarly fragile. Sensitivity analysis also provides useful insights into the estimated effect of political affiliation. In particular, a confounder explaining 100% of the residual variation in support for peace would have to explain 59% of the residual variation in political affiliation to alter our conclusions. The bounding approach then tells us that, for example, if confounders explain no more than three times the residual variance (in both political affiliation and in the referendum vote) explained by municipal GDP per capita, the remaining confounding is bounded to a level that barely changes the point estimate.
Sensitivity analyses have thus helped to determine, first, that conclusions regarding the role
of exposure to violence in shaping the referendum vote are currently too fragile to hold with high
confidence. Second, regarding the role of political affiliation in shaping referendum support,
we must remember that the high degree of robustness does not rule out the possibility that
confounding has altered our conclusion. However, the lesson is that not “just any confounder”
would be sufficient to meaningfully change our conclusion. A colleague or reviewer suggesting
a particular confounder is obligated to argue that such a confounder could plausibly explain
the amount of variation in treatment and outcome required by the sensitivity analysis to alter
the results. In both cases, sensitivity analyses leave the door open for progress. For example,
we can build on these results if investigators can find confounders that might be sufficient (in
the political affiliation case) and that can be measured or included; if convincing variables with
which to argue for bounds are found; or if new strategies or arguments that effectively limit
the degree of possible confounding emerge. In our view, this approach enables a much more
productive and precise debate as compared to challenging only whether a finding is causally
identified or not in binary terms.
Different tools will be best suited to different circumstances and user preferences. Here we
chose the omitted variable bias approach and tools developed in Cinelli and Hazlett (2020)
for several reasons. First, we are interested in the sensitivity of estimates made using linear
regression. Other important sensitivity methods are specialized to non-regression approaches,
such as matching (Rosenbaum,2002), which would be difficult here given that we examine two
non-binary treatments. Sensitivity analyses for more general or non-parametric estimation pro-
cedures are possible, at the cost of requiring the user to make more elaborate characterizations
of proposed confounding. For example, the “confounding function” approach (Heckman et al.,
1998;Robins,1999;Brumback et al.,2004;Blackwell,2013) generalizes across estimators, in
cases with binary treatments. It requires the user to describe how the treated and untreated
units would vary in their expectations of both the treated and untreated potential outcome,
conditionally on the covariates. If investigators are willing to examine a linear model, as we and
many others do, the bias can instead be determined solely by the two parameters (or isomor-
phic variations thereof) deriving from omitted variable bias.20 Second, the availability of the
20Further, among approaches to linear outcome models, Imbens,2003,Carnegie, Harada and Hill,2016 and Dorie
et al.,2016 all require further assumptions beyond these two required ones, asking users to specify the distribution
of the confounder as well as specifying the functional form of the treatment assignment mechanism. Relatedly, the
approach taken by Altonji, Elder and Taber (2005) and Oster (2017) employs a sensitivity parameter intended to reflect
the relative predictive power of observables and unobservables in the selection (into treatment) process. However, this
20
R2
YD|Xand particularly the RV as an interpretable, easy to convey, easy to compute sensitiv-
ity measures proves useful here.21 Finally, this method for bounding/benchmarking corrects for
issues in informal “benchmarking” practices employed in several other approaches.22
We emphasize that formal sensitivity analyses, regardless of the tools employed, provide information that neither conventional significance testing nor informal guidance based on t-statistics or p-values can offer. Here, the sample sizes were similar across models, but when sample size or degrees of freedom vary, t-statistics and p-values do not reflect how strong confounding must be to alter our conclusions. For example, a coefficient with a t-statistic of 10 and only 200 degrees of freedom has an RV of 50%, meaning that confounding would have to explain 50% of the residual variation in both the treatment and the outcome to explain away the estimate. With one million degrees of freedom, however, the same t-statistic corresponds to an RV below 1%. Consequently, the RV and other sensitivity analyses provide an important complement to debates over appropriate p-value thresholds (such as those prompted by the American Statistical Association's statement on p-values; Wasserstein, Lazar et al., 2016), since p-values say nothing about the robustness of a result to unobserved confounding.23
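As a brief illustration of this point, the RV can be computed directly from a t-statistic and its degrees of freedom using the formula in Cinelli and Hazlett (2020); the sketch below (with an illustrative helper function of our own naming) reproduces the two numbers just quoted.

```r
# Robustness value (RV, for reducing the estimate to exactly zero) as a function
# of the t-statistic and degrees of freedom, following Cinelli and Hazlett (2020).
rv_from_t <- function(t, dof) {
  f2 <- t^2 / dof                    # partial Cohen's f^2 of treatment with outcome
  0.5 * (sqrt(f2^2 + 4 * f2) - f2)   # share of residual variance that confounding must
                                     # explain, in both treatment and outcome
}

rv_from_t(t = 10, dof = 200)   # 0.50: only strong confounding could explain this away
rv_from_t(t = 10, dof = 1e6)   # ~0.01: even weak confounding would suffice
```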
Finally, the specific applied example and toolkit employed here illustrate how the sensitivity-
based framework, if more widely adopted, would improve the way observational research is
conducted, communicated, and evaluated. First, as demonstrated, these tools provide a rigorous
and transparent way to investigate and improve upon our answers to important questions for
which we currently lack a feasible identification strategy that ensures zero confounding bias.
Second, this approach suggests improvements to, and standards for, empirical research seeking to make causal claims using regression estimates.
20 Further, among approaches to linear outcome models, Imbens (2003), Carnegie, Harada and Hill (2016), and Dorie et al. (2016) all require further assumptions beyond these two parameters, asking users to specify the distribution of the confounder as well as the functional form of the treatment assignment mechanism. Relatedly, the approach taken by Altonji, Elder and Taber (2005) and Oster (2017) employs a sensitivity parameter intended to reflect the relative predictive power of observables and unobservables in the selection (into treatment) process. However, this parameter is more complicated to interpret than it may at first seem, because it implicitly also requires contemplating how the observables and unobservables predict the outcome, as shown in Cinelli and Hazlett (2020).
21 A related approach is the E-value of VanderWeele and Ding (2017), which applies to relative risk estimates.
22 Informal benchmarking approaches such as those advocated in Imbens (2003); Hosman, Hansen and Holland (2010); Dorie et al. (2016); Carnegie, Harada and Hill (2016); Middleton et al. (2016); and Hong, Qin and Yang (2018) aim to build intuition by showing how a confounder “not unlike” an observed covariate, in terms of the strength of its relationships to the treatment and outcome, would alter our conclusions. However, as shown in Cinelli and Hazlett (2020), those approaches can be misleading, principally because even if the confounder is assumed to be orthogonal to the included covariates, the two become dependent when conditioning on the treatment. Frank (2000) largely avoids this concern by not conditioning on the treatment during benchmarking.
23 That t-statistics cannot directly speak to sensitivity can be understood from the formulas, but also by distinguishing statistical concerns from identification problems. The t-statistic can be increased simply by increasing the sample size (with observations drawn from a common distribution), while the degree of confounding is an identification concern, impervious to sample size. An interesting consequence is that if the publication process strongly selects for papers with large-enough t-statistics, without requiring larger t-statistics from papers with larger samples, then published papers with larger samples may actually tend to be more sensitive to confounding. Routinely reporting statistics such as the RV would make readers aware of a result's sensitivity and help counter the (incorrect) tendency to believe that larger sample sizes alone make results more robust against confounding.
Scholars are often concerned with the robustness
of their results, but we currently lack a common standard by which to evaluate robustness.
Summary sensitivity quantities reported in augmented regression tables, as illustrated here,
provide readily interpretable information about one dimension of a result’s fragility – sensitivity
to unobserved confounding. Further, while we have emphasized the use of these tools in cases
where identification opportunities are unsatisfying, these tools are applicable in cases where
investigators have stronger research designs. Suppose, for example, that treatment was meant
to be randomized, but implementation was imperfect, introducing systematic confounding. If
an investigator argues that, given the randomized design as actually implemented, $R^2_{D \sim Z|X}$ almost
certainly falls below some limit, then the contour plots and extreme-scenario plots provide
detailed information about the potential for bias.
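As a hypothetical illustration (all numbers below are illustrative, not drawn from any study), such a limit on $R^2_{D \sim Z|X}$ can be translated into a worst-case bound on the bias by allowing the confounder to explain up to all of the residual outcome variance, which is the logic of the extreme-scenario analysis:

```r
# Worst-case ("extreme scenario") bias when the analyst is willing to cap the
# confounder's partial R2 with the treatment but not its partial R2 with the
# outcome (set here to 1). Inputs are hypothetical; the helper is ours, for illustration.
extreme_bias <- function(se, dof, r2dz_x_max, r2yz_dx = 1) {
  se * sqrt(dof) * sqrt(r2yz_dx * r2dz_x_max / (1 - r2dz_x_max))
}

estimate <- 0.10   # hypothetical point estimate
worst    <- extreme_bias(se = 0.02, dof = 800, r2dz_x_max = 0.05)
c(lower = estimate - worst, upper = estimate + worst)
#> roughly -0.03 to 0.23
```

The contour and extreme-scenario plots present the same information over a grid of sensitivity-parameter values rather than at a single assumed cap.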
Finally, these tools have broader implications for how reviewers and the research community
in general judge both the credibility and value of research projects seeking to make causal
claims from observational data. First, they give reviewers and readers a way of assessing how
susceptible results are to confounding. Second, in place of a generic and qualitative debate about
“any possibility of confounding,” these tools encourage critics to raise concerns about specific confounders that they can argue, in light of the sensitivity results, would be strong enough to make a difference. Third, a sensitivity-based approach may change how we value empirical projects under
challenging identification scenarios. A paper need not be judged by whether it convinced us that
the design leaves zero confounding, but rather by how it informs our understanding of results
under degrees of confounding that may plausibly exist.
References
Altonji, Joseph G, Todd E Elder and Christopher R Taber. 2005. “An evaluation of instrumental
variable strategies for estimating the effects of Catholic schooling.” Journal of Human Resources
40(4):791–821.
Bauer, Michael, Christopher Blattman, Julie Chytilová, Joseph Henrich, Edward Miguel and
Tamar Mitts. 2016. “Can War Foster Cooperation?” Journal of Economic Perspectives
30(3):249–74.
BBC News. 2016. “Colombia referendum: Voters reject FARC peace deal.” BBC News .
URL: https://www.bbc.com/news/world-latin-america-37537252
Blackwell, Matthew. 2013. “A selection bias approach to sensitivity analysis for causal effects.”
Political Analysis 22(2):169–182.
Branton, Regina, Jacqueline Demeritt, Amalia Pulido and James Meernik. 2019. “Violence,
Voting & Peace: Explaining Public Support for the Peace Referendum in Colombia.” Electoral
Studies 61:1–13.
Brumback, Babette A, Miguel A Hernán, Sebastien JPA Haneuse and James M Robins. 2004.
“Sensitivity analyses for unmeasured confounding assuming a marginal structural model for
repeated measures.” Statistics in Medicine 23(5):749–767.
Canetti, Daphna, Julia Elad-Strenger, Iris Lavi, Dana Guy and Daniel Bar-Tal. 2017. “Exposure
to violence, ethos of conflict, and support for compromise: Surveys in Israel, East Jerusalem,
West Bank, and Gaza.” Journal of Conflict Resolution 61(1):84–113.
Carnegie, Nicole Bohme, Masataka Harada and Jennifer L Hill. 2016. “Assessing sensitivity
to unmeasured confounding using a simulated potential confounder.” Journal of Research on
Educational Effectiveness 9(3):395–420.
Chaudoin, Stephen, Jude Hays and Raymond Hicks. 2018. “Do we really know the WTO cures
cancer?” British Journal of Political Science 48(4):903–928.
Cinelli, Carlos and Chad Hazlett. 2019. “sensemakr: sensitivity analysis tools for OLS.” R
package Version 0.1 2.
Cinelli, Carlos and Chad Hazlett. 2020. “Making Sense of Sensitivity: Extending Omitted
Variable Bias.” Journal of the Royal Statistical Society, Series B (Statistical Methodology)
82(1):39–67.
URL: https://doi.org/10.13140/RG.2.2.25518.56644
Cornfield, Jerome, William Haenszel, E Cuyler Hammond, Abraham M Lilienfeld, Michael B
Shimkin and Ernst L Wynder. 1959. “Smoking and lung cancer: recent evidence and a
discussion of some questions.” Journal of the National Cancer Institute (23):173–203.
Dávalos, Eleonora, Leonardo Fabio Morales, Jennifer S. Holmes and Liliana M. Dávalos. 2018.
“Opposition Support and the Experience of Violence Explain Colombian Peace Referendum
Results.” Journal of Politics in Latin America 10(2):99–122.
Dorie, Vincent, Masataka Harada, Nicole Bohme Carnegie and Jennifer Hill. 2016. “A flexible,
interpretable framework for assessing sensitivity to unmeasured confounding.” Statistics in
Medicine 35(20):3453–3470.
Druckman, James, Erik Peterson and Rune Slothuus. 2013. “How Elite Partisan Polarization
Affects Public Opinion Formation.” American Political Science Review 107(1):57–79.
Fabbe, Kristin, Chad Hazlett and Tolga Sınmazdemir. 2019. “A persuasive peace: Syrian
refugees’ attitudes towards compromise and civil war termination.” Journal of Peace Research
56(1):103–117.
Frank, Kenneth A. 2000. “Impact of a confounding variable on a regression coefficient.” Socio-
logical Methods & Research 29(2):147–194.
Frank, Kenneth A, Spiro J Maroulis, Minh Q Duong and Benjamin M Kelcey. 2013. “What
would it take to change an inference? Using Rubin’s causal model to interpret the robustness
of causal inferences.” Educational Evaluation and Policy Analysis 35(4):437–460.
Franks, Alex, Alex D’Amour and Avi Feller. 2019. “Flexible sensitivity analysis for observational
studies without observable implications.” Journal of the American Statistical Association
(just-accepted):1–38.
Gallego, Jorge, Juan D Martínez, Kevin Munger and Mateo Vásquez-Cortés. 2019. “Tweeting
for peace: Experimental evidence from the 2016 Colombian Plebiscite.” Electoral Studies
62:102072.
Grossman, Guy, Devorah Manekin and Dan Miodownik. 2015. “The political legacies of combat:
Attitudes toward war and peace among Israeli ex-combatants.” International Organization
69(4):981–1009.
Guisinger, Alexandra and Elizabeth Saunders. 2017. “Mapping the Boundaries of Elite Cues:
How Elites Shape Mass Opinion across International Issues.” International Studies Quarterly
61(2):425–441.
Hadzic, Dino, David Carlson and Margit Tavits. 2017. “How exposure to violence affects ethnic
voting.” British Journal of Political Science pp. 1–18.
Hazlett, Chad. 2019. “Angry or Weary? How Violence Impacts Attitudes toward Peace among
Darfurian Refugees.” Journal of Conflict Resolution .
URL: https://doi.org/10.13140/RG.2.2.19863.06562
Heckman, James, Hidehiko Ichimura, Jeffrey Smith and Petra Todd. 1998. Characterizing
selection bias using experimental data. Technical report, National Bureau of Economic Research.
Hirsch-Hoefler, Sivan, Daphna Canetti, Carmit Rapaport and Stevan E Hobfoll. 2016. “Conflict
will harden your heart: Exposure to violence, psychological distress, and peace barriers in
Israel and Palestine.” British Journal of Political Science 46(4):845–859.
Hong, Guanglei, Xu Qin and Fan Yang. 2018. “Weighting-Based Sensitivity Analysis in Causal
Mediation Studies.” Journal of Educational and Behavioral Statistics 43(1):32–56.
Hosman, Carrie A, Ben B Hansen and Paul W Holland. 2010. “The Sensitivity of Linear
Regression Coefficients’ Confidence Limits to the Omission of a Confounder.” The Annals of
Applied Statistics pp. 849–870.
Imai, Kosuke, Luke Keele, Teppei Yamamoto et al. 2010. “Identification, inference and sensi-
tivity analysis for causal mediation effects.” Statistical Science 25(1):51–71.
Imbens, Guido W. 2003. “Sensitivity to exogeneity assumptions in program evaluation.” The
American Economic Review 93(2):126–132.
Krause, Dino. 2017. “Who wants peace? The role of exposure to violence in explaining public support for negotiated agreements: A quantitative analysis of the Colombian peace agreement referendum in 2016.” Unpublished Master’s Thesis, Uppsala University.
Levendusky, Matthew. 2010. “Clearer Cues, More Consistent Voters: A Benefit of Elite Polar-
ization.” Political Behavior 32(1):111–131.
Liendo, Nicolás and Jessica Maves Braithwaite. 2018. “Determinants of Colombian Attitudes
toward the Peace Process.” Conflict Management and Peace Science 35(6):622–636.
Lupia, Arthur. 1994. “Shortcuts Versus Encyclopedias: Information and Voting Behavior in
California Insurance Reform Elections.” American Political Science Review 88(1):63–76.
Matanock, Aila and Miguel García-Sánchez. 2017. “The Colombian Paradox: Peace Processes,
Elite Divisions & Popular Plebiscites.” Daedalus 146(4):152–166.
Matanock, Aila and Natalia Garbiras-Díaz. 2018. “Considering Concessions: A Survey Experi-
ment on the Colombian Peace Process.” Conflict Management and Peace Science 35(6):637–
655.
Matanock, Aila, Natalia Garbiras-Díaz and Miguel García-Sánchez. 2018. “Elite Cues and
Endorsement Experiments in Conflict Contexts.” Working Paper, presented at American
Political Science Association Annual Convention, 2018.
Middleton, Joel A, Marc A Scott, Ronli Diakow and Jennifer L Hill. 2016. “Bias amplification
and bias unmasking.” Political Analysis 24(3):307–323.
Moreno, Daniel. 2015. colmaps: Colombian maps. R package version 0.0.0.9515.
Nicholson, Stephen. 2012. “Polarizing Cues.” American Journal of Political Science 56(1):52–66.
Oster, Emily. 2017. “Unobservable selection and coefficient stability: Theory and evidence.”
Journal of Business & Economic Statistics pp. 1–18.
Pearl, Judea. 2009. Causality. Cambridge University Press.
Pechenkina, Anna O. and Laura Gamboa. 2019. “Who Undermines the Peace at the Ballot
Box? The Case of Colombia.” Terrorism and Political Violence pp. 1–21.
Petersen, Roger and Sarah Zuckerman Daly. 2010. Anger, Violence, and Political Science. In International Handbook of Anger: Constituent and Concomitant Biological, Psychological, and Social Processes, ed. Michael Potegal, Gerhard Stemmler and Charles Spielberger. Springer.
R Core Team. 2019. R: A Language and Environment for Statistical Computing. Vienna,
Austria: R Foundation for Statistical Computing.
URL: https://www.R-project.org/
Robins, James M. 1999. “Association, causation, and marginal structural models.” Synthese
121(1):151–179.
Rosenbaum, Paul R. 2002. Observational Studies. New York: Springer.
Rosenbaum, Paul R and Donald B Rubin. 1983. “Assessing sensitivity to an unobserved binary
covariate in an observational study with binary outcome.” Journal of the Royal Statistical
Society. Series B (Methodological) pp. 212–218.
Tellez, Juan Fernando. 2019a. “Peace agreement design and public support for peace: Evidence
from Colombia.” Journal of Peace Research 56(6):827–844.
Tellez, Juan Fernando. 2019b. “Worlds Apart: Conflict Exposure and Preferences for Peace.”
Journal of Conflict Resolution 63(4):1053–1076.
VanderWeele, Tyler J. and Onyebuchi A. Arah. 2011. “Bias formulas for sensitivity analysis of
unmeasured confounding for general outcomes, treatments, and confounders.” Epidemiology
(Cambridge, Mass.) 22(1):42–52.
URL: http://dx.doi.org/10.1097/ede.0b013e3181f74493
VanderWeele, Tyler J and Peng Ding. 2017. “Sensitivity analysis in observational research:
introducing the E-value.” Annals of Internal Medicine 167(4):268–274.
Wasserstein, Ronald L, Nicole A Lazar et al. 2016. “The ASA’s statement on p-values: context,
process, and purpose.” The American Statistician 70(2):129–133.
Weintraub, Michael, Juan Vargas and Thomas Flores. 2015. “Vote Choice and Legacies of
Violence: Evidence from the 2014 Colombian Presidential Elections.” Research and Politics
pp. 1–8.
Zaller, John. 1992. The Nature and Origins of Mass Opinion. Cambridge University Press.