Article

What Is the Probability Your Vote Will Make a Difference?

Authors: Andrew Gelman, Nate Silver, and Aaron Edlin

Abstract

One of the motivations for voting is that one vote can make a difference. In a presidential election, the probability that your vote is decisive is equal to the probability that your state is necessary for an electoral college win, times the probability the vote in your state is tied in that event. We computed these probabilities a week before the 2008 presidential election, using state-by-state election forecasts based on the latest polls. The states where a single vote was most likely to matter are New Mexico, Virginia, New Hampshire, and Colorado, where your vote had an approximate 1 in 10 million chance of determining the national election outcome. On average, a voter in America had a 1 in 60 million chance of being decisive in the presidential election.
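Stated compactly, the decomposition described in the abstract can be written as follows (the symbols below are our shorthand, not notation from the paper):

```latex
% E_s : event that state s's electoral votes are necessary for an Electoral College win
% T_s : event that the popular vote in state s is exactly tied
\[
  \Pr(\text{your vote in state } s \text{ is decisive}) \;=\; \Pr(E_s)\,\Pr(T_s \mid E_s).
\]
```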


... Likewise, the probability of casting the deciding vote in an election is low; in one United States presidential election, the probability was estimated at one in sixty million. Given the low likelihood of casting a decisive ballot, researchers have argued over the efficacy and even rationality of voting. Yet elections certainly influence collective action on problems like environmental pollution and climate change; Green party strength is associated with lower air pollution on a national scale, and US states that vote for greener candidates have lower growth in carbon dioxide emissions. ...
... For instance, the United States Electoral College devalues the voting power of individuals in states far from the national median partisanship. This makes the likelihood of casting a pivotal vote for a Presidential candidate vanishingly small for many voters, and disproportionately likely for others (Table 1). However, in the case of US elections, participation may still be rationalized due to the opportunity to vote for candidates in other races or in ballot initiatives. ...
Article
Full-text available
Elections represent infrequent, high-leverage opportunities for everyday individuals to contribute to climate change mitigation. In this perspective, we present two ways of thinking about the climate impact of voting in elections. The first, “emissions responsibility,” intuitively apportions emissions to voters according to popular principles in carbon accounting and can be calculated for elections where there is a clear difference between major candidates (as in the 2019 Canadian federal election). The second approach, “expected emissions value,” is more probabilistic and can be used to investigate the rationality of participating in an election for climate-motivated voters. Building on these ideas, we discuss the possibility that “political carbon offsets” (donations to pro-climate politicians) could constitute a more effective and more equitable alternative to traditional, voluntary carbon offsets.
... Active political participation is in fact highly valued and is indicative of the importance of democracy at both the country and the global level. Accordingly, each and every vote counts and could be decisive in changing the overall result against or in favor of a candidate and/or a party (Gelman, 2012). ...
... Even if the average American overestimated the probability of pivotality by a factor of 1000, a large collective action problem would remain in that voters would see themselves as deciding outcomes with one chance in 60 thousand rather than one in 60 million (Gelman et al., 2009). ...
Thesis
Full-text available
This thesis by papers uses rational choice theory to consider the relative performance of individual exit and collective voice in politics, as well as the causal relationships between exit and voice as individual strategies and institutionalised means of controlling government behaviour. Following the methodological approach of Geoffrey Brennan and Alan Hamlin, the papers of this thesis are examples of ‘revisionist public choice theory,’ retaining the broad framework of rational choice while relaxing one or more of the standard assumptions generally made by economists. In particular, the papers of this thesis consider other-regarding preferences, non-instrumental preferences, dispositional choice, epistemic rationality, non-efficiency evaluative standards, and non-equilibrium dynamics. By taking a revisionist approach, I am able to steer a path between the excessive abstraction of much public choice theory and the insufficient rigour of much normative political theory. Jointly, the papers of this thesis contribute to broad debates over the relative value of exit and voice in political settings, with relevance to questions of democracy versus the market, centralism versus localism, and bureaucracy versus market-like modes of governance. Though I cover a range of diverse topics in this thesis, I generally argue for a strongly revisionist approach to political analysis which sees significant behavioural differences between individual and collective decisions while grounding all action in common motivational assumptions.
... Examining a model in which the vote is between two alternatives, voters' preferences are drawn from a binomial distribution, and the pertinent probability for that distribution is itself drawn from another distribution, they derive that the probability of an individual being pivotal is on the order of 1/N in large-N elections. Gelman et al. (2012) offer an empirical analysis of the 2008 presidential election and find that the probability of a vote being pivotal (taking into account complexities due to the Electoral College) was approximately 1 in 60,000,000 (the total turnout was roughly 130 million). In a number of states, the probability was less than 1 in 1,000,000,000; at the other end of the spectrum, in a handful of states, the probability was around 1 in 10,000,000. ...
Article
Full-text available
Who will vote quadratically in large-N elections under quadratic voting (QV)? First, who will vote? Although the core QV literature assumes that everyone votes, turnout is endogenous. Drawing on other work, we consider the representativeness of endogenously determined turnout under QV. Second, who will vote quadratically? Conditional on turning out, we examine reasons that, in large-N elections, the number of votes that an individual casts may deviate substantially from that under pure, rational QV equilibrium play. Because turnout itself is driven by other factors, the same determinants may influence how voters who do turn out choose the quantity of votes to cast. Independently, the number of votes actually cast may deviate dramatically from pure QV predictions because of the complex and refined nature of equilibrium play. Most plausibly, voting behavior and outcomes would be determined predominately by social and psychological forces, would exhibit few of the features emphasized in the analysis of hyper-rational equilibrium play, and would have consequential properties that require a different research agenda to bring into focus. Some of our analysis also has implications for voting behavior under other procedures, including one person, one vote.
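For readers unfamiliar with the mechanism discussed in this abstract: under quadratic voting, casting v votes costs v². The toy sketch below (our illustration, which assumes a fixed per-vote pivot probability rather than the paper's equilibrium analysis) shows why "pure, rational" play prescribes vanishingly small vote purchases in large elections:

```python
# Toy quadratic-voting calculation (illustration only; assumes a fixed per-vote
# pivot probability p, which the paper's equilibrium analysis does not).
# A voter with value u for the outcome chooses v to maximize p*u*v - v**2,
# giving the first-order condition p*u - 2v = 0, i.e. v* = p*u/2.

def optimal_votes(value_u: float, pivot_prob_p: float) -> float:
    """Vote purchase maximizing p*u*v - v^2."""
    return pivot_prob_p * value_u / 2.0

# With a $5,000 stake and a 1-in-60-million pivot probability, v* is essentially zero:
print(optimal_votes(5_000.0, 1 / 60_000_000))  # ~4.2e-05 votes
```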
... The winner is determined through the Electoral College, which typically awards all Electoral seats in the state to the candidate with the majority of votes in that state. The winner-take-all nature of the Electoral College gives some citizens a higher chance of being pivotal than others (Gelman et al. 2012). Both George W. Bush in 2000 and Donald Trump in 2016 won the presidency without winning the popular vote. ...
Article
Full-text available
We extend the Feddersen and Sandroni (Am Econ Rev 96(4):1271–1282, 2006) voter turnout model to include partisan districts, a battleground district, and an Electoral College. We find that expected voter turnout by a single party is highest in the battleground district, followed by the party’s majority district, which in turn is followed by the party’s minority district. Total turnout is higher in the battleground district than in the partisan districts, but the gap decreases as the level of disagreement in the partisan districts increases. Lastly, turnout in the battleground district decreases as the partisan districts become more competitive.
... The third line of research is dedicated to applying Big Data methods to event-focused analysis, as well as to producing behavioral and economic information at the individual, group, and network level from digitally collected data. This type of analysis has made it possible to identify patterns and produce explanations and, combined with predictive analyses, has made the forecasting of events possible (Campbell, 2000; Clements & Hendry, 2011; Gelman, 2012; Bollen et al., 2010; Asur & Huberman, 2010; Gayo-Avello, 2012). ...
Article
Full-text available
Contrary to the dominant trend in predictive electoral studies in the digital environment, which focuses on the microblog Twitter as the universe of analysis, this article develops an integrated ecological approach to the positioning of Portuguese public opinion during the campaign for the 2014 European elections. The predictive potential of three categories is tested - party/coalition, head of list, and list - as is the representativeness of the world wide web, considered Big Data that is scientifically valid for Social Sciences research. The first category revealed a higher relative empirical value, and the suitability of social networks for the anticipation of events is partially reinforced. However, further studies are needed, especially at the national level, where research in this area is practically non-existent.
... When b is smaller, convergence is again improved, but not as dramatically. Consider b = 3, roughly the value necessary according to the data of Gelman et al. (2010) to justify voting in a presidential election worth $5000 to a citizen who must spend $10 to vote, perhaps therefore a reasonable ballpark for real-world marginal voters. Inefficiency then decays with N^{-7/5}, much faster than the 1/√N rate predicted by the fully rational model in this case. ...
Article
Full-text available
Lalley and Weyl (Quadratic voting, 2016) propose a mechanism for binary collective decisions, Quadratic Voting (QV), and prove its approximate efficiency in large populations in a stylized environment. They motivate their proposal substantially based on its greater robustness when compared with pre-existing efficient collective decision mechanisms. However, these suggestions are based purely on discussion of structural properties of the mechanism. In this paper, I study these robustness properties quantitatively in an equilibrium model. Given the mathematical challenges with establishing results on QV fully formally, my analysis relies on a number of structural conjectures that have been proven in analogous settings in the literature, but not in the models I consider here. While most of the factors I study reduce the efficiency of QV to some extent, it is reasonably robust to all of them and quite robustly outperforms one-person-one-vote. Collusion and fraud, except on a very large scale, are deterred either by unilateral deviation incentives or by the reactions of non-participants to the possibility of their occurring. I am able to study aggregate uncertainty only for particular parametric distributions, but using the most canonical structures in the literature I find that such uncertainty reduces limiting efficiency, but never by a large magnitude. Voter mistakes or non-instrumental motivations for voting, so long as they are uncorrelated with values, may either enhance or harm efficiency depending on the setting. These findings contrast with existing (approximately) efficient mechanisms, all of which are highly sensitive to at least one of these factors.
... Prediction of election outcomes through sample surveys is usually done in pre-poll and exit-poll scenarios. In recent years, since Nate Silver's prediction of the 2012 US presidential polls (Gelman et al., 2012), many AI-based prediction methods have been used to predict elections in various countries over the past decade. Sentiment analysis using various machine learning algorithms has been used. ...
Conference Paper
Full-text available
Sentiment Analysis is gaining popularity for mood-mapping people using advances in Artificial Intelligence. Many elections have been predicted using texts collected from the microblogging site Twitter. Decision Tree based architectures were tried on the 2019 Indian General Election and 2020 Delhi Election predictions. When a preliminary study was done to predict the Tamil Nadu State Assembly elections held in May 2021, the data collected indicated a need for translingual support for predicting the sentiment of people during the course of the poll campaign. Since this election predominantly revolves around a majority Tamil-speaking heartland, a new architecture was developed and analyzed. The results using the proposed translingual architecture were close to the actual results, and the statistical insights provide evidence that this method outperforms the qualitative and quantitative surveys done by the media for pre-poll and exit-poll prediction.
... And even where these nuances are minimized, the costs of obtaining information are positive while the benefits are small given the limited influence of any single voter on the outcome of elections (see Bohanon and Van Cott 2002, Heckelman 2003, Gelman, Silver, and Edlin 2012). The result is rational ignorance whereby citizens fail to obtain the requisite information to punish or reward elected officials. ...
... A key aim of this spending is to influence the outcome of elections by encouraging voters of the same party to turn out and to elicit votes from uncertain voters. However, some voters have a much larger probability of influencing an election than others (Chamberlain, Rothschild et al. 1981; Gelman, Katz and Bafumi 2004; Gelman, Silver and Edlin 2012). To ensure that spending maximizes electoral success, it is therefore critical that resources are allocated efficiently. ...
Article
Full-text available
How should political parties allocate resources in U.S. House elections? Are actual spending strategies optimal? This paper answers these questions by using Bayesian election forecasts to estimate a probabilistic voting model. The model provides real-time estimates of the marginal value of additional resources in a district during a campaign and can be used to compare actual spending patterns to the amount that should have been spent according to the model. The correlation between observed and optimal spending is over 0.5 in each non-redistricting year from 2000 to 2010 and observed spending patterns respond to new polls during a campaign. The correlations are consistent across different types of campaign donors including the Democratic Congressional Campaign Committee and the National Republican Congressional Committee, various political action committees, and individuals. There is also evidence that spending is based on maximizing total seats rather than the probability of winning a majority of seats.
... 2 Estimated probabilities of pivotality are extremely low in large elections. For example, Gelman et al. (2012) estimate for the 2008 US presidential election that the empirical probability with which a single vote is pivotal (i.e., changes the election outcome) is, on average, 1 in 60 million. The game-theoretic approach of Myerson (2000) implies even lower estimates of pivot probabilities in large elections (about 1 in 8 billion). ...
Chapter
Full-text available
Standard economic reasoning assumes that people vote instrumentally, i.e., that the sole motivation to vote is to influence the outcome of an election. In contrast, voting is expressive if voters derive utility from the very act of expressing support for one of the options by voting for it, and this utility is independent of whether the vote affects the outcome. This paper surveys experimental tests of expressive voting with a particular focus on the low-cost theory of expressive voting. The evidence for the low-cost theory of expressive voting is mixed.
... In more realistic settings - where the individual recognizes that some other voters are committed to supporting each candidate, where abstention is possible, where elections are structured by constituencies or an electoral college - these results are modified, but the substantial conclusion that the probability of being decisive is extremely small is maintained. For early analysis and calculation on this basis see Beck (1975) and Chamberlain and Rothschild (1981); for detailed discussion see Brennan and Lomasky (1993, chapter 4). For example, in a more empirically informed calculation using a state-by-state methodology based on data from contemporaneous opinion polls, Gelman et al. (2012) calculate that the probability of being decisive for an average voter in the 2008 US Presidential election was approximately 1 in 60 million. Usher (2014) argues that the standard approach to calculating the probability of being decisive based on randomization at the level of the individual significantly underestimates the probability of being decisive and that we should calculate on the basis of randomization at the level of the whole electorate. ...
Chapter
Full-text available
... 153). However, when we look at the likelihood that an individual voter in any given state will be pivotal (e.g., using game-theoretic indices of pivotality such as the Banzhaf index (Banzhaf 1965) or the Shapley-Shubik value (Shapley and Shubik 1954; see also Mann and Shapley 1962)), it has been recognized as far back as Owen (1975) that these two effects - greater large-state pivotality and small-state overrepresentation relative to population - tend in opposite directions, making the a priori "power" scores of individual votes to influence EC outcomes much more similar across states than one might think (see Gelman, Silver, and Edlin 2012; cf. discussion in Grofman and Feld 2005; Strömberg 2008). As noted above, the academic and journalistic community has its skeptics about electoral college reform, with those in opposition to change noting, among other things, that proposed remedies have unknown qualities and are unlikely to cure problems such as a campaign focus on the larger states, and may bring new problems with them, e.g. ...
Article
Full-text available
Objectives: We offer a typology of possible reforms to the Electoral College (EC) in terms of changes to its two most important structural features: seat allocations that are not directly proportional to population and winner‐take‐all outcomes at the state level. This typology allows us to classify four major variants of “reform” to the present EC in a parsimonious fashion. Many of the proposals we consider have been suggested by well‐known figures, some debated in Congress, and they include what we view as most likely to be taken seriously. We evaluate these proposals solely in terms of one simple criterion: “Would they be expected to reduce the likelihood of inversions between EC and popular vote outcomes?”

Methods: We answer this question by looking at the data on actual presidential election outcomes at the state level over the entire period 1868–2016, and at the congressional‐district level over the period 1956–2016. We consider the implications for presidential outcomes of these different alternative mechanisms, in comparison to the actual electoral outcome and the popular vote outcome. In addition, we consider the implications of a proposal to increase the size of the U.S. House (Ladewig and Jasinski, 2008).

Results: Our results show that inversions from the popular vote happen under all proposed alternatives at nearly the same rate as under the current EC rules, with some proposals actually making inversions more frequent.

Conclusions: The major difference between the present EC rule and alternative rules is not in frequency of inversions, but is in which particular years the inversions occur. As for the proposal to increase the size of the House, we show that any realistic increase in House size would have made no difference for the 2016 outcome.
... The intuition behind the rational ignorance model is simple: Since the probability of casting a decisive vote in a real-world election is approximately zero, decision costs will dominate the voting calculus. For example, Gelman et al. (2009) estimate that the average voter in the 2008 United States presidential election had a one in 60 million chance of deciding the outcome. A voter who valued the electoral result at $1 million would be ...
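The arithmetic implied by this excerpt can be made explicit; the sketch below simply multiplies the two figures quoted above (a back-of-the-envelope illustration, not a model from the cited paper):

```python
# Expected instrumental benefit of voting, using the figures quoted in the
# excerpt above (illustrative only).
value_of_outcome = 1_000_000     # the voter values the electoral result at $1 million
p_decisive = 1 / 60_000_000      # average 2008 pivot probability per Gelman et al. (2009)

expected_benefit = value_of_outcome * p_decisive
print(f"${expected_benefit:.3f}")  # about $0.017, easily swamped by decision costs
```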
Article
Full-text available
Rational ignorance and related models of voter choice have been accused of psychological implausibility or even incoherence. Although such models run counter to folk psychological understandings of choice, this paper argues that they are consistent with widely-accepted dual process theories of cognition. Specifically, I suggest that political ignorance can be explained via a “default-interventionist” account in which a biased intuitive subsystem produces automatic responses which are overridden by rational reflection when the prospective costs of error are significant. This is consistent with rational ignorance and related theories of political ignorance and bias. Providing stronger psychological foundations for rational ignorance also suggests new ways in which the theory might be developed to increase its predictive, analytic, and evaluative power.
... Third, instead of having to rely on only one probability of voter decisiveness every 4 years, the Electoral College creates the ... (see Table 7.1 and Brennan and Lomasky's (1993, p. 119) discussion). Gelman et al. (2012) consider the Electoral College as an influence on the probability of voter decisiveness in an article on a prediction they made about the 2008 US presidential election. That article predicted far smaller differences between groups of states in the probability of voter decisiveness than I find from ex post evidence. ...
Article
Full-text available
When considering the probability of voters being decisive in presidential elections, public choice economists typically proceed as if the probability can reasonably be approximated by assuming that the winner is determined by simple majority vote. It is well known the Electoral College can cause presidential candidates to lose elections despite winning the popular vote (an election inversion). But the Electoral College’s ability to increase the probability of some voters being decisive effectively has been ignored, despite such increases having occurred in 10 of the last 49 presidential elections. By examining the influence of the Electoral College in most of the presidential elections from 1824 to 2016, I explain and give examples of how the probabilities of some voters being decisive were elevated above what they would have been under majority rule, with the increase being truly astounding in several cases. The examples do not weaken the importance of expressive voting, but our understanding of such voting is improved by considering the probability effects of the Electoral College on voter decisiveness.
... The weighting implications of ensuring an equal say for all voters in a two-tier system were first formally considered by Lionel S. Penrose in 1946. The institutional design of a ... Combining US voter figures with poll data, Gelman et al. (2012) estimated the chances of a single vote being decisive in the 2008 presidential elections as about 1 in 60 million, but up to 1 in 10 million in some small and midsize states that were near the national median politically. Persuading 500 people in, e.g., New ...
Article
Full-text available
Which voting weights ought to be allocated to single delegates of differently sized groups from a democratic fairness perspective? We operationalize the ‘one person, one vote’ principle by demanding every individual’s influence on collective decisions to be equal a priori. The analysis differs from previous ones by considering intervals of alternatives. New reasons lead to an old conclusion: weights should be proportional to the square root of constituency sizes if voter preferences are independent and identically distributed. This finding is fragile, however, in that preference polarization along constituency lines quickly calls for plain proportionality.
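The square-root conclusion in this abstract rests on a standard fact about i.i.d. voting: within a constituency of n voters flipping fair coins, the chance that one vote is pivotal falls off like 1/√n, so delegate weights proportional to √n roughly equalize each citizen's a priori influence. A minimal numerical check of that scaling (our own illustration, not code from the paper):

```python
# Probability that one vote is pivotal among n i.i.d. fair-coin voters,
# compared with the asymptotic sqrt(2/(pi*n)) form underlying the
# square-root rule. (Illustration only.)
from math import lgamma, log, exp, sqrt, pi

def pivot_prob(n_others: int) -> float:
    """P(exact even split) among n_others voters, via log-gamma to avoid overflow."""
    n = n_others - (n_others % 2)          # an even split needs an even count
    log_p = lgamma(n + 1) - 2 * lgamma(n / 2 + 1) - n * log(2)
    return exp(log_p)

for n in (1_000, 100_000, 10_000_000):
    print(n, pivot_prob(n), sqrt(2 / (pi * n)))   # the two columns agree closely
```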
... The latter is more general in the sense that agents' "noises" are not independent. There is also a line of empirical and mixed empirical-theoretical work on the likelihood of ties under the US electoral college system [GKB98, GSE12]. Studying the smoothed likelihood of ties under these settings is left for future work. ...
Preprint
Understanding the likelihood for an election to be tied is a classical topic in many disciplines including social choice, game theory, political science, and public choice. The problem is important not only as a fundamental problem in probability theory and statistics, but also because of its critical roles in many other important issues such as indecisiveness of voting, strategic voting, privacy of voting, voting power, voter turnout, etc. Despite a large body of literature and the common belief that ties are rare, little is known about how rare ties are in large elections except for a few simple positional scoring rules under the i.i.d. uniform distribution over the votes, known as the Impartial Culture (IC) in social choice. In particular, little progress was made after Marchant [Mar01] explicitly posed the likelihood of k-way ties under IC as an open question in 2001. We give an asymptotic answer to the open question for a wide range of commonly-studied voting rules under a model that is much more general and realistic than i.i.d. models including IC--the smoothed social choice framework [Xia20], which was inspired by the celebrated smoothed complexity analysis [ST09]. We prove dichotomy theorems on the smoothed likelihood of ties under a large class of voting rules. Our main technical tool is an improved dichotomous characterization on the smoothed likelihood for a Poisson multinomial variable to be in a polyhedron, which is proved by exploring the interplay between the V-representation and the matrix representation of polyhedra and might be of independent interest.
... At the same time, evidence in support of rational behavior also runs up against opposing observations. If the electorate is 1 million people, then in a choice between two candidates with p = 0.5 the probability of being the decisive voter is 1 in 1,250; if p = 0.49, then with the same electorate size this probability becomes 1 in 10^90 (Hamlin, Jennings, 2019, p. 336). A calculation of the probability of being the decisive voter in the 2008 US presidential election showed that it equals 1 in 60 million (Gelman, Silver, Edlin, 2012). ...
Article
The article examines the evolution of the analysis of voters’ behavior when searching for an answer to the question: Why does a voter vote? It is shown how the approach to the voter as a rational egoistic investor gave rise to what is commonly called the “voter’s paradox” in political and economic theory. Further search was aimed at explaining this paradox. On the one hand, the concept of an expressive voter appears, who expresses himself through participation in elections; on the other hand, we are talking about an altruistic voter who overcomes egoism. The latest theoretical finding was the explanation of participation in voting by appealing to “relational goods,” which differ in their qualities from both public and private goods. With this approach, the “voter’s paradox” finds its most consistent solution. And it is in this approach that the shift from methodological individualism to institutional individualism is most clearly manifested. The authors of the article highlight this shift as a new trend in explaining the reasons for voting. At the same time, it is argued that the considered conceptual diversity is a reflection of the multidimensional features of human nature, and it is this fact that gives rise to the ambiguity and contradiction of experimental results.
... Costs are defined broadly, and may include time, effort, resources, money, discomfort, and physical harm. For example, it is cooperative to pay the full amount due on your taxes although the likelihood of being caught for underreporting income is low (Spicer & Thomas, 1982), to vote although your individual ballot will not change the election's outcome (Gelman, Silver, & Edlin, 2012), to join the military and put yourself at risk to protect your country, and to volunteer your time at (and donate your money to) charitable organizations. At a more personal level, it is also cooperative to lend money to a friend, to help your neighbors move into their house, to do your fair share on a group project in the workplace, and to collaborate in good faith with someone, although you have different beliefs or opinions. ...
Article
Full-text available
How can we maximize the common good? This is a central organizing question of public policy design, across political parties and ideologies. The answer typically involves the provisioning of public goods such as fresh air, national defense, and knowledge. Public goods are costly to produce but benefit everyone, thus creating a social dilemma: Individual and collective interests are in tension. Although individuals may want a public good to be produced, they typically would prefer not to be the ones who have to pay for it. Understanding how to motivate individuals to pay these costs is therefore of great importance for policy makers. Research provides advice on how to promote this type of “cooperative” behavior. Synthesizing a large body of research demonstrates the power of “reciprocity” for inducing cooperation: When others know that you have helped them, or acted to benefit the greater good, they are often more likely to reciprocate and help you in turn. Several conclusions stem from this line of thinking: People will be more likely to do their part when their actions are observable by others; people will pay more attention to how effective those actions are when efficacy is also observable; people will try to avoid situations where they could help, but often will help if asked directly; people are more likely to cooperate if they think others are also cooperating; and people can develop habits of cooperation that shape their default inclinations.
... The ability of privileged interests to exert influence on policy is exacerbated by limitations on voting as a means of expressing individual preferences and rewarding, or punishing, elected officials (Buchanan, 1954b; Miller III, 1999). Because the probability of any single vote influencing the outcome of a political election is typically minuscule, voters face weak incentives to obtain detailed political information about their representatives, which is necessary for holding them accountable (see Bohanon and Van Cott, 2002; Brennan, 2016; Downs, 1957; Gelman et al., 2012; Heckelman, 2003; Somin, 2013). ...
Article
How can public policy best deal with infectious disease? In answering this question, scholarship on the optimal control of infectious disease adopts the model of a benevolent social planner who maximizes social welfare. This approach, which treats the social health planner as a unitary “public health brain” standing outside of society, removes the policymaking process from economic analysis. This paper opens the black box of the social health planner by extending the tools of economics to the policymaking process itself. We explore the nature of the economic problem facing policymakers and the epistemic constraints they face in trying to solve that problem. Additionally, we analyze the incentives facing policymakers in their efforts to address infectious diseases and consider how they affect the design and implementation of public health policy. Finally, we consider how unanticipated system effects emerge due to interventions in complex systems, and how these effects can undermine well‐intentioned efforts to improve human welfare. We illustrate the various dynamics of the political economy of state responses to infectious disease by drawing on a range of examples from the COVID‐19 pandemic.
... See Barro (1973), Besley (2006), Congleton (2007), Ferejohn (1986), Fukuyama (2011), Levison and Sachs (2015), Maravall (2003), North et al. (2009), Petracca (1996), Tilly (2004), and Voigt (1999). For the public choice problems inherent in electoral representation, see Acemoglu and Robinson (2006), Acemoglu et al. (2013), Achen and Bartels (2016), Bardhan (1997), Besley (2006, Ch. 2), Brennan (2011), Buchanan and Tullock (1962 [1999]), Caplan (2007), Downs (1957), Drazen (2000, pp. 23-30), Holcombe and Gwartney (1989, pp. 669-70), Gelman et al. (2009), Mueller (2003), Olson (1982), Sappington (1991), Somin (2013), and Vanberg and Buchanan (2001). For a critique of these public choice problems, and a defense of electoral representation, see Dahl (1989), Udehn (1996), and Wittman (1995). ...
Article
Full-text available
This paper provides a theoretical explanation for the adoption of turn-taking in office. Turn-taking in office is where two or more individuals are elected to serve individual terms for the same public office, with the exclusive right to exercise the public office rotating among those elected individuals at intervals shorter than the term. Turn-taking enables the benefits of shorter tenures to be realized without incurring, to the same extent, the costs associated with setting an equivalently shortened term and term limit regime. Turn-taking would be most likely to emerge among a factional electorate in order to generate support for shared governance institutions. A case study of three high-level public offices in the Republic of Venice provides evidence of the operation of turn-taking. The tripartite presidency of Bosnia and Herzegovina provides a modern-day example of some aspects of office turn-taking in operation.
... But note that any attempt by an individual to further such an arrangement is a mere drop in the ocean. To take the simplest case, voting for the more egalitarian candidate has between a 1 in 10 million chance and a 1 in a billion chance of changing the outcome of the presidential election (Gelman et al. 2012). In other words, your vote for the higher-tax candidate is a drop in the ocean just as your donation is. ...
Article
Full-text available
G.A. Cohen famously claims that egalitarians shouldn’t be so rich. If you possess excess income and there is little chance that the state will redistribute it to the poor, you are obligated to donate it yourself. We argue that this conclusion is correct, but that the case against the rich egalitarian is significantly stronger than the one Cohen offers. In particular, the standard arguments against donating one’s excess income face two critical, unrecognized problems. First, we show that these arguments imply that citizens have no duty to further egalitarian political institutions—a conclusion that Cohen’s Rawlsian opponents cannot abide. Second, these arguments yield unacceptable implications for other questions of justice. We conclude that even moderately rich egalitarians are obligated to donate their excess income.
... Given enough such information, voting can be a CAP. For example, in 2008 the odds that one voter in the swing state of Colorado would make a difference to the presidential election were about 1 in 10 million, but in states dominated by one party, such as California or Georgia, the odds were 1 in 10 billion (Gelman et al. 2012). If the harm of electing the wrong candidate were equivalent to billions of dollars lost, the expected utility of voting would be relatively high in Colorado and relatively low in California or Georgia. ...
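Using the figures quoted in this excerpt, the swing-state versus safe-state contrast can be made concrete (the dollar value of the harm below is purely illustrative):

```python
# Expected value of a vote in a swing state versus a safe state, using the
# 2008 pivot probabilities quoted in the excerpt above (harm figure is made up).
harm_avoided = 10_000_000_000        # suppose the worse outcome costs $10 billion
p_colorado   = 1 / 10_000_000        # swing-state pivot probability
p_california = 1 / 10_000_000_000    # safe-state pivot probability

print("Colorado:   $", harm_avoided * p_colorado)    # $1,000 expected value
print("California: $", harm_avoided * p_california)  # $1 expected value
```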
Article
Full-text available
Enormous harms, such as climate change, often occur as the result of large numbers of individuals acting separately. In collective action problems, an individual has so little chance of making a difference to these harms that changing their behavior has insignificant expected utility. Even so, it is intuitive that individuals in many collective action problems should not be parts of groups that cause these great harms. This paper gives an account of when we do and do not have obligations to change our behavior in collective action problems. It also addresses a question insufficiently explored in the literature on this topic: when obligations arising out of collective action problems conflict with other obligations, what should we do? The paper explains how to adjudicate conflicts involving two collective action problems and conflicts involving collective action problems and other sorts of obligations.
Article
We have noticed a pattern of arguments that exhibit a type of irrationality or a particular informal logical fallacy that is not fully captured by any existing fallacy. This fallacy can be explored through three examples where one misattributes a cause by focusing on a smaller portion of a larger set—specifically, the last or least known—and claiming that that cause holds a unique priority over other contributing factors for the occurrence of an event. We propose to call this fallacy the “last straw fallacy” and will argue why these examples actually warrant a new logical name. Finally, we will show how these cases point to a deeper insight about the contexts in which we typically invoke this type of reasoning and some significant harmful consequences of doing so.
Article
Ilya Somin's Democracy and Political Ignorance represents a missed opportunity to fully examine the implications of public ignorance in modern democracies. Somin persuasively argues that existing levels of public ignorance undermine the main normative accounts of democratic legitimacy, and he demonstrates that neither cognitive shortcuts nor heuristics can provide a quick fix for democracy. However, Somin seeks to find a simple explanation for public ignorance in the conscious, rational choices of voters. He thus commits to the position that voters choose to be ignorant and irrational—and to the simplistic implication that given the right incentives they would choose otherwise. This position is empirically problematic, methodologically flawed, and theoretically redundant. On the more plausible view that ignorance is the inadvertent result of social complexity, it is clear that simply focusing on incentives tells us little about what voters would or would not know under different institutional circumstances.
Chapter
Why do we think it’s wrong to treat people merely as a means to end? Why do we consider lies of omission less immoral than lies of commission? Why do we consider it good to give, regardless of whether the gift is effective? We use four simple game theoretic models—the Coordination Game, the Hawk–Dove game, Repeated Prisoner’s Dilemma, and the Envelope Game—to shed light on these and other puzzling aspects of human morality. We also justify the use of game theory for the study of morality and explore implications for group selection and moral realism.
Article
US political reporting has become extraordinarily rich in polling data. However, this increase in information availability has not been matched by an improvement in the accuracy of poll-based news stories, which usually examine a single survey at a time, rather than providing an aggregated, more accurate view. In 2004, I developed a meta-analysis that reduced the polling noise for the Presidential race by reducing all available state polls to a snapshot at a single time, known as the Electoral Vote estimator. Assuming that Presidential pollsters are accurate in the aggregate, the snapshot has an accuracy equivalent to less than 0.5% in the national popular-vote margin. The estimator outperforms both the aggregator FiveThirtyEight and the betting market InTrade. Complex models, which adjust individual polls and employ pre-campaign “fundamental” variables, improve the accuracy in individual states but provide little or no advantage in overall performance, while at the same time reducing transparency. A polls-only snapshot can also identify shifts in the race, with a time resolution of a single day, thus assisting in the identification of discrete events that influence a race. Finally, starting at around Memorial Day, variations in the polling snapshot over time are sufficient to enable the production of a high-quality, random-drift-based prediction without a need for the fundamentals that are traditionally used by political science models. In summary, the use of polls by themselves can capture the detailed dynamics of Presidential races and make predictions. Taken together, these qualities make the meta-analysis a sensitive indicator of the ups and downs of a national campaign—in short, a precise electoral thermometer.
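A heavily simplified sketch of a polls-only snapshot in the spirit of the meta-analysis described above (our own construction with made-up inputs; the actual estimator uses the median of the full electoral-vote distribution rather than the expectation computed here):

```python
# Simplified polls-only snapshot: median recent margin per state, mapped to a
# win probability under an assumed normal polling error, then aggregated to an
# expected electoral-vote count. (Sketch only; not the author's estimator.)
from statistics import median, NormalDist

def state_win_prob(poll_margins, sigma=3.0):
    """Win probability given recent poll margins (in points) and an assumed error sd."""
    return 1.0 - NormalDist(mu=median(poll_margins), sigma=sigma).cdf(0.0)

# Made-up example inputs: state -> (electoral votes, recent poll margins)
states = {"CO": (9, [2.0, 3.5, 1.0]), "VA": (13, [0.5, -1.0, 2.0]), "TX": (38, [-8.0, -6.5])}
expected_ev = sum(ev * state_win_prob(margins) for ev, margins in states.values())
print(round(expected_ev, 1))   # expected electoral votes for the leading candidate
```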
Article
Many forms of political communication are thought to be expressive rather than instrumental. We present evidence suggesting the presence of a perceived instrumental benefit of individual political communication. Subjects may send a sometimes costly comment on another person's choice of how to donate to two rival political groups. Subjects who may comment before the choice is made--when they may have some persuasive impact--are more than twice as likely to comment as those who may only send a message after the decision. When the timing of the messages—pre- or post-decision—remains fixed, but the experimenter alone receives the message, there is no difference in the proportion of messages sent pre- and post-decision. Moreover, most of the comments made to the decision-maker prior to the decision use an imperative construction, while few do so after the decision is made or when writing to the experimenter alone. Taken together, these results indicate that political expression is both expressive and instrumental.
Article
Debates over the relationship between exit and voice in politics have focused on the quantity of citizen voice and its effectiveness in influencing public decisions. The epistemic quality of voice, on the other hand, has received much less attention. This article uses rational choice theory to argue that public sector exit options can lead to more informed and less biased expressions of voice. Whereas voters have weak incentives to gather and process information, exit options provide sharper epistemic incentives to produce knowledge which can spill over into voting decisions. Exit can thus improve democratic competence.
Article
The democratic egalitarian ideal requires that everyone should enjoy equal power over the world through voting. If it is improper to vote twice in the same election, why should it be permissible for dual citizens to vote in two different places? Several possible excuses are considered and rejected.
Chapter
Young citizens in modern liberal democratic societies are subject to various limitations on their rights and responsibilities that other citizens are exempt from. In particular, their criminal liability is lessened comparative to other citizens, and their entitlement to make medical and political decisions is reduced. In each of these domains, the justification for the differential treatment of the young is their incapacity. However, the time and methods with which capacity is attributed to young people differ between the medical, criminal and political domains. I argue that modern liberal democratic states owe to young citizens a consistent recognition of their capacity for autonomous decision-making, and that this recognition requires the legal status of young citizens to be updated and standardized over the domains under consideration. This requirement is not commonly satisfied by democratic societies, as the way in which their capacities are judged is inconsistent between the three domains under consideration.
Preprint
Full-text available
The notion of belief likelihood function of repeated trials is introduced, whenever the uncertainty for individual trials is encoded by a belief measure (a finite random set). This generalises the traditional likelihood function, and provides a natural setting for belief inference from statistical data. Factorisation results are proven for the case in which conjunctive or disjunctive combination are employed, leading to analytical expressions for the lower and upper likelihoods of `sharp' samples in the case of Bernoulli trials, and to the formulation of a generalised logistic regression framework.
What role do whistleblowers play in democratic politics? This paper answers this question by analyzing the political economy of whistleblowing within democratic political institutions. Democratic politics is characterized by numerous principal-agent problems creating significant space for opportunism. Whistleblowers help to resolve these principal-agent problems through the revelation of information regarding abuses of power. These revelations can take place internally, by taking advantage of channels to report abuse, or externally, by publicly revealing information. The latter is especially important where internal mechanisms for reporting opportunism are lacking. Whistleblowing in the US national security state is presented to illustrate this logic.
Chapter
Full-text available
Voting and other forms of political participation are not, from a cost-benefit analysis, in the self-interest of individuals. To overcome this problem, the homo economicus model has been modified by three alternative assumptions: weak altruism, civic duty, and/or expressive behavior. I argue that only weak altruism is consistent with the observed facts of behavior: strategic voting, increased turnout in close elections, the acquisition of political information, campaign contributions, and contributing to public interest groups.
Article
We develop a model in which costly voting in a large, two‐party election is a sequentially rational choice of strategic, self‐interested players who can reward fellow voters by forming stronger ties in a network formation coordination game. The predictions match a variety of stylized facts, including explaining why an individual's voting behavior may depend on what she knows about her friends' actions. Players have imperfect information about others' voting behavior, and we find that some degree of privacy may be necessary for voting in equilibrium, enabling hypocritical but useful social pressure. Our framework applies to any costly prosocial behavior.
Article
Full-text available
This article discusses some ethical questions raised by multiple citizenship and, more generally, citizenship as we know it. Despite a richness of legal and sociological discussions of multiple citizenship, purely ethical inquiry into multiple citizenship is still in its infancy. The aim here is not to provide a literature review of the further‐flung scholarship on this topic, but rather to point out that multiple citizenship is a topic worthy of specifically philosophical inquiry, and to show how it relates to existing debates within political philosophy.
Research
Since 2000, ten states have enacted strict voter identification laws, which require that voters show identification in order for their votes to count. While proponents argue these laws prevent voter fraud and protect the integrity of elections, opponents argue they disenfranchise low-income and minority voters. In this paper, we document the extent to which these laws can affect voter turnout and election outcomes. We do so using historical data on more than 2,000 races in Florida and Michigan, which both allow and track ballots cast without identification. Results indicate that at most only 0.10% and 0.31% of total votes cast in each state were cast without IDs. Thus, even under the extreme assumption that all voters without IDs were either fraudulent or would be disenfranchised by a strict law, the enactment of such a law would have only a very small effect on turnout. Similarly, we also show under a range of conservative assumptions that very few election results could have been flipped due to a strict law. Collectively, our findings indicate that even if the worst fears of proponents or critics were true, strict identification laws are unlikely to have a meaningful impact on turnout or election outcomes.
Chapter
This paper is dedicated to the measurement of (or lack of) electoral justice in the 2010 Electoral College using a methodology based on the expected influence of the vote of each citizen for three probability models. Our first contribution is to revisit and reproduce the results obtained by Owen (1975) for the 1960 and 1970 Electoral College. His work displays an intriguing coincidence between the conclusions drawn, respectively, from the Banzhaf and Shapley-Shubik probability models. Both probability models point to a violation of electoral justice at the expense of small states. Our second contribution is to demonstrate that this conclusion is completely flipped upside down when we use May’s probability model: this model leads instead to a violation of electoral justice at the expense of large states. Besides unifying disparate approaches through a common measurement methodology, one main lesson of the paper is that the conclusions are sensitive to the probability models which are used and in particular to the type and magnitude of correlation between voters that they carry.
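As a concrete reference point for the indices this chapter compares, here is a minimal brute-force Banzhaf computation for a toy weighted voting game (our illustration, not the chapter's methodology):

```python
# Brute-force normalized Banzhaf index for a small weighted voting game.
from itertools import combinations

def banzhaf(weights, quota):
    """Share of all 'swings' held by each player (normalized Banzhaf index)."""
    n = len(weights)
    swings = [0] * n
    for r in range(1, n + 1):
        for coalition in combinations(range(n), r):
            total = sum(weights[i] for i in coalition)
            for i in coalition:
                # Player i swings if the coalition wins with i but loses without i.
                if total >= quota and total - weights[i] < quota:
                    swings[i] += 1
    total_swings = sum(swings)
    return [s / total_swings for s in swings]

# Despite unequal weights, all three players here have equal a priori power.
print(banzhaf([3, 2, 2], quota=4))   # [0.333..., 0.333..., 0.333...]
```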
Chapter
Full-text available
Why should we refrain from doing things that, taken collectively, are environmentally destructive, if our individual acts seem almost certain to make no difference? According to the expected consequences approach, we should refrain from doing these things because our individual acts have small risks of causing great harm, which outweigh the expected benefits of performing them. Several authors have argued convincingly that this provides a plausible account of our moral reasons to do things like vote for policies that will reduce our countries’ greenhouse gas emissions, adopt plant-based diets, and otherwise reduce our individual emissions. But this approach has recently been challenged by authors like Bernward Gesang and Julia Nefsky. Gesang contends that it may be genuinely impossible for our individual emissions to make a morally relevant difference. Nefsky argues more generally that the expected consequences approach cannot adequately explain our reasons not to do things if there is no precise fact of the matter about whether their outcomes are harmful. In the following chapter, author Howard Nye defends the expected consequences approach against these objections. Nye contends that Gesang has shown at most that our emissions could have metaphysically indeterministic effects that lack precise objective chances. He argues, moreover, that the expected consequences approach can draw upon existing extensions to cases of indeterminism and imprecise probabilities to deliver the result that we have the same moral reasons to reduce our emissions in Gesang’s scenario as in deterministic scenarios. Nye also shows how the expected consequences approach can draw upon these extensions to handle Nefsky’s concern about the absence of precise facts concerning whether the outcomes of certain acts are harmful. The author concludes that the expected consequences approach provides a fully adequate account of our moral reasons to take both political and personal action to reduce our ecological footprints.
Article
Will Douglas VanDerwerken change the world? He becomes converted to voting — and works out the odds that his vote will be the crucial all-important pivotal decider that determines who becomes President.
Article
Full-text available
On the eve of the election, the impending result of the presidential vote can be seen fairly clearly from trial-heat polls. Earlier in the election year, the polls offer much less information about what will happen on Election Day (see Campbell 2008; Wlezien and Erikson 2002). The polls capture preferences to the moment and do not—because they cannot—anticipate how preferences will evolve in the future, as the campaign unfolds. Various things ultimately impact the final vote. The standing of the sitting president is important. The economy is too. Both can change as the election cycle evolves. To make matters worse, late-arriving economic shocks have a bigger impact on the electoral verdict than those that arrive earlier. This complicates accurately forecasting the vote well in advance.
Article
Full-text available
Voting power indexes such as that of Banzhaf are derived, explicitly or implicitly, from the assumption that all votes are equally likely (i.e., random voting). That assumption implies that the probability of a vote being decisive in a jurisdiction with n voters is proportional to 1/√n. In this article the authors show how this hypothesis has been empirically tested and rejected using data from various US and European elections. They find that the probability of a decisive vote is approximately proportional to 1/n. The random voting model (and, more generally, the square-root rule) overestimates the probability of close elections in larger jurisdictions. As a result, classical voting power indexes make voters in large jurisdictions appear more powerful than they really are. The most important political implication of their result is that proportionally weighted voting systems (that is, each jurisdiction gets a number of votes proportional to n) are basically fair. This contradicts the claim in the voting power literature that weights should be approximately proportional to √n.
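A minimal sketch of the scaling contrast, assuming a binomial random-voting model: with n other voters each supporting either side with probability 1/2, the chance of an exact tie falls off like 1/√n, whereas the empirical finding summarized above is closer to 1/n. The 1/n calibration constant below is purely illustrative.

```python
import math

def tie_prob_random_voting(n):
    """P(exact tie) among n other voters under the random-voting (p = 1/2)
    binomial model, computed in log space; n is assumed even.
    Stirling's approximation gives roughly sqrt(2 / (pi * n))."""
    log_p = math.lgamma(n + 1) - 2 * math.lgamma(n // 2 + 1) + n * math.log(0.5)
    return math.exp(log_p)

for n in [1_000, 10_000, 100_000, 1_000_000]:
    exact = tie_prob_random_voting(n)
    stirling = math.sqrt(2 / (math.pi * n))   # the 1/sqrt(n) scaling
    one_over_n = 10.0 / n                     # illustrative 1/n scaling (constant made up)
    print(f"n={n:>9,}  binomial tie={exact:.2e}  "
          f"sqrt(2/(pi n))={stirling:.2e}  c/n={one_over_n:.2e}")
```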
Article
Full-text available
This paper applies a game-theoretic model of participation under uncertainty to investigate the negative relationship between constituency size and voter turnout rates: the constituency size effect. We find that this theoretical model accounts for almost all of the variation in turnout due to size in cross-sectional data from school budget referenda.
Article
Full-text available
Researchers sometimes argue that statisticians have little to contribute when few realizations of the process being estimated are observed. We show that this argument is incorrect even in the extreme situation of estimating the probabilities of events so rare that they have never occurred. We show how statistical forecasting models allow us to use empirical data to improve inferences about the probabilities of these events. Our application is estimating the probability that your vote will be decisive in a U.S. presidential election, a problem that has been studied by political scientists for more than two decades. The exact value of this probability is of only minor interest, but the number has important implications for understanding the optimal allocation of campaign resources, whether states and voter groups receive their fair share of attention from prospective presidents, and how formal "rational choice" models of voter behavior might be able to explain why people vote at all. We show how the probability of a decisive vote can be estimated empirically from state-level forecasts of the presidential election and illustrate with the example of 1992. Based on generalizations of standard political science forecasting models, we estimate the (prospective) probability of a single vote being decisive as about 1 in 10 million for close national elections such as 1992, varying by about a factor of 10 among states. Our results support the argument that subjective probabilities of many types are best obtained through empirically based statistical prediction models rather than solely through mathematical reasoning. We discuss the implications of our findings for the types of decision analyses used in public choice studies.
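The decomposition described above can be sketched numerically. Assuming a normal forecast distribution for a state's two-party vote share, and treating the chance that the state is pivotal in the Electoral College as a given input, the probability that one additional vote is decisive is approximately P(state pivotal) times P(state exactly tied). All numbers below (turnout, forecast mean and standard deviation, pivotality) are hypothetical.

```python
from math import exp, pi, sqrt

def prob_vote_decisive(p_state_pivotal, forecast_mean, forecast_sd, turnout):
    """Approximate P(your vote decides the election):
    P(state pivotal in the Electoral College) * P(state vote exactly tied).
    A tie corresponds to a two-party vote share of 0.5; with a normal forecast
    for that share, P(tie) is roughly the density at 0.5 times one vote's
    width, i.e. divided by turnout."""
    density_at_half = exp(-0.5 * ((0.5 - forecast_mean) / forecast_sd) ** 2) / (
        forecast_sd * sqrt(2 * pi)
    )
    p_tie = density_at_half / turnout
    return p_state_pivotal * p_tie

# Hypothetical close state: forecast 51% +/- 3%, 2 million voters,
# 5% chance the state decides the Electoral College.
print(prob_vote_decisive(p_state_pivotal=0.05,
                         forecast_mean=0.51,
                         forecast_sd=0.03,
                         turnout=2_000_000))
```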
Article
Full-text available
Presidential election outcomes are well explained by just two objectively measured fundamental determinants: (1) weighted-average growth of per capita real personal disposable income over the term, and (2) cumulative US military fatalities owing to unprovoked, hostile deployments of American armed forces in foreign conflicts. The US economy weakened at the beginning of 2008, and average per capita real income growth will probably be only around 0.75% at Election Day. Moreover, cumulative US military fatalities in Iraq will reach 4,300 or more. Given those fundamental conditions, the Bread and Peace model predicts a Republican two-party vote share centered on 48.2%.
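A schematic of the two-variable structure described above, with made-up coefficients purely to show the form of such a model; the actual Bread and Peace estimates are not reproduced here.

```python
def bread_and_peace_sketch(income_growth_pct, cumulative_fatalities_thousands,
                           intercept=46.0, income_coef=3.5,
                           fatalities_coef=-0.05):
    """Illustrative linear form only: incumbent-party two-party vote share as a
    function of weighted-average real income growth (percent) and cumulative
    military fatalities (thousands). All coefficients here are hypothetical."""
    return (intercept
            + income_coef * income_growth_pct
            + fatalities_coef * cumulative_fatalities_thousands)

# Hypothetical inputs loosely in the range discussed above:
print(bread_and_peace_sketch(income_growth_pct=0.75,
                             cumulative_fatalities_thousands=4.3))
```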
Article
A wide range of potentially useful data are available for election forecasting: the results of previous elections, a multitude of preelection polls, and predictors such as measures of national and statewide economic performance. How accurate are different forecasts? We estimate predictive uncertainty via analysis of data collected from past elections (actual outcomes, preelection polls, and model estimates). With these estimated uncertainties, we use Bayesian inference to integrate the various sources of data to form posterior distributions for the state and national two-party Democratic vote shares for the 2008 election. Our key idea is to separately forecast the national popular vote shares and the relative positions of the states. More generally, such an approach could be applied to study changes in public opinion and other phenomena with wide national swings and fairly stable spatial distributions relative to the national average.
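The key idea of separately modeling the national swing and the relatively stable state positions can be sketched as a simple simulation; the offsets, uncertainties, and state names below are placeholders, not estimates from the paper.

```python
import random

# Hypothetical state offsets: each state's Democratic two-party share relative
# to the national share (held fixed across simulations).
state_offsets = {"State A": 0.06, "State B": 0.00, "State C": -0.04}

def simulate_state_shares(national_mean=0.52, national_sd=0.02,
                          state_sd=0.015, n_sims=10_000, seed=1):
    """Draw a national vote share, then add each state's offset plus
    state-level noise; return each state's simulated win probability."""
    rng = random.Random(seed)
    wins = {s: 0 for s in state_offsets}
    for _ in range(n_sims):
        national = rng.gauss(national_mean, national_sd)
        for state, offset in state_offsets.items():
            share = national + offset + rng.gauss(0, state_sd)
            if share > 0.5:
                wins[state] += 1
    return {s: w / n_sims for s, w in wins.items()}

print(simulate_state_shares())
```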
Article
The t distribution provides a useful extension of the normal for statistical modeling of data sets involving errors with longer-than-normal tails. An analytical strategy based on maximum likelihood for a general model with multivariate t errors is suggested and applied to a variety of problems, including linear and nonlinear regression, robust estimation of the mean and covariance matrix with missing data, unbalanced multivariate repeated-measures data, multivariate modeling of pedigree data, and multivariate nonlinear regression. The degrees of freedom parameter of the t distribution provides a convenient dimension for achieving robust statistical inference, with moderate increases in computational complexity for many models. Estimation of precision from asymptotic theory and the bootstrap is discussed, and graphical methods for checking the appropriateness of the t distribution are presented.
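As a small illustration of the robustness idea, one can fit a location parameter with t-distributed errors and compare it to a normal fit on data containing an outlier; this uses scipy's generic maximum-likelihood fitting rather than the specific algorithms discussed in the article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Mostly well-behaved data plus one gross outlier.
data = np.concatenate([rng.normal(10.0, 1.0, size=50), [60.0]])

# Normal fit: the outlier drags the estimated mean upward.
mu_norm, sigma_norm = stats.norm.fit(data)

# t fit: the degrees-of-freedom parameter lets the long tail absorb the
# outlier, so the location estimate stays close to the bulk of the data.
df_t, loc_t, scale_t = stats.t.fit(data)

print(f"normal location: {mu_norm:.2f}")
print(f"t location:      {loc_t:.2f}  (fitted df = {df_t:.2f})")
```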
Article
This symposium presents 10 articles forecasting the 2008 U.S. national elections. The core of this collection is the seven presidential-vote forecasting models that were presented in this space before the 2004 election. Added to that group are one additional presidential forecasting model, one state-level elections forecasting model, and one model forecasting the relationship between congressional votes and seats won by the parties. Some of the articles that are focused on the presidential race have also taken the opportunity to forecast the congressional elections as well.
Article
The efficacy of a vote by an individual citizen in an election may be defined roughly as the expected effect it has on the outcome of the election. This is a measure of how much that voter contributes to the decision making of the social system to which he belongs. A number of political scientists have rightly claimed that the efficacy of a vote is small when the electorate is large. We argue here that they have somewhat misdefined an important parameter of the problem, and we strengthen their work by means of a Bayesian analysis. Many people derive personal utility from the act of voting quite apart from the efficacy of the vote as such. This analysis depends primarily on the voter's estimate of the probability that his vote will either produce or resolve a tie. The asymptotic form for this probability is quite different from one that has appeared in the literature. The probability is tabulated on the assumption of a beta prior, and the problem of choosing the parameters in this distribution is analyzed.
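A sketch of the kind of calculation described above, assuming a Beta(a, b) prior on the probability p that a random other voter supports one candidate: the probability that 2n other voters split exactly n to n is C(2n, n) * B(a+n, b+n) / B(a, b), computed here in log space. The prior parameters and electorate size are illustrative.

```python
from math import exp, lgamma

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def tie_probability(n, a, b):
    """P(exact n-n tie among 2n other voters) when each votes for candidate 1
    with probability p and p has a Beta(a, b) prior:
    C(2n, n) * B(a + n, b + n) / B(a, b)."""
    log_choose = lgamma(2 * n + 1) - 2 * lgamma(n + 1)
    return exp(log_choose + log_beta(a + n, b + n) - log_beta(a, b))

# Illustrative numbers: 100,000 other voters, priors centered at 0.5 with
# increasing confidence that the race is close.
for a, b in [(2, 2), (50, 50), (5000, 5000)]:
    print(f"Beta({a},{b}) prior: P(tie) = {tie_probability(50_000, a, b):.2e}")
```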
Article
Scholars have recently reworked the traditional calculus of voting model by adding a term for benefits to others. Although the probability that a single vote affects the outcome of an election is quite small, the number of people who enjoy the benefit when the preferred alternative wins is large. As a result, people who care about benefits to others and who think one of the alternatives makes others better off are more likely to vote. I test the altruism theory of voting in the laboratory by using allocations in a dictator game to reveal the degree to which each subject is concerned about the well-being of others. The main findings suggest that variation in concern for the well-being of others in conjunction with strength of party identification is a significant factor in individual turnout decisions in real world elections. Partisan altruists are much more likely to vote than their nonpartisan or egoist peers.
Article
For voters with "social" preferences, the expected utility of voting is approximately independent of the size of the electorate, suggesting that rational voter turnouts can be substantial even in large elections. Less important elections are predicted to have lower turnout, but a feedback mechanism keeps turnout at a reasonable level under a wide range of conditions. The main contributions of this paper are: (1) to show how, for an individual with both selfish and social preferences, the social preferences will dominate and make it rational for a typical person to vote even in large elections; (2) to show that rational socially-motivated voting has a feedback mechanism that stabilizes turnout at reasonable levels (e.g., 50% of the electorate); (3) to link the rational social-utility model of voter turnout with survey findings on socially-motivated vote choice.
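The core arithmetic can be sketched directly: if the probability of being decisive scales like c/N while the social benefit of the preferred outcome scales like a per-person benefit times N people, then the expected social benefit of voting is roughly constant regardless of electorate size. All constants below are illustrative.

```python
def expected_benefit_of_voting(n_voters, pivotal_const=10.0,
                               per_person_benefit=100.0, discount=0.1):
    """Illustrative calculation: expected benefit = P(decisive) * total benefit,
    with P(decisive) ~ pivotal_const / n_voters and total benefit equal to a
    discounted per-person benefit summed over n_voters people."""
    p_decisive = pivotal_const / n_voters
    total_benefit = discount * per_person_benefit * n_voters
    return p_decisive * total_benefit

for n in [10_000, 1_000_000, 100_000_000]:
    print(f"electorate {n:>11,}: expected benefit = "
          f"{expected_benefit_of_voting(n):.1f}")
```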
Article
Some economic theories of voting suggest that competition leads to close elections, and that election closeness is a factor for bringing voters to the polls. How often in fact are civic elections decided by one vote? One of every 89,000 votes cast in U.S. Congressional elections, and one of 15,000 in state legislator elections, "mattered" in the sense that they were cast for a candidate that tied or won by one vote. We find an inverse relationship between election size and the frequency of one vote margins. Recounts, and other margin-specific election procedures, are determinants of the pivotal vote frequency.
Article
This paper analyzes how US presidential candidates should allocate resources across states to maximize the probability of winning the election, by developing and estimating a probabilistic-voting model of political competition under the Electoral College system. Actual campaigns act in close agreement with the model. There is a 0.9 correlation between equilibrium and actual presidential campaign visits across states, both in 2000 and 2004. The paper shows how presidential candidate attention is affected by the states' number of electoral votes, forecasted state-election outcomes, and forecast uncertainty. It also analyzes the effects of a direct national popular vote for president.
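A simplified heuristic in the spirit of the model described above (not the paper's actual estimated model): allocate candidate visits in proportion to each state's electoral votes weighted by how likely the state vote is to be close, here proxied by a normal forecast density at a 50% vote share. The states, forecasts, and visit budget are hypothetical.

```python
from math import exp, pi, sqrt

# Hypothetical states: (electoral votes, forecast mean share, forecast sd).
states = {
    "State A": (29, 0.50, 0.03),
    "State B": (15, 0.47, 0.03),
    "State C": (9,  0.56, 0.03),
}

def normal_density(x, mean, sd):
    return exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * sqrt(2 * pi))

def allocate_visits(states, total_visits=100):
    """Weight each state by electoral votes * closeness (density at a 50%
    share), then split the visit budget proportionally."""
    weights = {s: ev * normal_density(0.5, m, sd)
               for s, (ev, m, sd) in states.items()}
    total = sum(weights.values())
    return {s: round(total_visits * w / total, 1) for s, w in weights.items()}

print(allocate_visits(states))
```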
McDonald, M. "United States Elections Project." 2009. Accessed 20 Feb 2009. http://elections.gmu.edu/
Mulligan, C. B., and C. G. Hunter. "The Empirical Frequency of a Pivotal Vote." Public Choice, 116, 2003, 31–54.
Gelman, A., and J. Sides. "Election 2008: What Really Happened? And What Does It Mean?" Technical report, Department of Statistics, Columbia University, 2009.
Silver, N. "Frequently Asked Questions."