Project

PollyVote

Goal: The PollyVote project uses the high-profile application of predicting U.S. presidential election outcomes to demonstrate advances in forecasting research. When the PollyVote was first launched in 2004, the original goal was to demonstrate the benefits of combining forecasts. Since then, the PollyVote team has expanded its focus by analyzing the value of new forecasting methods such as expectation surveys and index models.

For the 2016 election, the PollyVote uses natural language generation technology to produce automated news based on the forecasting data. The goal of this project is to study how users perceive automated news on a high-involvement topic that deals with uncertain forecasts rather than established facts.

Updates: 8
Recommendations: 1
Followers: 42
Reads: 871

Project log

J. Scott Armstrong
added 3 research items
Rosenstone develops a causal model to forecast political voting. The model seems reasonable; for example, it includes information about party, key issues, the economy, war, incumbency, region, and trends over time. Standard econometric methods are then used to determine how much weight should be given to each factor. The conditions are then forecasted for each of the 50 states, and the weights are applied to give state-by-state forecasts. Aggregation across states provides forecasts of both the popular and electoral votes for presidential elections.
J. Scott Armstrong
added 3 research items
Prior research found that people’s assessments of relative competence predicted the outcome of Senate and Congressional races. We hypothesized that snap judgments of "facial competence" would provide useful forecasts of the popular vote in presidential primaries before the candidates become well known to the voters. We obtained facial competence ratings of 11 potential candidates for the Democratic Party nomination and of 13 for the Republican Party nomination for the 2008 U.S. Presidential election. To ensure that raters did not recognize the candidates, we relied heavily on young subjects from Australia and New Zealand. We obtained between 139 and 348 usable ratings per candidate between May and August 2007. The top-rated candidates were Clinton and Obama for the Democrats and McCain, Hunter, and Hagel for the Republicans; Giuliani was 9th and Thompson was 10th. At the time, the leading candidates in the Democratic polls were Clinton at 38% and Obama at 20%, while Giuliani was first among the Republicans at 28% followed by Thompson at 22%. McCain trailed at 15%. Voters had already linked Hillary Clinton’s competent appearance with her name, so her high standing in the polls met our expectations. As voters learned the appearance of the other candidates, poll rankings moved towards facial competence rankings. At the time that Obama clinched the nomination, Clinton was ahead in the popular vote in the primaries and McCain had secured the Republican nomination with a popular vote that was twice that of Romney, the next highest vote-getter.
Prior research offers a mixed view of the value of expert surveys for long-term election forecasts. On the positive side, experts have more information about the candidates and issues than voters do. On the negative side, experts all have access to the same information. Based on prior literature and on our experiences with the 2004 presidential election and the 2008 campaign so far, we have reason to believe that a simple expert survey (the Nominal Group Technique) is preferable to Delphi. Our survey of experts in American politics was quite accurate in the 2004 election. Following the same procedure, we have assembled a new panel of experts to forecast the 2008 presidential election. Here we report the results of the first survey, and compare our experts’ forecasts with predictions by the Iowa Electronic Market.
The outcome of the 2004 presidential election was forecast by applying the combination principle, a procedure which in other contexts has been shown to reduce forecast error. This forecasting technique involved averaging within and across four categories of methods (polls, Iowa Electronic Market quotes, quantitative models, and a Delphi survey of experts on American politics) to compute a combined forecast of the incumbent's share of the two-party vote. We called the resulting average the Pollyvote, because it was derived from many methods and applied to a political phenomenon. When tested across the 163 days preceding the election, the mean absolute error of the Pollyvote predictions was only three-fourths as large as the error of the next most accurate method. Gains in error reduction were achieved for all forecast horizons. On the morning of the election, the Pollyvote predicted a Bush victory with 51.5 percent of the two-party vote, which came within 0.2 percent of the outcome (51.3%).
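For readers who want to see the mechanics, here is a minimal Python sketch of the combination principle described above: forecasts are first averaged within each component method and then across the component averages. The component names and numbers are made up for illustration and are not actual 2004 PollyVote data.

```python
# Minimal sketch of the PollyVote combination principle (hypothetical numbers):
# average forecasts within each component method, then average the component
# means to obtain the combined forecast.
from statistics import mean

# Forecasts of the incumbent's two-party vote share (%), grouped by component.
components = {
    "polls":             [51.0, 52.3, 50.8],
    "prediction_market": [51.6],
    "models":            [53.1, 49.9, 52.4],
    "expert_survey":     [51.2, 50.5],
}

# Step 1: combine within components.
within = {name: mean(forecasts) for name, forecasts in components.items()}

# Step 2: combine across components with equal weights.
pollyvote = mean(within.values())

print(within)
print(f"Combined (PollyVote) forecast: {pollyvote:.1f}% of the two-party vote")
```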
Andreas Graefe
added 3 research items
Problem: Multiple regression analysis (MRA) is commonly used to develop forecasting models that inform policy and decision making, but the technique does not appear to have been validated for that purpose. Methods: The predictive validity of published least squares MRA models is tested against naive benchmarks, alternative methods that are either plausible or commonly used, and evidence-based forecasting methods. The out-of-sample errors of forecasts from the MRA models are compared with the errors of forecasts from models developed from the same data on the basis of cumulative relative absolute error (CumRAE), and the unscaled mean bounded relative absolute error (UMBRAE). Findings: Results from tests using ten models for diverse problems found that while the MRA models performed well against most of the alternatives tested for most problems, out-of-sample (n-1) forecasts from models estimated using least absolute deviation were mostly more accurate. Originality: This paper presents the first stage of a project to comprehensively test the predictive validity of MRA relative to models derived using diverse alternative methods. Usefulness: The findings of this research will be useful whether they turn out to support or reject the use of MRA models for important policy and decision-making tasks. Validation of MRA for forecasting would provide a stronger argument for the use of the method than is currently available, while the opposite finding would identify opportunities to improve forecast accuracy and hence decisions.
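The two relative error measures named in the abstract can be computed as in the sketch below, which uses their standard published definitions (CumRAE as the ratio of cumulated absolute errors relative to a benchmark; UMBRAE as derived from the bounded relative absolute error of Chen, Twycross, and Garibaldi, 2017). The code and the example errors are illustrative assumptions, not material from the paper.

```python
# Hedged sketch of the two relative error measures named in the abstract,
# using their standard definitions (not code from the paper itself).
import numpy as np

def cum_rae(errors, benchmark_errors):
    """Cumulative relative absolute error: cumulated absolute error of the
    evaluated method divided by that of the benchmark (values < 1 favor the method)."""
    e = np.abs(np.asarray(errors, dtype=float))
    e_star = np.abs(np.asarray(benchmark_errors, dtype=float))
    return e.sum() / e_star.sum()

def umbrae(errors, benchmark_errors):
    """Unscaled mean bounded relative absolute error (Chen et al., 2017):
    BRAE_t = |e_t| / (|e_t| + |e*_t|); UMBRAE = mean(BRAE) / (1 - mean(BRAE))."""
    e = np.abs(np.asarray(errors, dtype=float))
    e_star = np.abs(np.asarray(benchmark_errors, dtype=float))
    mbrae = (e / (e + e_star)).mean()
    return mbrae / (1.0 - mbrae)

# Hypothetical out-of-sample errors for an MRA model and a naive benchmark.
model_err = [1.2, 0.8, 2.1, 1.5]
naive_err = [1.9, 1.1, 2.4, 2.2]
print(cum_rae(model_err, naive_err), umbrae(model_err, naive_err))
```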
A Recap of the 2016 Election Forecasts - Volume 50 Issue 2 - James E. Campbell, Helmut Norpoth, Alan I. Abramowitz, Michael S. Lewis-Beck, Charles Tien, James E. Campbell, Robert S. Erikson, Christopher Wlezien, Brad Lockerbie, Thomas M. Holbrook, Bruno Jerôme, Véronique Jerôme-Speziari, Andreas Graefe, J. Scott Armstrong, Randall J. Jones, Alfred G. Cuzán
Problem: Do conservative econometric models that comply with the Golden Rule of Forecasting provide more accurate forecasts? Methods: To test the effects on forecast accuracy, we applied three evidence-based guidelines to 19 published regression models used for forecasting 154 elections in Australia, Canada, Italy, Japan, Netherlands, Portugal, Spain, Turkey, U.K., and the U.S. The guidelines direct forecasters using causal models to be conservative to account for uncertainty by (I) modifying effect estimates to reflect uncertainty, either by damping coefficients towards no effect or by equalizing coefficients, (II) combining forecasts from diverse models, and (III) incorporating more knowledge by including more variables with known important effects. Findings: Modifying the econometric models to make them more conservative reduced forecast errors compared to forecasts from the original models: (I) Damping coefficients by 10% reduced error by 2% on average, although further damping generally harmed accuracy; equalizing coefficients consistently reduced errors, with average error reductions between 2% and 8% depending on the level of equalizing. Averaging the original regression model forecast with an equal-weights model forecast reduced error by 7%. (II) Combining forecasts from two Australian models and from eight U.S. models reduced error by 14% and 36%, respectively. (III) Using more knowledge by including all six unique variables from the Australian models and all 24 unique variables from the U.S. models in equal-weight “knowledge models” reduced error by 10% and 43%, respectively. Originality: This paper provides the first test of applying guidelines for conservative forecasting to established election forecasting models. Usefulness: Election forecasters can substantially improve the accuracy of forecasts from econometric models by following simple guidelines for conservative forecasting. Decision-makers can make better decisions when they are provided with models that are more realistic and forecasts that are more accurate.
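The following sketch illustrates guidelines (I) and (II) on a hypothetical three-variable regression model. The 10% damping factor follows the abstract, while the particular way of equalizing coefficients (equal weights on standardized predictors, keeping the original signs) is one plausible reading rather than the paper's exact procedure; coefficients, predictor values, and standard deviations are made up.

```python
# Hedged sketch of making a regression-based vote-share model more conservative
# (hypothetical coefficients and predictor values, not one of the 19 published models).
import numpy as np

intercept = 45.0
coefs = np.array([1.8, -0.9, 2.5])   # estimated effects of three predictors
x_new = np.array([1.2, -0.4, 0.8])   # predictor values for the election to forecast
sd = np.array([1.0, 0.7, 1.5])       # in-sample standard deviations of the predictors

# (I) Damping: shrink effect estimates 10% toward no effect to account for uncertainty.
damped_forecast = intercept + (0.9 * coefs) @ x_new

# (I) Equalizing: give every standardized predictor the same weight, keeping the
# sign and the average magnitude of the original standardized coefficients.
equal_coefs = np.sign(coefs) * np.mean(np.abs(coefs * sd)) / sd
equal_forecast = intercept + equal_coefs @ x_new

# (II) Combining: average the original forecast with the equal-weights forecast.
original_forecast = intercept + coefs @ x_new
combined = np.mean([original_forecast, equal_forecast])

print(damped_forecast, equal_forecast, combined)
```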
Andreas Graefe
added a research item
The present study reviews the accuracy of four methods (polls, prediction markets, expert judgment, and quantitative models) for forecasting the two German federal elections in 2013 and 2017. On average across both elections, polls and prediction markets were most accurate, while experts and quantitative models were least accurate. The accuracy of individual forecasts did not correlate across elections. That is, methods that were most accurate in 2013 did not perform particularly well in 2017. A combined forecast, calculated by averaging forecasts within and across methods, was more accurate than two out of three component forecasts. The results conform to prior research on US presidential elections in showing that combining is effective in generating accurate forecasts and avoiding large errors.
Andreas Graefe
added a research item
This study analyzes the relative accuracy of experts, polls, and the so-called ‘fundamentals’ in predicting the popular vote in the four U.S. presidential elections from 2004 to 2016. Although the majority (62%) of 452 expert forecasts correctly predicted the directional error of polls, the error of the typical expert’s vote share forecast was 7% larger than that of a simple polling average from the same day. The results further suggest that experts follow the polls and do not sufficiently harness information incorporated in the fundamentals. Combining expert forecasts and polls with a fundamentals-based reference class forecast reduced the error of experts and polls by 24% and 19%, respectively. The findings demonstrate the benefits of combining forecasts and the effectiveness of taking the outside view for debiasing expert judgment.
Andreas Graefe
added an update
We are looking for an expert in web development / data visualization. If you have those skills, please get in touch. If not, please share this within your network.
 
Andreas Graefe
added an update
I gave an interview on the PollyVote and German election forecasting in the magazine "Planung & Analyse" (in German): http://www.horizont.net/planung-analyse/nachrichten/Online-Special-Wahlforschung-Wahlforschung-ist-mehr-als-die-Sonntagsfrage--159879
 
Andreas Graefe
added a research item
The PollyVote uses evidence-based techniques for forecasting the popular vote in presidential elections. The forecasts are derived by averaging existing forecasts generated by six different forecasting methods. In 2016, the PollyVote correctly predicted that Hillary Clinton would win the popular vote. The 1.9 percentage-point error across the last 100 days before the election was lower than the average error for the six component forecasts from which it was calculated (2.3 percentage points). The gains in forecast accuracy from combining are best demonstrated by comparing the error of PollyVote forecasts with the average error of the component methods across the seven elections from 1992 to 2016. The average errors for the last 100 days prior to the election were: public opinion polls (2.6 percentage points), econometric models (2.4), betting markets (1.8), and citizens’ expectations (1.2); for expert opinions (1.6) and index models (1.8), data were only available since 2004 and 2008, respectively. The average error for PollyVote forecasts was 1.1 percentage points, lower than the error for even the most accurate component method.
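A rough sketch of the kind of accuracy comparison reported above: compute the mean absolute error of each component and of their equal-weights average over the last 100 days before the election. The daily forecasts below are simulated, with noise levels loosely mirroring the component errors cited in the abstract; they are not PollyVote data.

```python
# Hedged sketch of the accuracy comparison described above, with simulated
# daily forecasts (the real PollyVote data are published at pollyvote.com).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
days = 100
actual = 51.1  # hypothetical two-party vote share of the incumbent's party

# Simulated daily vote-share forecasts of six component methods over the last 100 days.
components = pd.DataFrame({
    "polls":        actual + rng.normal(0, 2.6, days),
    "models":       actual + rng.normal(0, 2.4, days),
    "markets":      actual + rng.normal(0, 1.8, days),
    "expectations": actual + rng.normal(0, 1.2, days),
    "experts":      actual + rng.normal(0, 1.6, days),
    "index_models": actual + rng.normal(0, 1.8, days),
})

# Combined (PollyVote-style) forecast: equal-weights average across components each day.
combined = components.mean(axis=1)

mae_components = (components - actual).abs().mean()  # MAE per component
mae_combined = (combined - actual).abs().mean()      # MAE of the combined forecast
print(mae_components.round(2))
print(f"Combined MAE: {mae_combined:.2f} percentage points")
```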
Andreas Graefe
added an update
We are happy and proud to announce that we received funding from the Google Digital News Initiative to work on computational campaign coverage based on the PollyVote. The goal of the project is to help newsrooms improve their election forecasting. We will develop a fully automated and adaptable prediction platform that offers evidence-based election forecasts for potentially any election worldwide. The forecasts will be accompanied by automatically generated narratives and visualizations in multiple languages and for different channels and target audiences, which will put the forecasts in context and communicate their underlying uncertainty in an objective manner.
 
Andreas Graefe
added 14 research items
In the PollyVote, we evaluated the combination principle to forecast the five U.S. presidential elections between 1992 and 2008. We combined forecasts from three or four different component methods: trial heat polls, the Iowa Electronic Markets (IEM), quantitative models and, in the 2004 and 2008 contests, periodic surveys of experts on American politics. The forecasts were combined within as well as across components. On average, combining within components reduced forecast error – and increased predictive accuracy – by 17% to 40%. Combining across components led to additional error reductions ranging from 7% to 68%, depending on the forecast horizon. In addition, across all five elections, the PollyVote predicted the correct election winner on all but 4 out of 957 days. The gains from applying the combination principle to election forecasting were much larger than those obtained in other fields.
At PoliticalForecasting.com, better known as the Pollyvote, the authors combine forecasts from four sources: election polls, a panel of American political experts, the Iowa Electronic Market, and quantitative models. The day before the election, Polly predicted that the Republican ticket's share of the two-party vote would be 47.0%. The outcome was close at 46.6% (as of the end of November). In his Hot New Research column in this issue, Paul Goodwin discusses the benefits of combining forecasts. The success of the Pollyvote should further enhance interest in this approach to forecasting. Copyright International Institute of Forecasters, 2009
The state of election forecasting has progressed to the point where it is possible to develop highly accurate forecasts for major elections. However, one area that has received little attention is the use of forecasting as an aid to those involved with political campaigns. In the run-up to the presidential primaries, we use the bio-index model to test the chances of potential nominees to defeat President Obama in the 2012 U.S. presidential election. This model uses the index method to incorporate 58 biographical variables (e.g., age, marital status, height, appearance) for making a conditional forecast of the incumbent’s vote-share, depending on who is the opposing candidate. These variables were selected based on received wisdom and findings from prior research. For example, several studies found candidates’ perceived attractiveness or facial competence to be related to their chances of winning an election. The model is particularly valuable for making long-term forecasts of who will win an election and missed the correct winner only twice for the 29 elections from 1896 to 2008. Thus, the model can help candidates to decide whether they should run for office and can advise political interests in deciding whom to support in the primaries and caucuses. The forecasts from the bio-index model are compared to forecasts from polls and prediction markets.
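The index method behind the bio-index model can be illustrated as follows: each biographical variable is coded 1 when it favors a candidate and 0 otherwise, and the unit-weighted scores are summed and compared. The variables and codings in the sketch are hypothetical, and the final step of mapping the score difference to a vote share is only indicated in a comment.

```python
# Minimal sketch of the index method behind the bio-index model: code each
# biographical variable as 1 if it favors the candidate, 0 otherwise, and sum
# with equal (unit) weights. Variables and codings here are hypothetical.
incumbent = {"married": 1, "military_service": 0, "taller_than_opponent": 1,
             "rated_attractive": 1, "held_statewide_office": 1}
challenger = {"married": 1, "military_service": 1, "taller_than_opponent": 0,
              "rated_attractive": 0, "held_statewide_office": 0}

def index_score(candidate):
    """Unit-weighted index: count the variables coded in the candidate's favor."""
    return sum(candidate.values())

diff = index_score(incumbent) - index_score(challenger)
print(f"Index score difference (incumbent - challenger): {diff}")
# In the full model, this difference would be mapped to a two-party vote-share
# forecast, e.g. via a simple regression estimated on past elections.
```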
Andreas Graefe
added 2 research items
The usual procedure for developing linear models to predict any kind of target variable is to identify a subset of most important predictors and to estimate weights that provide the best possible solution for a given sample. The resulting “optimally” weighted linear composite is then used when predicting new data. This approach is useful in situations with large and reliable datasets and few predictor variables. However, a large body of analytical and empirical evidence since the 1970s shows that the weighting of variables is of little, if any, value in situations with small and noisy datasets and a large number of predictor variables. In such situations, including all relevant variables is more important than their weighting. These findings have yet to impact many fields. This study uses data from nine established U.S. election-forecasting models whose forecasts are regularly published in academic journals to demonstrate (a) the value of weighting all predictors equally and (b) including all relevant variables in the model. Across the ten elections from 1976 to 2012, equally weighted predictors yielded a lower forecast error than regression weights for six of the nine models. On average, the error of the equal weights models was 5% lower than the error of the original regression models. An equal-weights model that uses all 27 variables that are included in the nine models missed the final results of the ten elections on average by only 1.3 percentage points. This error is 48% lower than the error of the typical, and 29% lower than the error of the most accurate, individual model.
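One way to reproduce this kind of comparison is a leave-one-out test: estimate regression weights on all elections but one, forecast the held-out election, and compare against an equal-weights (unit-weighted) composite of the standardized predictors. The sketch below runs on synthetic data purely to show the mechanics; it does not use the nine published models or their variables.

```python
# Hedged sketch comparing regression weights with equal weights in a
# leave-one-out test, on synthetic data (not the nine published models).
import numpy as np

rng = np.random.default_rng(0)
n_elections, n_predictors = 10, 6
X = rng.normal(size=(n_elections, n_predictors))
y = 50 + X @ rng.normal(size=n_predictors) + rng.normal(scale=2.0, size=n_elections)

def forecast_errors(weight_fn):
    """Mean absolute leave-one-out forecast error for a given weighting scheme."""
    errors = []
    for i in range(n_elections):
        train = np.delete(np.arange(n_elections), i)
        w, b = weight_fn(X[train], y[train])
        errors.append(abs(y[i] - (b + X[i] @ w)))
    return np.mean(errors)

def ols_weights(X_tr, y_tr):
    # "Optimal" least-squares regression weights estimated on the training sample.
    A = np.column_stack([np.ones(len(y_tr)), X_tr])
    beta = np.linalg.lstsq(A, y_tr, rcond=None)[0]
    return beta[1:], beta[0]

def equal_weights(X_tr, y_tr):
    # Unit-weighted composite: standardize predictors, orient each by the sign of
    # its correlation with the target, sum with equal weights, then rescale the
    # composite to the target with a one-variable regression.
    mu, sd = X_tr.mean(axis=0), X_tr.std(axis=0)
    sign = np.sign([np.corrcoef(X_tr[:, j], y_tr)[0, 1] for j in range(X_tr.shape[1])])
    z = ((X_tr - mu) / sd) @ sign
    slope, intercept = np.polyfit(z, y_tr, 1)
    w = slope * sign / sd
    b = intercept - slope * (mu / sd) @ sign
    return w, b

print("MAE regression weights:", forecast_errors(ols_weights))
print("MAE equal weights:     ", forecast_errors(equal_weights))
```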
We review the performance of the PollyVote, which combined forecasts from polls, prediction markets, experts’ judgment, political economy models, and index models to forecast the two-party popular vote in the 2012 U.S. Presidential Election. Throughout the election year the PollyVote provided highly accurate forecasts, outperforming each of its component methods, as well as the forecasts from FiveThirtyEight.com. Gains in accuracy were particularly large early in the campaign, when uncertainty about the election outcome is typically high. The results confirm prior research showing that combining is one of the most effective approaches to generating accurate forecasts.
Andreas Graefe
added 4 research items
The Big-Issue Model predicts election outcomes based on voters’ perceptions of candidates’ ability to handle the most important issue. The model provided accurate forecasts of the 2012 U.S. presidential election. The results demonstrate the usefulness of the model in situations where one issue clearly dominates the campaign, such as the state of the economy in the 2012 election. In addition, the model is particularly valuable if economic fundamentals disagree, a situation in which forecasts from traditional political economy models suggest high uncertainty. The model provides immediate feedback to political candidates and parties on the success of their campaign and can advise them on which issues to assign the highest priority.
The present study shows that the predictive performance of Ensemble Bayesian Model Averaging (EBMA) strongly depends on the conditions of the forecasting problem. EBMA is of limited value when uncertainty is high, a situation that is common for social science problems. In such situations, one should avoid methods that bear the risk of overfitting. Instead, one should acknowledge the uncertainty in the environment and use conservative methods that are robust when predicting new data. For combining forecasts, consider calculating simple (unweighted) averages of the component forecasts. A vast prior literature finds that simple averages yield forecasts that are often at least as accurate as those from more complex combining methods. A reanalysis and extension of a prior study on US presidential election forecasting, which had the purpose to demonstrate the usefulness of EBMA, shows that the simple average reduced the error of the combined EBMA forecasts by 25%. Simple averages produce accurate forecasts, are easy to describe, easy to understand, and easy to use. Researchers who develop new methods for combining forecasts need to compare the accuracy of their method to this widely established benchmark method. Forecasting practitioners should favor simple averages over more complex methods unless there is strong evidence in support of differential weights.
Andreas Graefe
added 2 research items
The Golden Rule of Forecasting counsels forecasters to be conservative when making forecasts. We tested the value of three of the four Golden Rule guidelines that apply to causal models: modify effect estimates to reflect uncertainty; use all important variables; and combine forecasts from diverse models. These guidelines were tested using out-of-sample forecasts from eight US presidential election forecasting models across the 15 elections from 1956 to 2012. Moderating effect sizes via equalizing regression coefficients reduced the error relative to the original model forecasts by 5%. Including all 25 variables from the eight models in a single equal-weights index model reduced error by 46%, and combining forecasts from the eight models reduced error by 36%.
Simple surveys that ask people who they expect to win are among the most accurate methods for forecasting U.S. presidential elections. The majority of respondents correctly predicted the election winner in 193 (89%) of 217 surveys conducted from 1932 to 2012. Across the last 100 days prior to the seven elections from 1988 to 2012, vote expectation surveys provided more accurate forecasts of election winners and vote shares than four established methods (vote intention polls, prediction markets, econometric models, and expert judgment). Gains in accuracy were particularly large compared to polls. On average, the error of expectation-based vote-share forecasts was 51% lower than the error of polls published the same day. Compared to prediction markets, vote expectation forecasts reduced the error on average by 6%. Vote expectation surveys are inexpensive, easy to conduct, and the results are easy to understand. They provide accurate and stable forecasts and thus make it difficult to frame elections as horse races. Vote expectation surveys should be more strongly utilized in the coverage of election campaigns.
Andreas Graefe
added 2 research items
In averaging forecasts within and across four component methods (i.e., polls, prediction markets, expert judgment, and quantitative models), the combined PollyVote provided highly accurate predictions for the US presidential elections from 1992 to 2012. This research note shows that the PollyVote would have also outperformed vote expectation surveys, which prior research identified as the most accurate individual forecasting method during that time period. In addition, adding vote expectations to the PollyVote would have further increased the accuracy of the combined forecast. Across the last 90 days prior to the six elections, a five-component PollyVote (i.e., including vote expectations) would have yielded a mean absolute error of 1.08 percentage points, which is 7% lower than the corresponding error of the original four-component PollyVote. This study thus provides empirical evidence in support of two major findings from forecasting research. First, combining forecasts provides highly accurate predictions, which are difficult to beat by even the most accurate individual forecasting method available. Second, the accuracy of a combined forecast can be improved by adding component forecasts that rely on a different method and different data than the forecasts already included in the combination.
Andreas Graefe
added 2 research items
We compare the accuracy of simple unweighted averages and Ensemble Bayesian Model Averaging (EBMA) to combining forecasts in the social sciences. A review of prior studies from the domain of economic forecasting finds that the simple average was more accurate than EBMA in four out of five studies. On average, the error of EBMA was 5% higher than the error of the simple average. A reanalysis and extension of a published study provides further evidence for US presidential election forecasting. The error of EBMA was 33% higher than the corresponding error of the simple average. Simple averages are easy to describe, easy to understand and thus easy to use. In addition, simple averages provide accurate forecasts in many settings. Researchers who develop new approaches to combining forecasts need to compare the accuracy of their method to this widely established benchmark. Forecasting practitioners should favor simple averages over more complex methods unless there is strong evidence in support of differential weights.
Andreas Graefe
added 2 research items
The usual procedure for developing linear models to predict any kind of target variable is to identify a subset of most important predictors and to estimate weights that provide the best possible solution for a given sample. The resulting “optimally” weighted linear composite is then used when predicting new data. This approach is useful in situations with large and reliable datasets and few predictor variables. However, a large body of analytical and empirical evidence since the 1970s shows that such optimal variable weights are of little, if any, value in situations with small and noisy datasets and a large number of predictor variables. In such situations, which are common for social science problems, including all relevant variables is more important than their weighting. These findings have yet to impact many fields. This study uses data from nine U.S. election-forecasting models whose vote-share forecasts are regularly published in academic journals to demonstrate the value of (a) weighting all predictors equally and (b) including all relevant variables in the model. Across the ten elections from 1976 to 2012, equally weighted predictors yielded a lower forecast error than regression weights for six of the nine models. On average, the error of the equal-weights models was 5% lower than the error of the original regression models. An equal-weights model that uses all 27 variables that are included in the nine models missed the final vote-share results of the ten elections on average by only 1.3 percentage points. This error is 48% lower than the error of the typical, and 29% lower than the error of the most accurate, regression model.
With the objective of improving the accuracy of election forecasts, we examined three evidence-based forecasting guidelines that are relevant to forecasting with causal models. The guidelines suggest: (1) modifying estimates of the strength of variable effects to account for uncertainty, (2) combining forecasts from diverse models, and (3) utilizing all variables that are important. We applied the guidelines to eight established U.S. presidential election-forecasting models and tested the effects on forecast accuracy by calculating cross-validated out-of-sample forecasts. Modifying effect sizes reduced error compared to the errors of the original model forecasts by about 5% on average. Combining forecasts from the eight models reduced error by 36%. And including all 25 variables from the eight models in a single equal-weights index model reduced error by 46%.
Andreas Graefe
added a research item
The Issues and Leaders model predicts the national popular two-party vote in US presidential elections from people’s perceptions of the candidates’ issue-handling competence and leadership qualities. In previous elections from 1972 to 2012, the model’s Election Eve forecasts missed the actual vote shares by, on average, little more than one percentage point and thus reduced the error of the Gallup pre-election poll by 30%. This research note presents the model’s forecast prior to the 2016 election, when most polls show that voters view Republican candidate Donald Trump as the stronger leader but prefer the Democratic nominee Hillary Clinton when it comes to dealing with the issues. A month prior to Election Day, the model predicts that Clinton will win by four points, gaining 52.0% of the two-party vote.
Andreas Graefe
added a research item
The present study reviews the accuracy of four methods for forecasting the 2013 German election: polls, prediction markets, expert judgment, and quantitative models. On average, across the two months prior to the election, polls were most accurate, with a mean absolute error of 1.4 percentage points, followed by quantitative models (1.6), expert judgment (2.1), and prediction markets (2.3). In addition, the study provides new evidence for the benefits of combining forecasts. Averaging all available forecasts within and across the four methods provided more accurate predictions than the typical component forecast. The error reductions achieved through combining forecasts ranged from 5% (compared to polls) to 41% (compared to prediction markets). The results conform to prior research on US presidential elections, which showed that combining is one of the most effective methods for generating accurate election forecasts.
Andreas Graefe
added a research item
The PollyVote Forecast for the 2016 American Presidential Election - Volume 49 Issue 4 - Andreas Graefe, Randall J. Jones, J. Scott Armstrong, Alfred G. Cuzán
Andreas Graefe
added an update
We are working on a new chart that allows users to compare different forecasts over time. Check it out and provide feedback: http://charts.pollyvote.com/testlinechartwithalloptions
 
Andreas Graefe
added an update
 
Andreas Graefe
added an update
Andreas Graefe gave three presentations on the PollyVote in September:
  1. Annual Meeting of the American Political Science Association, Philadelphia, September 2
  2. Tow Center for Digital Journalism, Columbia University, New York City, September 13
  3. Annual Meeting of the Online News Association (ONA), Denver, September 16
 
Andreas Graefe
added an update
The PollyVote team will be present with three talks at this year's Annual Meeting of the American Political Science Association (APSA) in Philadelphia from September 1 to 4. See the following links for the scheduled sessions:
 
Alfred G. Cuzán
added an update
The PollyVote is updated continuously at PollyVote.com
 
Andreas Graefe
added a project goal
The PollyVote project uses the high-profile application of predicting U.S. presidential election outcomes to demonstrate advances in forecasting research. When the PollyVote was first launched in 2004, the original goal was to demonstrate the benefits of combining forecasts. Since then, the PollyVote team has expanded its focus by analyzing the value of new forecasting methods such as expectation surveys and index models.
For the 2016 election, the PollyVote uses natural language generation technology to produce automated news based on the forecasting data. The goal of this project is to study how users perceive automated news on a high-involvement topic that deals with uncertain forecasts rather than established facts.