Comparing the 2020 US election polls & predictions

What the others are saying

UPDATE: October 22nd 2020

In the 2016 election, the accuracy of our prediction was all the more impressive given the failure of every single benchmark we compared ourselves against.

Polling aggregators

The benchmarks are separated into several categories. The first includes sites that use a particular polling aggregation mechanism: Nate Silver’s FiveThirtyEight, the Princeton Election Consortium, the Real Clear Politics average of polls, PollyVote, The Upshot, and The Economist. For each site we track the probability of winning for each candidate (if given), their final electoral vote projection, and their projected vote share. The specific methodology for each can be found on their respective websites; each makes a commendable effort in the election prediction game (except RCP, which is just a simple average of polls).

Source: The Upshot
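As a rough illustration of the bookkeeping described above, the sketch below shows an RCP-style "simple average of polls" alongside the three quantities we record per site. The poll margins and site name are placeholders for illustration only, not real data.

```python
# Illustrative sketch (not real data): an RCP-style simple average of polls,
# plus the three quantities tracked per aggregator in this comparison.
from statistics import mean

# Hypothetical national poll margins (challenger minus incumbent, in points) -- placeholders only.
hypothetical_polls = [4.0, 6.5, 5.0, 7.0, 3.5]

# RCP-style benchmark: nothing more than an unweighted mean of recent polls.
rcp_style_average = mean(hypothetical_polls)

# For each aggregator we record (where available):
#   - probability of winning,
#   - final electoral vote projection,
#   - projected vote share.
benchmark_record = {
    "site": "ExampleAggregator",   # placeholder name
    "win_probability": None,       # filled in from the site, if published
    "electoral_votes": None,
    "vote_share": None,
}

print(f"RCP-style average margin: {rcp_style_average:+.1f} points")
```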

Models

There are two kinds of election prediction models we look at. The first group consists of political-analyst-based models from reputable non-partisan sites analyzing US elections: the Cook Political Report and Sabato’s Crystal Ball. Each is based on a coherent and sensible political analysis of elections. Here we report only their electoral college predictions, with the tossup seats as given in their reports. These models do not give probabilities or vote share predictions.

Prediction markets & betting odds

Next are prediction markets. Prediction markets have historically been shown to be even better than regular polls at predicting the outcome (except in the previous election, where they gave Clinton, on average, a 75% probability of winning). Their success is often attributed to the fact that they use real money, so people actually “put their money where their mouth is” and are therefore more likely to make better predictions.
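For readers unfamiliar with how market quotes become the win probabilities compared here, the sketch below uses a standard (assumed) convention: a contract price or the inverse of decimal odds is read as an implied probability, normalized to remove the bookmaker's margin. The odds shown are placeholders, not real quotes.

```python
# Minimal sketch (standard convention, not this article's exact method):
# converting bookmaker decimal odds into implied win probabilities.

def implied_probabilities(decimal_odds):
    """Convert decimal odds to probabilities, removing the overround."""
    raw = [1.0 / o for o in decimal_odds]  # 1/odds = raw implied probability
    total = sum(raw)                       # >1 because of the bookmaker's margin
    return [p / total for p in raw]

# Hypothetical odds for a two-candidate race (placeholders only).
probs = implied_probabilities([1.55, 2.60])
print([f"{p:.0%}" for p in probs])
```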

Superforecasters

Finally, we compare our method against the Superforecaster crowd of the Good Judgement Project. “Superforecasters” is a colloquial term for participants in Philip Tetlock’s Good Judgement Project (GJP). The GJP was part of a wider forecasting tournament organized by the US government agency IARPA following the intelligence community fiasco regarding WMDs in Iraq. The government wanted to find out whether there exists a more reliable way of making predictions that would improve decision-making, particularly in foreign policy. The GJP crowd (all volunteers, regular people, seldom experts) significantly outperformed everyone else several years in a row. Hence the title: superforecasters (there are a number of other interesting facts about them — read more here, or buy the book). However, superforecasters are only a subset of the more than 5,000 forecasters who participate in the GJP. Given that we cannot really calculate and average out the performance of the top predictors within that crowd, we have to take the collective consensus forecast of all the forecasters in the GJP.
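The last step mentioned above, taking a collective consensus forecast, can be pictured as a simple aggregate of the individual probability forecasts. The sketch below assumes a plain median as the aggregation rule and uses placeholder forecasts; the GJP's own aggregation may differ.

```python
# Sketch under an assumption: the "consensus forecast" is treated here as a
# simple median of individual forecasters' probabilities (placeholder values).
from statistics import median

individual_forecasts = [0.62, 0.70, 0.55, 0.81, 0.66]  # hypothetical probabilities

consensus = median(individual_forecasts)
print(f"Consensus forecast: {consensus:.0%}")
```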
