We called it! How we predicted a Trump victory

As our regular readers know by now, last week’s prediction of a Donald Trump victory in the US election was spot on!

We correctly predicted all the key swing states (PA, FL, NC, OH), and even that Hillary could win the popular vote but lose the Electoral College. Here are our results as I presented them in a Facebook post on the eve of the election:

The story was first covered by academic sources. The University of Oxford published it as part of its main election coverage, as did the LSE EUROPP blog. Our idea of a science-based prediction survey was also covered in New Scientist in the week before the election.

More news coverage soon to come!

Details of our prediction 

The results nevertheless came as an absolute shock to many, and it was the pollsters who took the biggest hit. All the major poll-based forecasts, a number of statistical models, the prediction markets, and even the superforecaster crowd got it wrong (we have summarized their predictions here). They estimated high probabilities of a Clinton victory, even though some were more careful than others in acknowledging that the race would be very tight.

Our prediction survey, on the other hand, was spot on! We predicted a Trump victory, and we called all the major swing states in his favour: Pennsylvania (which not a single pollster gave to him), Florida, North Carolina, and Ohio. We gave Virginia, Nevada, Colorado, and New Mexico to Clinton, along with the usual Red and Blue states to each candidate. We missed only three: New Hampshire, Michigan, and Wisconsin. For Wisconsin, however, we didn’t have enough survey respondents to make our own prediction, so we had to fall back on the polling average. The only genuine misses of our method were therefore Michigan, where it gave Clinton a 0.5-point lead, and New Hampshire, where it gave Trump a 1-point lead.

Every other state, however close, we called right. In Florida we estimated 49.9% for Trump vs. 47.3% for Clinton; the final result was 49.1 to 47.7. In Pennsylvania we had 48.2% for Trump vs. 46.7% for Clinton (48.8 to 47.6 in the end). In North Carolina our method said 51% for Trump vs. 43.5% for Clinton (Clinton got a bit more, 46.7, but the Trump call was spot on at 50.5%). Our model even gave Clinton a higher chance of winning the popular vote than the electoral vote, which also proved correct. Across the states, our average error was within a single percentage point. Read the full prediction here.
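To make the accuracy claim concrete, here is a quick sketch that computes the absolute prediction errors for the three states quoted above. It uses only the figures cited in this post (not the full state-by-state dataset), so take it as an illustration rather than the complete evaluation.

```python
# Absolute prediction errors for the three swing states quoted in the post.
# Each entry: (predicted Trump %, predicted Clinton %, actual Trump %, actual Clinton %)
predictions = {
    "Florida":        (49.9, 47.3, 49.1, 47.7),
    "Pennsylvania":   (48.2, 46.7, 48.8, 47.6),
    "North Carolina": (51.0, 43.5, 50.5, 46.7),
}

for state, (pred_t, pred_c, act_t, act_c) in predictions.items():
    trump_err = round(abs(pred_t - act_t), 1)     # error on the Trump share
    clinton_err = round(abs(pred_c - act_c), 1)   # error on the Clinton share
    print(f"{state}: Trump off by {trump_err} pts, Clinton off by {clinton_err} pts")
```

For the Trump share, the call that decided each of these states, every error above comes out under one percentage point.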

It was a big risk to ‘swim against the current’ with our prediction, particularly in the US, where the major predictors and pollsters had long been so good at making correct forecasts. But we were convinced that the method was sound, even though it produced, at first glance, very surprising results.

Read more about the method here.
The graphics
Here is, once again, our final map:

For the key swing states:

And here are the actual results (courtesy of 270towin.com):


Here, by the way, is what the other poll-based forecasters were saying (more on that here):

In addition to these, the other forecasters we were tracking were even more confident in Hillary taking all the key states. As you can see, no one gave PA to Trump; some were more careful about FL and NC, although those too were mostly expected to go to Hillary. The reason I think PA was the key state in this election is that everyone considered Hillary’s victory there certain, not to mention the shocks of losing MI and WI as well. Had Hillary held these three states, she would have won with 278 electoral votes even while losing the toss-ups of FL and NC. This, we believe, is why all the forecasters were so certain (some more than others) that Hillary would pull it off: holding on to what were supposed to be her strongholds (all three states last went Red under Reagan in 1984) was to be enough for victory. But Trump’s dominant performance in the Rust Belt shattered the Clinton team’s firewall strategy and won him the election.
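The electoral arithmetic behind that firewall scenario can be checked in a few lines. A minimal sketch, assuming the 2016 electoral vote counts (PA = 20, MI = 16, WI = 10) and the 232 electoral votes from the states Clinton actually carried:

```python
# Firewall scenario: Clinton keeps PA, MI, and WI but still loses FL and NC.
clinton_actual = 232                         # EV from states Clinton carried in 2016
firewall = {"PA": 20, "MI": 16, "WI": 10}    # the three "firewall" states she lost

hypothetical = clinton_actual + sum(firewall.values())
print(hypothetical)  # 278 -- past the 270 needed to win
```

With 278 electoral votes she clears the 270 threshold, which is exactly why losing all three firewall states proved decisive.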


