Our final prediction is a **close victory for Remain**. According to our BAFS method, **Remain** is expected to receive a vote share of **50.5%**, giving it a **52.3%** chance of winning.

Our prediction produces the probability distribution shown in the graph above (see the explanation on the right), presenting a range of likely scenarios for the given vote shares. Over the past week we have been consistently providing estimates of the final vote share and the likelihood of each outcome. The daily changes and close results simply reflect the high levels of uncertainty and ambiguity surrounding the EU referendum. However, our prediction survey (the BAFS) has detected a slight shift in the trend in favour of Remain over the past two days.

This is why our final prediction gives a slight edge to Remain: a vote share of 50.5% for Remain and 49.5% for Leave (the **graph below** shows the vote share for Leave, denoted ‘votes for Brexit’ – a higher expected vote share for Brexit lowers the probability of Remain as the final outcome). The probabilities for the two outcomes are also quite close, standing at 52.3% for Remain and 47.7% for Leave. In other words, when polling is this close and the people themselves expect and predict a very close result, Remain would win about 52% of the time and lose about 48% of the time.

*Vote share for Leave (votes for Brexit). The grey area represents the average error; as the sample size grew, the average error decreased.*

*A timeline of probabilities for both outcomes since the start of our survey.*

**Why such low probabilities?**

Because of a relatively high margin of error (±5.3%). However, given that this is **not** a standard survey with a representative sample, **the error term does not mean much** in this case (there is a whole debate about the controversy surrounding the margin of error – read it here).

Nevertheless, why is the error so high? Primarily because of very high levels of uncertainty in the actual polls, as well as in the predictions our respondents gave us. Our sample size was also relatively small (more on that below). If the error were around 1%, the probabilities would have been much more strongly in favour of Remain (above 70%). This is closer to what the prediction markets and the superforecasters are saying.
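As a rough sketch of this relationship (this is not the BAFS code itself, and it assumes the reported error can be read as the standard deviation of a normal distribution centred on the expected vote share – the exact mapping used in our model differs slightly), the win probability can be computed as the chance that the actual share exceeds 50%:

```python
from math import erf, sqrt

def win_probability(expected_share: float, sigma: float) -> float:
    """P(actual share > 50%) under a normal model N(expected_share, sigma^2)."""
    z = (expected_share - 50.0) / sigma
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# An expected Remain share of 50.5% with a wide error band:
# the win probability barely clears 50%.
print(round(win_probability(50.5, 5.3), 3))

# The same expected share with an error of around 1 point:
# the probability jumps to roughly 70%.
print(round(win_probability(50.5, 1.0), 3))
```

The point of the sketch is the shape of the dependence: the further the error shrinks, the more decisively a 0.5-point lead translates into a win probability well above 50%.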

**But doesn’t that make the prediction as good as a coin toss?**

Indeed. As it stands, the race is nothing short of a coin toss.

The problem with predicting such close outcomes lies in how we measure a prediction method’s relative success. Usually, being correct to within a 3% margin is considered quite precise. In this case nothing short of a 1% margin will do, which is an extremely difficult target to guesstimate.

Having said that, we do hope our prediction method will be correct within its margin of error, but more importantly that it has correctly predicted the final outcome.

**How does the method work?**

The BAFS method (Bayesian Adjusted Facebook Survey) is a prediction method based on its own unique poll in which we ask people not only to express their preferences, but also **who they think will win** and **how they feel about who other people think will win**. This makes it different from regular polls, which are simply an expression of voter preferences at a given point in time.

The difference between standard polling and our method was noticeable during our initial predictions, when we had a very small sample (around 100 respondents) that was clearly biased towards one option (it gave Remain a 66% vote share). Even so, we were able to produce very reliable and realistic forecasts (see the graph below: the first results pointed to a slight victory for Remain, even with very high margins of error – initially over 10%). The later variations in our predictions were small even as the sample size trebled.
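The way the error band narrowed with sample size roughly follows the textbook square-root law. As an illustration only (the formula below assumes a simple random sample, which – as noted above – our self-selected sample is not), the 95% margin of error for a share estimated from n respondents is:

```python
from math import sqrt

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error, in percentage points,
    for a share p (as a fraction, 0-1) estimated from n respondents."""
    return 100.0 * z * sqrt(p * (1.0 - p) / n)

# Around 100 respondents and a near-50:50 split:
# the margin is close to 10 points, as in our early estimates.
print(round(margin_of_error(0.5, 100), 1))

# Trebling the sample cuts the margin by a factor of sqrt(3),
# to under 6 points.
print(round(margin_of_error(0.5, 300), 1))
```

This is only indicative of the trend in the grey band on the graph; the actual BAFS error is computed from the model, not from this formula.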

We follow the logic of Murr’s (2011, 2015, 2016) citizen forecaster models, in which even a small sample within each constituency (an average of 21 respondents per constituency for group forecasts) is enough to provide viable estimates of the final outcome across constituencies.

The BAFS method, similar to the citizen forecaster model, is therefore relatively **robust to sample size**, as well as to the self-selection problem (all of our respondents participated in the survey voluntarily). Both of these issues undermine the quality of standard polling, but in this case they were shown to have little or no effect. The BAFS method, utilizing the **wisdom of crowds** approach (group-level forecasting), benefited from a diverse, decentralized, and independent group of respondents (see Surowiecki, 2004), which gave us very realistic estimates of the final outcome. This implies that our prediction is likely to be quite close to the actual outcome on 23rd June.
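The core of the citizen-forecaster idea can be sketched in a few lines (a simplified illustration, not our actual aggregation code, and the sample responses below are hypothetical): each respondent reports who they *think* will win, and the group forecast is simply the majority expectation.

```python
from collections import Counter

def group_forecast(expectations):
    """Citizen-forecaster style aggregation: each respondent reports
    who they THINK will win; the group forecast is the majority
    expectation, returned with its share of the responses."""
    counts = Counter(expectations)
    winner, votes = counts.most_common(1)[0]
    return winner, votes / len(expectations)

# A hypothetical constituency sample of 21 respondents (the average
# group size in Murr's models): even a small, individually noisy
# sample can yield a clear majority expectation.
sample = ["Remain"] * 12 + ["Leave"] * 9
print(group_forecast(sample))
```

The full BAFS method goes further – weighting these expectations and adjusting them in a Bayesian fashion – but the majority-expectation step above is what makes small, self-selected samples usable at all.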

**How do we compare to other methods?**

As we announced last month, in addition to our central prediction method we will use a **series of benchmarks** for comparison with our BAFS method. In the following tables we have summarized the relevant methods. For more about each method please read here. (Note: We have decided to introduce two new methods, from Number Cruncher Politics and from Elections Etc., both of which have proven track records in previous elections).

* For the adjusted polling average, the regular polling average, and for the forecasting polls we have factored in the undecided voters as well.

As it stands, our vote-share prediction is quite close to the others (the polls lean slightly towards Leave, while the other prediction methods lean slightly towards Remain), but our probability estimates diverge more from theirs, for the reasons described above – if our error were lower, our probabilities would also have been around 70:30 in favour of Remain.

Finally, here is how the map of the UK should look if our predictions are correct:

And here is the table:
