Bias in approval ratings
This post is part of the Oraclum White Paper 09/2018, published on our website. Oraclum White Papers are analytical reports on Oraclum’s predictions and prediction methods. They are designed to be informative, provide an in-depth statistical analysis of a given issue, issue a call to action, or introduce a unique solution based on one of Oraclum’s products.
Trump’s approval ratings have the same problem as his pre-election polls – they are biased!
Since the beginning of his presidency Donald Trump has been experiencing the lowest recorded presidential approval ratings in US history. According to FiveThirtyEight, the aggregate numbers for March, after a year and two months in office, stand at around 41-42%, lower than for any US president since WWII. Presidential approval ratings are usually correlated to some extent with the probability of re-election to a second term, but even more importantly, less popular presidents tend to drag their parties down in midterm elections (see Figure 1). Notice, however, that no matter how popular they were, only two presidents, Bush in his first term and Clinton in his second, have helped their parties gain House seats in the midterm elections. All the others have seen their parties lose seats, but the size of the loss was inversely proportional to the president’s popularity immediately prior to the midterm election. In other words, a more popular president helped his party lose fewer House seats.

Figure 1: Presidential approval ratings and their parties’ House midterm results
Although at first glance this might sound concerning or reassuring (depending on whether you’re a Republican or a Democrat), bear in mind that Trump has a strong record of defying both polls and historical political trends. He remains a very divisive president, just as he was a divisive presidential candidate; yet he still managed to carry the national victory, while exerting a very strong coattail effect with only 46.1% of the final vote share. His approval ratings therefore need to be taken with a pinch of salt, and should certainly not be read at face value. The reason is similar to why his polling numbers were wrong in 2016 – an increasing number of non-respondents.
Non-response bias in polls
Pollsters in many countries have received a lot of bad press over the past few years. One of the main reasons was their failure to accurately capture voter preferences at election time. The most prominent examples were the big misses in three consecutive UK votes – the 2015 and 2017 general elections and the 2016 Brexit referendum – and of course the 2016 Trump victory in the US.
One reason for this is the rapidly declining response rates for traditional telephone polls. A response rate is the number of people who agree to give information in a survey divided by the total number of people called. According to Pew Research Center, a prominent pollster, and Harvard Business Review, response rates have declined from 36% in 1997 to as little as 9% in 2016. This means that in 1997, in order to get, say, 900 people in a survey, you had to call about 2,500 people. In 2016, in order to get the same sample size, you needed to call 10,000 people. Random selection is crucial here (because the sample mean in random samples is very close to the population mean), and pollsters spend a lot of money and effort to achieve randomness even among those 9% who did respond. But whether this can be truly random is an entirely different question. Such low response rates almost certainly make the polls subject to non-response bias. This type of bias significantly reduces the accuracy of any telephone poll, making it more likely to favor one particular candidate, because the poll captures the opinion of particular groups only, not the entire population. Online polls, on the other hand, suffer from self-selection problems and are by definition non-random and hence biased towards particular voter groups (younger, urban, and usually better educated populations).
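To make the arithmetic concrete, here is a minimal sketch using the response rates cited above:

```python
def calls_needed(target_sample: int, response_rate: float) -> int:
    """Number of phone calls needed to reach a target sample size."""
    return round(target_sample / response_rate)

# Pew's reported response rates: 36% in 1997, 9% in 2016
print(calls_needed(900, 0.36))  # ~2,500 calls in 1997
print(calls_needed(900, 0.09))  # ~10,000 calls in 2016
```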
Following the above example, assume that after calling about 10,000 people and getting only 900 (correctly stratified and supposedly randomized) respondents, the results were the following: 450 for Clinton, 400 for Trump, and 50 undecided (assuming, for simplicity, no other candidates). The poll would then put Clinton at 50%, Trump at 44.4%, and the undecided at 5.6%, and it would conclude that because the sampling was random, the average of responses for each candidate in the sample is likely to be very close to the average in the population.
But it’s not. The low response rate suggests that some of those who do intend to vote simply did not want to express their preferences. Among those 9,100 non-respondents the majority are surely people who dislike politics and hence will not even bother to vote (turnout in the US is usually between 50 and 60%, meaning that almost half of eligible voters simply don’t care about politics). However, among the rest there are certainly people who will in fact vote, some of whom probably support Trump but are unwilling to say so to the interviewer directly. Why people do this is still unknown. There are two plausible explanations for why a potential Trump supporter would refuse to answer a poll: 1) they are embarrassed or afraid to tell a live phone interviewer that they support Trump, or 2) they distrust the pollsters and view them in the same light as the “fake news” media. There could be a number of other reasons, but one thing is certain – voters have started to avoid expressing their opinions in surveys. And this poses a serious problem for the industry, and hence for anyone who depends on information from survey research.
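A small simulation shows how differential non-response skews a poll even when the dialing itself is perfectly random. The response rates by candidate group below are invented for illustration; only the shape of the effect matters:

```python
import random

random.seed(42)

# Hypothetical electorate: true split 48% Trump, 47% Clinton, 5% undecided.
# Assume (for illustration only) that Trump supporters answer pollsters
# slightly less often than Clinton supporters.
response_rate = {"Trump": 0.08, "Clinton": 0.10, "Undecided": 0.09}

population = (["Trump"] * 4800 + ["Clinton"] * 4700 + ["Undecided"] * 500) * 10

respondents = [v for v in population if random.random() < response_rate[v]]

for cand in ("Trump", "Clinton", "Undecided"):
    share = respondents.count(cand) / len(respondents)
    print(f"{cand}: {share:.1%}")
# Despite random dialing, the observed sample underestimates Trump:
# his supporters are simply less likely to pick up and answer.
```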
Before offering potential remedies, how can we be so sure that the non-respondents in polls are in fact Shy Trump voters? Why couldn’t they be Shy Hillary voters instead?
Shy Trump voters
There are several reasons suggesting that non-respondents in polls are in fact Trump rather than Hillary voters.
The first reason is the recent finding that Trump’s approval ratings tend to be higher in Interactive Voice Response (IVR) or online polls than in live telephone polls, i.e. when polls are not conducted by live human interviewers but when people are talking to machines or simply filling out an online poll. The difference is as large as 10 percentage points (48.7% Trump support in IVR surveys versus 38.2% in Internet surveys), which is a huge gap, much larger than the usual margin of error.
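For scale, the sampling margin of error of an estimated proportion is roughly z·√(p(1−p)/n). A quick sketch, assuming a typical sample of 1,000 respondents rather than the samples of the surveys cited above:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical poll of ~1,000 respondents with support around 45%:
print(f"+/- {margin_of_error(0.45, 1000):.1%}")  # about +/- 3.1%
```

A 10-percentage-point gap is more than three times this figure, so it cannot plausibly be sampling noise.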
Furthermore, some pollsters survey the entire adult US population, while others focus only on those who are likely to vote (more on this below). For election polls this can make a big difference. Some people might dislike a candidate so much that they will give him a negative rating, yet they have no intention of voting at all (perhaps they are fed up with politics). On the other hand, anyone rating a political candidate highly will almost surely vote for them. This implies that responses from the entire adult population will be less accurate than responses from likely voters. This won’t make much of a difference in general market research surveys for products or services, but it will make a difference in political polling.
Finally, and most importantly, our own polling during the 2016 election uncovered a systematic anti-Trump bias within the 30 states for which we ran our BASON survey.
Figure 2 compares the performance of our method (x-axis) with the performance of the polling average (y-axis) on the difference between the predicted and actual vote share for Donald Trump. For the polling average, any dots above the horizontal line overestimate Trump, while any dots below the horizontal line underestimate him. For our model, overestimation lies to the right of the vertical line and underestimation to the left of it.
It is clear that our model under- and overestimates Trump to a relatively equal extent across all states, being most precise in the most important swing states (PA, FL, NC, VA, CO, etc.). The polls, on the other hand, consistently underestimate Trump in almost every state. The only outlier, where they overestimated Trump by almost 6%, was DC. This implies that the polls systematically and significantly underestimated Donald Trump.

Figure 2: Oraclum’s BASON Survey vs. polling average for Trump
Looking at the same numbers for Hillary Clinton, we can see that the polls were relatively good at estimating her vote share. For most states they fall within a 2% margin of error, and for about 10 states the polling average was spot on. Our method once again over- and underestimated Clinton to an equal extent, being most precise where it mattered the most.

Figure 3: Oraclum’s BASON Survey vs. polling average for Clinton
Taking all this into account, the key to understanding the pollsters’ underestimation of Trump lies with the undecided voters. The hypothesis of a ‘Shy Trump’ voter could therefore be true – many Trump voters simply did not want to identify themselves as such in the polls. Or they really were undecided until the very last minute, making their final decision in the polling booth itself.
Finally, let’s examine this systematic bias a bit further by comparing the calibration of the BASON Survey against the polling average (calibration measures how closely predictions track actual results). The following graph shows the difference between predictions (y-axis) and actual results (x-axis) for our method (blue dots) and the polling average (orange dots). A well-calibrated prediction should have a slope close to 1, which is exactly what our method proved to have (a slope of 1.1). The polling averages, on the other hand, had a flatter slope of 0.77, which confirms a systematic underestimation of Trump even in states that Clinton easily won.

Figure 4: Calibration of the BASON Survey vs. polling average
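The slope itself is nothing exotic: it is an ordinary least-squares fit of predictions against actual results. A minimal sketch with invented state-level numbers (the real inputs are the per-state figures behind Figure 4):

```python
import numpy as np

# Hypothetical per-state (actual, predicted) Trump vote shares, in %.
actual    = np.array([30.0, 41.0, 45.0, 48.0, 50.0, 57.0, 62.0])
predicted = np.array([32.0, 41.5, 44.0, 47.0, 49.5, 55.0, 58.0])

# Fit predicted = slope * actual + intercept.
slope, intercept = np.polyfit(actual, predicted, 1)
print(f"calibration slope: {slope:.2f}")
# A slope near 1 means predictions rise one-for-one with results;
# a flatter slope (like the polling average's 0.77) means strong
# candidates are systematically underestimated.
```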
So how is Trump doing right now?
If we look at things on a state-by-state level, Gallup has data for the entire past year. It shows that Trump is unpopular countrywide and that he is still underperforming his electoral result in almost all states. In the red states his net approval is still positive, but in the swing states, including all of those he won in 2016, his current approval is worse than his electoral result (see Table 1, column “Diff from 2016”).
However, there are a few important caveats here. First, the data reports averages for the entire past year, from January 20th to December 30th 2017, so it does not capture the recent upward change in trend. Second, for the whole country on average, using a year-long sample of over 170,000 people, the approval rating was 38% and disapproval 56% (accounting for differences in state size). This is a bit lower than the current polling average for March, which puts him between 41 and 42%. If we distribute this improvement evenly across all states, it suggests that Trump is doing slightly better in the swing states that he won; however, he is still underperforming in almost all of them, by at least 3 to 4 percentage points (instead of 6 to 8 p.p.).
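As a sketch of that adjustment, here is the uniform-swing arithmetic in code. The 2017 state approval figures are placeholders rather than Gallup’s actual numbers; the 2016 results are Trump’s real vote shares:

```python
# Apply the national improvement uniformly across states (a crude
# "uniform swing" assumption). State approval figures are illustrative.
gallup_2017_national = 38.0
current_national = 41.5  # midpoint of the March 41-42% polling average
swing = current_national - gallup_2017_national

state_approval_2017 = {"FL": 42.0, "PA": 41.0, "MI": 40.0, "WI": 41.0}
state_result_2016 = {"FL": 49.0, "PA": 48.2, "MI": 47.5, "WI": 47.2}

for state, approval in state_approval_2017.items():
    adjusted = approval + swing
    diff = adjusted - state_result_2016[state]
    print(f"{state}: adjusted approval {adjusted:.1f}%, "
          f"{diff:+.1f} p.p. vs 2016 result")
```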
The third caveat concerns Gallup’s methodology. Gallup is one of those pollsters that use telephone interviews, calling a representative sample of all over-18 Americans to see whether they approve or disapprove of the President. The first issue here, as emphasized previously, is that not all of these people eventually vote. Looking at Rasmussen polls, which include only likely voters, Trump’s approval ratings are much higher – around 46-47% in March. The second issue is that Gallup polls are done via live telephone interviews, which makes them more subject to anti-Trump bias. Rasmussen uses an automated polling methodology (IVR) in which respondents give their opinions to a machine, making them more likely to be truthful.

Table 1: Trump 2017 approval ratings, 2016 pre-election polls, and 2016 results
Finally, the fair comparison in this case is not Trump’s election result versus his approval ratings, but rather his 2016 pre-election polls versus his 2017 approval ratings (the final column in Table 1). By this measure, his performance in 2017 was not too far off from his pre-election polling. In fact, in a few key swing states, like Ohio or Pennsylvania, or the surprises he pulled off in Michigan and Wisconsin, he is in very much the same position he was in before the 2016 election. He is underperforming in Florida, North Carolina, Georgia, and Iowa (of the states he won); however, taking into account that his nationwide trend has improved and now sits at 41-42% instead of the 38% Gallup reported for 2017, Trump is very likely not doing any worse than he was in 2016.
Bearing in mind that the current polls are still underestimating Trump, the face values of his approval ratings will not be very informative of the actual state of his popular support. His approval rating is certainly low, but so was his general election vote share, yet he still managed to scrape a victory in almost every key swing state.
What does this suggest about the midterm House and Senate races? The recent election results do offer a glimpse of hope to the Democrats, as they imply that Trump’s coattail effect has waned for his fellow Republicans down the ballot. Given that he himself is not running and that opinion of politicians is at an all-time low in the US, there shouldn’t be any coattail effect this time, and the House races will probably repeat the historical trend whereby low approval ratings of a President translate into a House net loss for his party. However, when designing pre-election prediction models for specific House and Senate races, the Democrats would be wise not to lean too heavily on the current Trump approval ratings. The suggestion is either to avoid placing a high emphasis on approval ratings, as they tend to overestimate the chances of the Democratic Party’s candidates, or to use an alternative method that can estimate Trump’s approval rating much more correctly and precisely.
Oraclum’s BASON Survey – the only poll to successfully solve the sampling bias problem
Oraclum’s BASON Survey is just such a method. It has proven to yield much more accurate estimates of election outcomes in an environment where people distrust polls and tend to be less truthful. The BASON Survey asks people who they think will win, and how they feel about who other people think will win.
It is based on a wisdom-of-crowds (WoC) approach to polling, accompanied by a network analysis of survey participants and their friendship networks in order to eliminate groupthink bias (see the Box for further explanation). By doing so it is able to generate much more accurate results than regular polls, which struggle to find the right sampling methodology at a time when response rates are at their historical lows.
By asking people to express their opinions on what others in their neighborhoods or states would do, we avoid the issue of respondents not truthfully reporting their own preferences. After all, what we ask for is a prediction of who will win, not a declaration of who you will vote for – it is about thinking what other people will do. The BASON Survey leaves people well within their comfort zones and gives them a chance to think and express their opinion without pressure.
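To make the wisdom-of-crowds logic concrete, here is a deliberately simplified aggregation – an illustration of the general idea only, not Oraclum’s actual BASON algorithm, whose weighting and network analysis are more involved:

```python
# Toy wisdom-of-crowds aggregation -- an illustration of the general
# idea, NOT the BASON algorithm. Each respondent gives a predicted
# vote share for a candidate plus a confidence weight (which a real
# method might derive from how well they judge their own circle).
respondents = [
    # (predicted Trump share %, confidence weight 0-1)
    (46.0, 0.9),
    (44.0, 0.6),
    (49.0, 0.8),
    (42.0, 0.4),
    (47.0, 0.7),
]

weighted_sum = sum(pred * conf for pred, conf in respondents)
total_weight = sum(conf for _, conf in respondents)
print(f"crowd forecast: {weighted_sum / total_weight:.1f}%")
# Because respondents forecast the outcome rather than reveal their own
# vote, a "shy" voter can contribute accurate information without ever
# disclosing a personal preference.
```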
The way we ask the questions also gives people a further incentive to think about them and self-correct their own answers. Behavioral scientists have shown that such delayed judgment improves the accuracy of people’s forecasts. By asking our questions this way we deliberately sacrifice large samples for accuracy. It is important to stress that the BASON Survey does not use any private information of its respondents and has no way of knowing who they are. We base our predictions purely on what people tell us.
The BASON Survey has been tested on a number of elections and market research problems, and has yielded remarkable accuracy every time. It is the single best prediction tool available on the market, guaranteed to correctly identify what voters (or customers) want and why, without invading anyone’s privacy.
Read more about the BASON Survey here, or in the White Paper.