
In defence of polls and pollsters

Especially in Israel, samples of the electorate do better at predicting the big picture than at delivering the 'winner'
Sheets of newly printed ballots at a printing house in Jerusalem. (Miriam Alster/ Flash90)

It often happens that pre-election polls of voting intentions do not get it “right.” And it happens that exit polls, i.e. polls of voters leaving the polling stations, do not get it “right.” They tell you, or, to be precise, are understood as telling you, that X will win, and then Y wins instead. That happened in Britain: the outcome of the 2015 general election and the result of the Brexit referendum came as a surprise to some commentators. The 2019 Israeli election is no different. Dr Mina Tzemach’s exit polls gave the Blue and White party a numerical advantage over the Likud; other polls showed the two major contestants running neck and neck.

The post-election debates questioning the usefulness of polls have already started, as one would expect. Rabbi Moshe-Mordechai van Zuiden characterized the polls and pollsters as “absurdly unprofessional” – a grave accusation. I have spoken in person neither to Dr Mina Tzemach nor to Rabbi Moshe-Mordechai van Zuiden, and I am grateful to both. Dr Tzemach should be thanked, at the very least, for raising awareness of the value of polls for understanding social realities. Rabbi van Zuiden, for his part, should be thanked for articulating the position of a “dissatisfied customer.” There will be no progress in the world of polls without criticism of this kind.

I will use this opportunity to clarify a thing or two about polls, both the pre-election and the exit variety. Their main problem is that they rely on samples, which are small subsets of voters. There is no choice. If pollsters could approach and question every single voter about how they will vote (in the coming election) or how they have just voted (on the way out of the polling station), nobody would bother with samples. But questioning everybody is impossible: there is neither the money nor the time for it. And so pollsters go for samples, typically of 1,000 or 2,000 people. This much, I believe, is relatively well known, but its full meaning is not.

Going for a sample instead of the whole is a compromise to start with. Compromise is an inherent feature of the philosophy behind sampling, and its price is accuracy. Samples come with something called the “margin of error,” which means that any percentage derived from the sample could be slightly higher or slightly lower than the actual percentage in the population of voters as a whole. How much higher or lower? That depends on the size of the sample. A sample of 1,000 people carries a margin of error approaching 3%, while a 2,000-strong sample comes with a margin of error of about 2%. So, by way of example, if a 1,000-strong sample shows that 27% of voters support the Likud, the real level of support for the Likud could be anywhere in the range of 24%-30%. Only the full count can establish the exact figure. Thanks to the publicity polls have gained recently, many critics of the polling industry are well aware of the margin of error, but far fewer are aware of how stubborn it is, and how especially “dangerous” it is given the Israeli realities. I will now clarify why this is so.
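For readers who like to see the arithmetic, here is a minimal sketch in Python of the standard 95% margin-of-error calculation (the usual normal-approximation formula); the 27% figure and the 1,000-person sample simply reproduce the example above.

```python
# A minimal sketch of the standard 95% margin-of-error approximation
# for a sampled proportion: 1.96 * sqrt(p * (1 - p) / n).
from math import sqrt

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a sample proportion p from n respondents."""
    return z * sqrt(p * (1 - p) / n)

p, n = 0.27, 1000                      # the Likud example from the text
moe = margin_of_error(p, n)
print(f"27% support, n = 1,000 -> margin of error {moe:.1%}")
print(f"plausible range: {p - moe:.0%} to {p + moe:.0%}")   # roughly 24% to 30%
```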

“Stubborn” means, in this case, that the margin of error is difficult to reduce, let alone eliminate entirely. A margin of error on the scale of 2-3% may not sound like a big deal, but whether it is one depends on the issue at hand. The election outcome is a big deal, and so a 2-3% margin of error will at times be seen as intolerable. How to improve the accuracy? By enlarging the sample, of course. But enlarging the sample costs money and increases the time needed to generate the results. Also, and very importantly, at really large sample sizes the margin of error does not decrease as quickly as it does at small sample sizes. Consider this: a 1,000-strong sample has a margin of error of 3%, while a 5,000-strong sample has a margin of error of 1.4%. For a 10,000-strong sample the margin of error is still about 1% – better than 3%, but samples of 10,000 are utterly impractical. And even a margin of error at the level of 1% can be problematic if high accuracy is expected. In sum, the margin of error is not easy to eradicate, and a reality where two parties or blocs are genuinely not that far from each other is especially demanding with respect to accuracy. The Israeli electoral reality is typically very demanding.
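The diminishing returns are easy to verify with the same formula; the short sketch below, assuming the worst case of a 50% split, shows how slowly the margin shrinks as the sample grows.

```python
# Illustrative only: the worst-case (p = 0.5) margin of error for growing samples,
# showing the diminishing returns described above.
from math import sqrt

for n in (1_000, 2_000, 5_000, 10_000):
    moe = 1.96 * sqrt(0.5 * 0.5 / n)
    print(f"n = {n:>6,}: margin of error ~ {moe:.1%}")
# Doubling from 1,000 to 2,000 gains almost a full point of accuracy;
# doubling from 5,000 to 10,000 gains only about 0.4 of a point.
```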

The presence of the margin of error is not a grave problem in situations where the contestants in an election are very distant from each other. Consider the 2018 Russian presidential election. In that election, according to the announcement of the Russian Electoral Commission, Vladimir Putin secured 76.7% of the vote. The other contestants each secured between 0.65% and 11.8%. The gap between them and Vladimir Putin is such that the margin of error of a typical poll would be immaterial. An exit poll of 1,000 voters giving Vladimir Putin, say, 73% of the vote would come with a margin of error of up to 3%, meaning that the real proportion of votes going to him is, with high probability, in the range of 70%-76%. Still, Putin’s contestants are so far behind that the margin of error is inconsequential for predicting their fate, and his.
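The same calculation as before makes the point numerically: even the bottom of that hypothetical 73% interval sits far above the strongest rival.

```python
# The landslide case: even the lower bound of a hypothetical 73% exit-poll
# reading (n = 1,000) remains far above the strongest rival's 11.8%.
from math import sqrt

putin, n = 0.73, 1000
moe = 1.96 * sqrt(putin * (1 - putin) / n)
print(f"exit poll 73%, n = 1,000 -> at least {putin - moe:.0%} with high probability")
print("strongest rival (official result): 11.8%")
```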

The conclusion: typical polls with 1,000-2,000 respondents do well at predicting large differences between the two leading parties or blocs. Small differences, such as those often observed in Israel, are not easy for them to capture. This should lead us to adjust our expectations of polls. Rather than expecting them to deliver the “winner,” we would do better to see them as delivering the “big picture.” In Russia, the big picture is Putin’s great popularity with the electorate. In Israel, the big picture is the nearly equal popularity of two large parties or blocs. Not much nuance needs to be captured in the former case; in the latter, it is all about nuance, which may not reveal itself until the last vote is counted.

Do people lie in polls? It would be quite amazing if they did not. After all, psychologists say that the average individual lies several times a day. Lying in polls is, arguably, just an aspect of daily lying. Is lying likely to push polls in a particular direction? It can happen, and not because certain voters are worse than others, but because certain types of responses are, well, not as popular as others. When people lie, in polls and in general, they mostly do not do it for fun but out of fear of being judged negatively. When they sense that their honest response will not be approved of by the pollster, or by surrounding friends and acquaintances, they are more likely to give a response that they consider socially acceptable and safe. This is called “social desirability bias.” Is it possible that support for the Likud was a little, only a little, less socially acceptable than support for the Blue and White, the Likud’s major rival? Is it possible that, in Britain, support for Brexit was a little less socially acceptable than support for staying inside the European Union?

The existence of social desirability bias, whatever its direction, is well known to pollsters, who have developed several techniques to deal with it. One such technique, for example, is convincing respondents that the survey is littered with “lie detectors” and that any attempt to hide the truth would be futile. Not all techniques are equally easy to implement in polls of voting intentions and exit polls. But fundamentally, just how much does social desirability affect poll results? The research tells us: not much, a few percentage points. Yet again, we must take into account the size of the predicted differences. Is the difference large, as it is in Russia? Then a small deviation from the truth is not “costly.” Is the difference small, as it is in Israel? Then even a small deviation from the truth matters a lot.
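To see why the same small bias is harmless in one setting and decisive in another, here is a purely illustrative calculation; the 1.5-point shift is an assumed number, not a measured one.

```python
# Purely illustrative: apply the same assumed 1.5-point social-desirability shift
# (support for A reported as support for B) to a landslide and to a close race.
def observed(share_a: float, share_b: float, shift: float) -> str:
    """What the poll would show if 'shift' of A's true support is declared as B."""
    return f"{share_a - shift:.1%} vs {share_b + shift:.1%}"

print("Landslide, true 76.7% vs 11.8%:", observed(0.767, 0.118, 0.015))   # leader unchanged
print("Close race, true 27.0% vs 26.0%:", observed(0.270, 0.260, 0.015))  # apparent leader flips
```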

These challenges – the margin of error, plus the social desirability effect, plus (in Israel at least) the relatively small differences in the attractiveness of the major electoral players – are formidable. Should we abandon polls altogether? I do not think so. Let us look at the results of 10 pre-election voting intention polls of Israeli adults conducted in March-April 2019. The average level of support for the Likud across these polls is 20%; for the Blue and White it is 22%. The average margin of error is 4%. The message is clear: with a difference on the scale of 2% and a margin of error of 4%, one cannot be confident about the final outcome. That the two contestants are very close to each other in popularity is, however, equally clear.
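Laying the two intervals side by side, using the figures just quoted, shows why no responsible pollster would call a winner from these numbers.

```python
# The averaging exercise above: 20% Likud, 22% Blue and White, ~4% margin of error.
# The two intervals overlap heavily, so the polls signal closeness, not a winner.
likud, blue_white, moe = 0.20, 0.22, 0.04

print(f"Likud:          {likud - moe:.0%} to {likud + moe:.0%}")
print(f"Blue and White: {blue_white - moe:.0%} to {blue_white + moe:.0%}")
print("Intervals overlap:", likud + moe > blue_white - moe)
```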

And what about Zehut, a party combining nationalist-messianic and libertarian agendas in its platform, which promised to be the big surprise of this election? The average level of support for Zehut across the ten pre-election voting intention polls stands at 4%. The margin of error in this case is lower than the 4% average quoted above. It is a rather technical point, but for especially low percentages, like Zehut’s, the margin of error should be adjusted downwards; specifically, for Zehut it would be on the scale of 1.2%. Now, consider a 4% level of support in combination with a 1.2% margin of error. This strongly hints at the possibility of Zehut not clearing the electoral threshold, currently 3.25%, when the hour of truth strikes.
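The downward adjustment follows from the same formula, since the margin of error depends on the percentage itself; the sketch below assumes a 1,000-respondent poll, which reproduces the 1.2% figure.

```python
# Why a small party's margin of error is smaller: with p = 4% and an assumed
# n = 1,000, the standard formula gives about 1.2%, and the resulting range
# straddles the 3.25% electoral threshold.
from math import sqrt

p, n, threshold = 0.04, 1000, 0.0325
moe = 1.96 * sqrt(p * (1 - p) / n)
print(f"Zehut at {p:.0%}, n = 1,000 -> margin of error {moe:.1%}")
print(f"plausible range: {p - moe:.1%} to {p + moe:.1%}")
print(f"could fall below the {threshold:.2%} threshold:", p - moe < threshold)
```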

It is over to you now. Would you rather have these insights in front of you in advance of the election, or is your life better without them?

About the Author
The author is a demographer and statistician, born in the USSR – a world that no longer exists – and educated in Israel and Britain. He holds a PhD in Social Statistics and Demography. To date he has served in senior analytical roles at the Central Bureau of Statistics (Israel) and RAND Europe (Cambridge, UK). He is currently a Senior Research Fellow at the Institute for Jewish Policy Research (London, UK). He has published widely on Jewish, Israeli and European demography and social statistics. The author's favourite topics are demographic and social puzzles involving Jews and the people around them: why do Jews live so long? Why do Muslim Arabs in Israel have so many children? Why do women, globally, live longer than men? Is there a link between classic, old-fashioned antisemitism and today's anti-Zionism? These are just a few examples of the questions that have motivated his work and on which he has written extensively.