Israel’s elections are over, and there are no more polls for the data junkies to pore over and discuss. Reality has intruded on predictions and voter models.
It’s almost an axiom that Israel’s voter-intention polls are inaccurate and unreliable. Despite this, the media commissions them, quotes them and uses them as the basis for telling the story of the election.
It’s only fair, then, that we compare the actual results to the polls and ask which parties did as predicted, which defied expectations, which pollsters got it right and which were, ultimately, wrong.
Likud Beiteinu
- Mean: 33.66
- Median: 33
- High: 37 (Maagar Mochot)
- Low: 32 (Dialog, Geocartography)
- Actual seats: 31
Despite some of the more upbeat briefings from Likud Beiteinu, the party’s poll ratings never really recovered from the merger of the two lists, and it slowly bled votes over the course of the election campaign, seeming to hit rock bottom at the beginning of January. Apart from Maagar Mochot (which had a high Likud share in all its polls), the polling companies were all pointing to 32-34 seats for Likud Beiteinu. The final score of 31 was pretty close to those predictions.
Labour
- Mean: 16.66
- Median: 17
- High: 18 (Teleseker)
- Low: 15 (Maagar Mochot)
- Actual seats: 15
Labour also had a lacklustre election campaign and was squeezed by other parties of the centre and left. Its poll ratings had dipped in the few weeks leading up to the election, and the actual result was within the margin of error of most of the pollsters, even if they were generally a little high. Unlike its Likud number, Maagar Mochot got the Labour seats exactly right.
Shas
- Mean: 10.9
- Median: 11
- High: 12 (Maagar Mochot, Dialog)
- Low: 10 (Geocartography, Panels, Teleseker)
- Actual seats: 11
Everyone got this one basically right, with five companies correctly predicting Shas would get 11 seats in this Knesset, and all of them in the 10-12 range.
HaBayit HaYehudi
- Mean: 14.4
- Median: 14
- High: 17.5 (Geocartography)
- Low: 12 (Dahaf)
- Actual seats: 12
One of the ‘surprises’ of election night was HaBayit HaYehudi’s relatively poor performance. Some polls had suggested that the party would be the second-largest, but in the end they managed to claim only 12 seats – a massive increase on the party’s previous position, but a disappointment in light of the increased expectations.
Polls don’t predict what will happen at the actual election; they are only supposed to give you the position on the day the poll was conducted. There was some evidence that HaBayit HaYehudi was beginning to lose popularity in the last few polling days before the election. The final tally of 12 seats was only 2 lower than the median polling result of 14 seats, so even those that got it wrong largely didn’t get it *very* wrong. A special mention here goes to Dahaf, who correctly had 12 seats in their final pre-election poll, and to Geocartography, who were much further out at 17.5.
Yesh Atid
- Mean: 11.1
- Median: 11
- High: 12 (New Wave, Dialog, Smith, Midgam)
- Low: 8 (Maagar Mochot)
- Actual seats: 19
This is where it gets interesting. Everyone got the Yesh Atid seats wrong – not just a little bit wrong but extremely wrong. Too wrong to be random sampling error.
How did all the companies screw this up?
Perhaps there was a serious sampling bias which was not accounted for in anyone’s weighting models. One commonly cited claim is that Israeli telephone polls don’t sample mobile phones, which would account for an under-representation of young people. This probably isn’t actually the issue: firstly, I have been told that it isn’t actually true – I know Smith includes mobile phones, for example.
Secondly, experience in the UK shows that adding mobiles to a polling sample doesn’t do much that can’t already be fixed by weighting. If young people are under-sampled, their responses are weighted up. This means that a persistently low number of young people in the sample would increase the sampling error of the polls, making them less accurate and more volatile, but it wouldn’t introduce a systematic bias: the errors should cancel out over enough polls. The only way the mobile-phone issue becomes a real problem is if it introduces a sampling bias that can’t be weighted for by other factors (e.g. income, residential status, education). Although I don’t think this was a big factor, it should be investigated when the polling companies review their methodologies.
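To make the weighting point concrete, here is a minimal sketch of how an under-sampled group gets weighted back up to its population share. All the numbers and group boundaries are invented for illustration; real pollsters rake on several variables at once, not one.

```python
# Hypothetical sketch of demographic weighting. If young respondents are
# under-sampled, each one is counted more heavily so that the weighted
# sample matches known population shares. Figures are invented.

def weight_responses(sample_counts, population_shares):
    """Return a per-group weight so the weighted sample matches the population."""
    total = sum(sample_counts.values())
    weights = {}
    for group, count in sample_counts.items():
        sample_share = count / total
        weights[group] = population_shares[group] / sample_share
    return weights

# Suppose 18-29s are 25% of the electorate but only 10% of 500 respondents:
sample = {"18-29": 50, "30-49": 200, "50+": 250}
population = {"18-29": 0.25, "30-49": 0.40, "50+": 0.35}

weights = weight_responses(sample, population)
# Each 18-29 respondent now counts 2.5x. The smaller the raw group, the
# bigger the weight -- and the more each random fluctuation in that group
# is amplified, which is the "more volatile, but not biased" point above.
```

Note how the correction removes the skew but magnifies noise in the small group, which is why under-sampling raises variance rather than creating a systematic bias.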
Another reason given for why the polls missed the rise of Lapid is that many voters only made up their minds shortly before the election, and that these “don’t know” voters broke massively for Yesh Atid.
This might be true – although it’s worth noting that the totals for the other parties were all broadly in the right places, so almost ALL the undecided voters went to Lapid. Still, it doesn’t account for how the polls misled the public into thinking that Yesh Atid was heading for a much lower total than they got.
There are ways of dealing with ‘don’t know’ responses in polls. You can ask people which party they’re leaning towards, and then allocate a proportion of them (say 65%) to that party. If they are deciding between two parties, you could split them between those two parties. Or, in Israel’s proportional system, you could include a number of undecided, up-for-grabs seats in the poll’s topline figures.
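The leaner approach described above can be sketched in a few lines. The 65% rate comes from the text; the party figures are invented placeholders, not real polling numbers.

```python
# Hypothetical sketch of allocating "don't know" respondents: a fixed
# fraction (here 65%, as in the text) of each party's leaners is added to
# that party's decided total. All respondent counts are invented.

def allocate_leaners(decided, leaners, rate=0.65):
    """Add `rate` of each party's leaners onto its decided count."""
    adjusted = dict(decided)
    for party, n in leaners.items():
        adjusted[party] = adjusted.get(party, 0) + rate * n
    return adjusted

decided = {"Yesh Atid": 90, "Labour": 140, "Likud Beiteinu": 280}
leaners = {"Yesh Atid": 60, "Labour": 20}  # undecideds leaning each way

adjusted = allocate_leaners(decided, leaners)
# Yesh Atid: 90 + 0.65 * 60 = 129 -- a sizeable shift that simply
# excluding the undecideds from the topline would have missed.
```

The point of the sketch is that any reasonable treatment of leaners moves the topline; dropping them entirely is the one choice guaranteed to miss a late surge.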
The Times of Israel’s poll showed, unsurprisingly, that undecided voters tend to be undecided between a small range of parties. If polling companies are totally excluding undecideds from their models, it is no surprise that they got Yesh Atid’s vote share so badly wrong. If they did try to include undecideds in their models, then their methods for doing this are broken. Of course, as I’ve said before, these methods are secret.
Think how the electoral narrative would have changed if Yesh Atid had been correctly pegged at 18-20 seats in the closing weeks of the campaign. Likud-HaBayit Hayehudi waverers might have been drawn back to Netanyahu, while a Lapid bandwagon might have drawn more centre-left votes away from Livni and Labour. The final shape of the Knesset might have been very different.
This stuff matters. We shouldn’t let the polling companies off the hook just because they were largely correct for the other parties. We deserve an explanation of what went wrong.
United Torah Judaism
- Mean: 5.9
- Median: 6
- High: 6 (Everyone except Dialog)
- Low: 5 (Dialog)
- Actual seats: 7
Almost all polls had UTJ at 6 seats throughout the whole campaign, with very little variance. The actual result of 7 was close, but given the consistency of the polls predicting 6 seats, this was actually not a great call and probably reflects a mistake in the demographic weighting models used by the companies. Alternatively, UTJ might have been boosted by the withdrawal of a minor Haredi party a few days before the election.
HaTnua – Livni party
- Mean: 7
- Median: 7
- High: 9 (Dahaf, Midgam)
- Low: 5 (Maagar Mochot)
- Actual seats: 6
Livni’s party was always a quixotic endeavour. It came into being because Livni failed to reach a deal with Labour or Yesh Atid, and it attracted some household names onto its list. Its poll numbers were all over the place, inside a range of about 5-11 seats over the campaign. From January, polls showed Livni’s party slowly losing votes to Lapid and others. Given that downward trend, the 6 seats it actually achieved was pretty accurately called by the polling companies.
Meretz
- Mean: 5.8
- Median: 6
- High: 7 (Maagar Mochot)
- Low: 5 (Teleseker, Midgam, New Wave)
- Actual seats: 6
Meretz had a slow start, with early polls showing the party getting only 4 seats. They picked up as the election neared and the final polls were basically correct.
Not much to say here. Given the small sample sizes, most polling companies did well to correctly predict that Kadima would pass the threshold and that Otzma L’Yisrael would attract enough votes for 2 seats but still fail to pass the threshold, and most got the seats for the Arab parties broadly right too.
To work out which pollster was the most (and least) accurate, I added together all the differences between the predicted totals for each party and the actual totals. For example, Dahaf had a Yesh Atid total of 11 seats but they actually got 19, so the difference – 8 – gets added to the total points for Dahaf.
I then made an adjusted points count which only takes into account differences of two seats or more; if a company was within one seat of the true figure, that didn’t count towards the adjusted total. As it happened, the best and worst pollsters were the same under each method.
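The scoring method just described amounts to a few lines of arithmetic. Only the Dahaf Yesh Atid figure (11 predicted vs 19 actual) comes from the article; the Labour prediction below is a placeholder to show how the adjusted score discards one-seat misses.

```python
# Sketch of the article's scoring method: the unadjusted score sums the
# absolute seat errors per party; the adjusted score ignores any error of
# one seat or less. Dahaf's Labour figure here is a placeholder.

def score(predicted, actual, threshold=0):
    """Sum per-party seat errors, counting only those above `threshold`."""
    return sum(
        abs(predicted[p] - actual[p])
        for p in actual
        if abs(predicted[p] - actual[p]) > threshold
    )

actual = {"Yesh Atid": 19, "Labour": 15}
dahaf = {"Yesh Atid": 11, "Labour": 16}  # Labour prediction is invented

unadjusted = score(dahaf, actual)             # 8 + 1 = 9
adjusted = score(dahaf, actual, threshold=1)  # the 1-seat miss drops out: 8
```

Lower scores mean a more accurate final poll under both variants.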
Using this method, the best final poll of the election was conducted by Smith in the Jerusalem Post. A fairly good pollster in 2009, Smith scored only 18 points on the unadjusted method and 15 points on the adjusted method – fully 7 of these were from the Yesh Atid surge.
Other honourable mentions go to New Wave, which scored 20 unadjusted points and 16 adjusted points, and Panels, which scored 22 unadjusted points and 16 adjusted points.
Maagar Mochot was one of the most accurate pollsters of the 2009 election, but this time it consistently had the highest prediction for Likud Beiteinu and one of the lowest for Yesh Atid. Overall it was 26 points away from the actual result, or 21 adjusted points, making it the least accurate of the major pollsters in 2013.