The Problem with Israeli Polls
In the last few days, people from the Likud campaign have reacted with horror – and people from the Blue & White campaign have reacted with glee – to a pair of polls that have shown Blue & White ahead of the Likud by one seat.
Let me repeat this for emphasis: One. Seat.
One seat is twice the margin by which Blue & White led the Likud in a poll from just two weeks ago. One seat is the amount by which the Likud lost to Kadima back in 2009 – and just ask Prime Minister Tzipi Livni how much that victory helped her. One seat is equivalent to a mere 0.8333% of the vote.
Why is everybody getting excited about one seat, when the polls routinely point out that they have a margin of error of 4% or more?
The excitement over this one seat reflects the two biggest problems with Israeli polling: an utter lack of transparency, and massive amounts of herding, with polls drifting both toward one another and toward the conventional wisdom.
Transparency
Israel has no equivalent to the NCPP or AAPOR, two American organizations that provide voluntary ethics and transparency oversight for the pollsters that join them. And there seems to be no demand for such an organization from the media or the public – despite some very public polling failures throughout Israel’s history (including some for which the pollsters were falsely blamed).
As a result, Israeli pollsters are free to hide their raw data from the public. They never publish the percent of the vote received by each party; they never mention how many voters remain undecided; sometimes they don’t even bother disclosing the sample size. And forget about ever getting to see crosstabs.
In an attempt to address this lack of transparency, Central Elections Committee head Hanan Melcer issued a new regulation ahead of the last election requiring pollsters to provide the CEC with the raw data behind their polls, including the percent of the vote each party received rather than just the estimated equivalent seat count. It’s not everything a poll-watcher needs to see, but it would be a good start, if it were enforced.
Which it isn’t. The list of polls maintained on the CEC website is woefully incomplete and ridiculously slow to update, and many of the polls that do appear are missing required details.
In theory, this lack of oversight shouldn’t matter. Pollsters are supposed to be regulated by the fear of being wrong. If I’m a media organization, and I purchase a poll from you, and that poll turns out to be way off – then in the next election I won’t buy another poll from you, will I? I’ll buy from somebody else, and you’ll go out of business. That fear should ensure that you don’t just make the numbers up.
But not if everybody else is just as wrong.
Herding
In December of 2018, Naftali Bennett and Ayelet Shaked made a dramatic and unexpected announcement: that they were leaving the Jewish Home to create a new party called the New Right. And in the polls that came out over the next week, something very curious happened: different pollsters pegged the new party at anywhere from 14 seats to 6.
This led to a lot of ridicule: Ha ha, those stupid pollsters don’t know anything! They’re all giving wildly different answers to the same question!
And then, magically, over the course of the next two weeks, the highest estimates started to come down and the lowest estimates started to come up. Fourteens were replaced with twelves were replaced with tens; sixes were replaced with eights and nines. There was the odd exception here and there, but from then until the election all of the polls were within one seat of each other.
And we all know how accurate those turned out to be, when the New Right failed to cross the threshold altogether.
See, what people didn’t realize in their criticism of the early polls was that this is what polls are supposed to look like. They’re not supposed to move in lockstep with one another. They’re not all supposed to give the exact same answer. A margin of error of 4% is equivalent to almost five seats in either direction (depending on the party’s size; margins of error are actually rather complicated). If a party is in reality going to get 60 seats, the same poll conducted twice in a row should return results anywhere between 55 and 65 – a huge range! And one out of every twenty polls should come out with results outside even that range, despite the party’s support not having changed a bit!
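To make that concrete, here is a minimal sketch of the margin-of-error arithmetic, assuming a simple random sample of 500 respondents (a hypothetical figure in the typical range for Israeli polls) and the standard 95% confidence interval for a proportion. Real polls weight their samples, so treat this as illustrative only:

```python
# A minimal sketch of how the margin of error depends on party size.
# Assumes a simple random sample and a 95% confidence level; the
# sample size of 500 is a hypothetical, typical-range figure.
from math import sqrt

n = 500   # hypothetical sample size
z = 1.96  # 95% confidence

for seats in (5, 15, 30, 60):
    p = seats / 120                   # the party's true share of the vote
    moe = z * sqrt(p * (1 - p) / n)   # classic margin of error for a proportion
    print(f"{seats:>2} seats: ±{moe:.1%} of the vote, ±{moe * 120:.1f} seats")
```

Note that only the largest parties get the full “almost five seats” of slack; a small party near the threshold has a much tighter margin, which is part of why a blanket “4% margin of error” disclaimer is so misleading.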
But pollsters don’t let their polls swing as wildly as they should, because they’re afraid of being wrong. If I’m a pollster, and my calculations output the number “14” for the New Right that first week, I’ll publish it because I have nothing else to go on. But in that second week… well, all of the other polls last week gave answers from 6 to 10, so maybe… you know, we happened to get five people answering the phone in the same neighborhood, so let’s just lower that 14 to a 12, it doesn’t really make that much of a difference, right? The Israeli political culture doesn’t expect me to publish my raw data, so nobody will notice that I put my thumb on the scale. That way I reduce the damage to my reputation if the guys who say 6 turn out to be right. And if it turns out the 14 was actually correct, then at least everybody was wrong, not just me, so I don’t lose market share. And hey, I was only two seats off and everybody else was even further away, so I’m still the best pollster around!
The trouble is, the pollsters who got a “6” are doing the same thing in the opposite direction. And so the polls slowly settle on a consensus number that has little basis in reality.
It’s kind of like a Ouija board, where a ghostly force seems to be moving the planchette by itself – but in truth, everybody playing the game is pulling very slightly in a random direction, and the force you’re following is just whichever direction happened to end up with the most pull. The pollsters are doing the same, pulling in random directions that first week… and then in the second week giving in to what they imagine everybody else is saying, because each of them trusts everybody else’s numbers more than they trust their own.
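Here’s a toy simulation of that Ouija-board dynamic. The trust weight and noise level are invented for illustration – no real pollster publishes a blending formula – but the mechanism is the one described above:

```python
# A toy model of herding: each week, every pollster publishes a blend
# of its own raw (noisy but honest) estimate and last week's published
# consensus. All parameters here are invented for illustration.
import random

random.seed(42)
TRUE_SEATS = 10        # the party's actual support, in seats
POLLSTERS = 8
RAW_NOISE = 3.0        # honest sampling noise, in seats
TRUST_IN_OTHERS = 0.7  # weight each pollster gives the consensus

# week 1: nothing to herd toward, so everyone publishes raw numbers
published = [random.gauss(TRUE_SEATS, RAW_NOISE) for _ in range(POLLSTERS)]
print(f"week 1 spread: {max(published) - min(published):.1f} seats")

for week in range(2, 6):
    consensus = sum(published) / len(published)
    raw = [random.gauss(TRUE_SEATS, RAW_NOISE) for _ in range(POLLSTERS)]
    # each pollster quietly pulls its raw number toward the consensus
    published = [TRUST_IN_OTHERS * consensus + (1 - TRUST_IN_OTHERS) * r
                 for r in raw]
    print(f"week {week} spread: {max(published) - min(published):.1f} seats")
```

The spread collapses after the first week – not because anyone’s data got better, but because everyone is averaging everyone else in.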
Conventional wisdom
A related phenomenon is what Nate Silver calls the First Law of Polling Errors: “Almost all polling errors occur in the opposite direction of what the conventional wisdom expects.” It works like this:
You’re a pollster, and over the course of a month you come out with polls showing a party at 6, 8, 10, 12, and 14 seats.
But you know that “everybody knows” this party is going to do very well. They keep saying that on the news. The reporters and pundits you talk to in private are all saying that.
So you’re inclined not to believe the 6 and the 8. Maybe you just don’t publish those polls. Or maybe you come up with some justifiable excuse to change them to a 10 and 11. Or maybe, if you’re an Israeli pollster with no oversight and no accountability, you just change them to a 10 and 11 without an excuse.
The result is that your polls have a much narrower spread than they should (from 10 to 14 instead of from 6 to 14) and their average is 11.4 instead of 10. But if polls are accurate – which they would be, if you didn’t mess around with them like this – the party actually will end up getting 10 seats. Yet the polling average predicted 11.4. Instead of overperforming, the party ends up underperforming. By letting the conventional wisdom influence your numbers, you’ve unwittingly brought about its exact opposite.
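The arithmetic of that example, spelled out:

```python
# The worked example above: censoring the low polls narrows the spread
# and biases the published average upward.
honest   = [6, 8, 10, 12, 14]     # what the data actually said
adjusted = [10, 11, 10, 12, 14]   # the 6 and 8 quietly become 10 and 11

print(sum(honest) / len(honest))       # 10.0 -- the honest average
print(sum(adjusted) / len(adjusted))   # 11.4 -- the published average
print((min(adjusted), max(adjusted)))  # spread shrinks from 6-14 to 10-14
```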
Nate’s First Law has been uncannily accurate in election after election throughout the world. The problem is, in Israel it’s not easy to identify the conventional wisdom, for several reasons:
- Pollsters are not allowed to publish polls in the last three days before the election, which is precisely when a lot of people abandon the satellite parties of the Left and Right to vote for the flagship parties (in this case Likud and Blue & White). As a result, the flagship parties always overperform their polls for structural reasons, and the only question is by how much.
- Pollsters in Israel always give out all 120 seats in their poll results; they never bother telling us what percent of the vote is still undecided, which means all of the results are potentially skewable in different directions.
- The Israeli public has an irrational fear of seeing their votes “wasted”, so in the moment of truth they tend to abandon parties that are anywhere close to the 3.25% threshold.
It’s very difficult to disentangle these effects from one another, when pollsters so purposefully hide their decisions from the public.
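To see why the threshold effect in particular is so violent, here is a simplified sketch of Israel’s seat allocation. The real method is Bader-Ofer (D’Hondt plus surplus-vote agreements between pairs of parties); this toy version ignores the surplus agreements, and the party names and vote shares are invented:

```python
# A simplified sketch of why votes near the 3.25% threshold are so
# fragile. Israel allocates seats with the Bader-Ofer method (D'Hondt
# plus surplus-vote agreements); this toy version skips the surplus
# agreements. Party names and vote shares are invented.
def allocate(shares, seats=120, threshold=0.0325):
    eligible = {p: s for p, s in shares.items() if s >= threshold}
    won = {p: 0 for p in eligible}
    for _ in range(seats):
        # each seat goes to the party with the highest D'Hondt quotient
        best = max(eligible, key=lambda p: eligible[p] / (won[p] + 1))
        won[best] += 1
    return won

shares = {"Big A": 0.30, "Big B": 0.29, "Mid": 0.20,
          "Small": 0.18, "Borderline": 0.03}   # 3% -- just under the bar
print(allocate(shares))
# "Borderline" gets 0 seats; its 3% of the vote is simply discarded,
# and every other party's seat count inflates to fill the gap.
```

A party at 3.0% gets nothing, while a party at 3.3% gets roughly four seats – so a polling error of half a percent near the threshold swings the map far more than the same error anywhere else.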
For example, I asked Times of Israel reporter Raoul Wootliff for examples of the current conventional wisdom, and (among others) he cited the expected overperformance of the two flagship parties and corresponding underperformance of the satellite parties. More specifically, he pointed out that in April’s election the Likud and Blue & White received 35 seats each even though they polled at around 29-30 – and that this time they’re polling at 31-32.
There are a number of ways to interpret these numbers:
- The pollsters are telling the truth when they say the data shows a 32, and also told the truth last election when they said the data showed a 30. The large parties will overperform their polls by 5, just like they did last time, getting 37 each this time around.
- The pollsters are telling the truth when they say the data shows a 32 – but they lied in the last election, when their data showed a 35 and they disbelieved it. After all, no party had received 35 seats in decades, so they adjusted it down to 30 before publication. This time they aren’t afraid to report results that high, so 32 is what the parties will actually receive.
- The pollsters are lying when they say the data shows a 32, because they told the truth last time and got burned. Last time their data showed a 30 and the parties overperformed by 5; this time their data is showing a 27, and they’re adding the same 5 to it before publication because they think the same thing will happen. The problem is that it’s anybody’s guess whether that estimated addition turns out to be high or low.
- The pollsters are lying when they say the data shows a 32, because they told the truth last time and got burned. The data actually shows the same 30 that it did last time. The pollsters are afraid of being as off as last time, so they’re adjusting the number up by a bit. They just aren’t brave enough to adjust it by the full 5 points.
- The pollsters are lying when they say the data shows a 32, and they lied last time as well. The data is actually saying the same 35 that it did then, when the pollsters disbelieved it and adjusted it down to 30; they’re still adjusting it downward, only not by as much because now they’re concerned the 35 might be correct.
- The pollsters are lying when they say the data shows a 32, and they lied last time as well. They’re making the exact same mistake again. Last time it showed an accurate 35 and they reduced it by 5 before publication; this time it shows an accurate 37 and they’re again reducing it by 5 before publication.
And that’s not even half of the scenarios I could come up with. Nate Silver is lucky; he only has to worry about whether American pollsters are lying or not. We have to worry about whether Israeli pollsters are lying and how those lies interact with the ways the very structure of our electoral system messes with polling results. Not to mention how the last batch of polls, published four days before the election because of the legal moratorium on polling, directly changes how people ultimately decide to vote!
The effect on the media
So we have pollsters who are afraid to stand out from the crowd; we have a culture and a polling-moratorium law that combine to produce wild, unpredictable, and unrecordable swings in the final days before the election; and we have pollster reporting practices that hide vital statistics such as the number of undecided voters and the actual percent of the vote received by each party.
The result is that, at the same time as pollsters regularly fail to predict the results of the election accurately, the Israeli media has a massively overoptimistic belief in what polling can do. These are the same overoptimistic assumptions made the world over, but turned up to eleven.
American polls vary from day to day. Candidates tick upward and downward by one or two points pretty much constantly, and reporters don’t make a big deal of it. Every so often a poll comes out that changes somebody’s share of the vote by 5 points or more, and everyone gets all excited… until it turns out to be an outlier.
Americans really shouldn’t get overexcited about outlier polls. But American poll-watchers are nothing compared to Israeli ones! I regularly see TV reports and news articles explain how successful Party X’s campaign must have been this week because, compared to the last poll, it moved up by an entire one seat. One seat is equivalent to 0.8333% of the vote! The poll’s margin of error is 4%, and you’re getting excited about a change of 0.8333%?
But if that’s all the pollsters are offering, you have to take your excitement wherever you can get it. We have dozens and dozens of polls that all claim to have that 4% margin of error… but there is so much herding going on, and so little transparency, that they’re always magically reporting results within 0.8333% of one another. If that’s the biggest swing you can expect to see, that’s the one that will make headlines. Even though it’s meaningless.
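It’s easy to quantify just how magical that clustering is. A quick simulation, assuming 20 honest, independent polls of roughly 500 respondents each for a party truly sitting at 32 seats (all assumed numbers; the point is the expected spread, not the specific party):

```python
# If polls really were independent, with ~500 respondents each, how
# tightly should they cluster around a party truly at 32 seats?
import random
from math import sqrt

random.seed(7)
n, true_p = 500, 32 / 120
sd_seats = sqrt(true_p * (1 - true_p) / n) * 120   # ~2.4 seats

polls = [random.gauss(32, sd_seats) for _ in range(20)]
print(f"expected poll-to-poll SD: {sd_seats:.1f} seats")
print(f"spread across 20 honest polls: {max(polls) - min(polls):.1f} seats")
```

Twenty honest polls of the same race should routinely disagree with each other by six seats or more. When dozens of polls all land within a single seat of one another, that isn’t precision; that’s herding.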
One analyst I spoke to said that the polls “seem pretty accurate since they are all more or less in line with the actual results in April and seem quite static since”.
On the contrary, the utter lack of any outlier polls is the strongest evidence that there is something very rotten at the center of the Israeli polling industry.