Nate Silver takes great pride in being less completely wrong than some of the other pollsters, in an article entitled "Why FiveThirtyEight Gave Trump A Better Chance Than Almost Anyone Else (Except the LA TIMES/USC and IBD/TIPP Tracking, Who, Unlike Us, Actually Got It Right)". NB: I added the bit in parentheses. At no point does Silver mention any polling organization, or individual, that did correctly predict the election results.
Based on what most of us would have thought possible a year or two ago, the election of Donald Trump was one of the most shocking events in American political history. But it shouldn’t have been that much of a surprise based on the polls — at least if you were reading FiveThirtyEight. Given the historical accuracy of polling and where each candidate’s support was distributed, the polls showed a race that was both fairly close and highly uncertain.
This isn’t just a case of hindsight bias. It’s tricky to decide what tone to take in an article like this one — after all, we had Hillary Clinton favored. But one of the reasons to build a model — perhaps the most important reason — is to measure uncertainty and to account for risk. If polling were perfect, you wouldn’t need to do this. And we took weeks of abuse from people who thought we overrated Trump’s chances. For most of the presidential campaign, FiveThirtyEight’s forecast gave Trump much better odds than other polling-based models. Our final forecast, issued early Tuesday evening, had Trump with a 29 percent chance of winning the Electoral College. By comparison, other models tracked by The New York Times put Trump’s odds at: 15 percent, 8 percent, 2 percent and less than 1 percent. And betting markets put Trump’s chances at just 18 percent at midnight on Tuesday, when Dixville Notch, New Hampshire, cast its votes.
So why did our model — using basically the same data as everyone else — show such a different result? We’ve covered this question before, but it’s interesting to do so in light of the actual election results. We think the outcome — and particularly the fact that Trump won the Electoral College while losing the popular vote — validates important features of our approach.
Translation:
- I'm a Gamma and I can't admit that I'm wrong without explaining how being wrong only proves that I was right to do what I did.
- "Almost anyone else" means anyone not named Kellyanne Conway, Scott Adams, Nassim Taleb, Mike Cernovich, Vox Day, the LA Times, or IBD/TIPP Tracking.
- A 29 percent chance of winning is practically a near certainty. I mean, sure, you might have interpreted that to mean that Hillary was probably going to win, but that just shows how you don't understand polling as well as I do. The fact of the matter is that we were closer to getting it right than everyone else who didn't get it right.
- And by "29 percent", I of course mean 28.6 percent.
- And by "such a different result" what I mean is "exactly the same result as everyone else, except those other guys who actually got it right and whom I will carefully refrain from mentioning."
We strongly disagree with the idea that there was a massive polling error. Instead, there was a modest polling error, well in line with historical polling errors, but even a modest error was enough to provide for plenty of paths to victory for Trump. We think people should have been better prepared for it. There was widespread complacency about Clinton’s chances in a way that wasn’t justified by a careful analysis of the data and the uncertainties surrounding it.
Translation:
- We strongly disagree with the idea that I could have been wrong. The Secret King is never wrong, by definition! You just don't understand how the appearance of being wrong only shows that I was mostly right, and that just goes to show how much smarter I am than you. Still undefeated!
- Next time, don't pay any attention to what I say before the election. Just wait until it is over, and then I'll explain what I meant and how that proves I am right. Always.
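To be fair to the underlying arithmetic, the "a modest polling error was enough" claim can be sketched on the back of an envelope. In the snippet below, the 3-point Clinton lead and the 3-point historical polling error are illustrative assumptions of mine, not figures from Silver's article; the point is only that when the polling lead is about the same size as the typical polling error, an upset is not a tail event:

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

margin = 3.0  # assumed Clinton lead in the final polls, in points (illustrative)
sigma = 3.0   # assumed historical root-mean-square polling error, in points (illustrative)

# If the polling error is roughly normal, the chance it is large enough
# (in Trump's direction) to flip the result is Phi(-margin / sigma).
p_flip = normal_cdf(-margin / sigma)
print(f"chance a modest error flips a {margin:.0f}-point lead: {p_flip:.0%}")
```

With these assumed numbers the flip probability comes out to about 16 percent — in the same ballpark as the odds the various models and betting markets gave, which is exactly why "we were only modestly wrong" and "Trump had a real chance" are the same claim dressed up two different ways.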