Dan Gardner is the New York Times best-selling author of Risk, Future Babble, Superforecasting (co-authored with Philip E. Tetlock), and How Big Things Get Done (co-authored with Bent Flyvbjerg). His books have been published in 26 countries and 20 languages. Prior to becoming an author, Gardner was an award-winning investigative journalist.

Forecasting the Trumpocalypse

After an interview I did with ING, I was asked after the election to comment on the widespread perception that forecasting had failed horribly, that experts know nothing, and so on. What follows is the email I sent in response. A little addendum: How people relate to forecasting tends to swing from gullibility (credulously accepting forecasters and forecasts that clearly should not be taken seriously, such as forecasts about the state of the global economy 50 years hence) to scoffing dismissal (thinking it's impossible to forecast anything of significance, à la Nassim Taleb). Both these extremes are ridiculous. Neither can withstand even modest critical thought. But they persist because they are easy and (to a degree) intuitive, because they are in some people's interests, and because most of us don't spend any time seriously thinking about forecasting, as opposed to forecasts.
Overreacting to such a momentous shock is understandable but terribly misguided. And throwing our hands up and shouting, “Polls are meaningless! Experts are all wrong! Nobody can forecast anything!” is exactly that kind of overreaction.
First, many of the polls were not hugely off the mark. Remember, Clinton won the popular vote. And even tiny alterations in voting could have tipped the result the other way. If, for example, there had been modestly higher black voter turnout in just three states — Wisconsin, Michigan, and Pennsylvania — Clinton would be the president-elect and we would be having a very different discussion.
What we should draw from the poll failure, such as it was, is a lesson that should have been obvious all along: Reality is immensely complex, our brains are fallible, our tools for measuring it are crude relative to the difficulty of the tasks we set for them, and therefore error is always possible. There is no certainty in this world. Never has been, never will be. (Yes, that’s a forecast, and yes, I am stating it with certainty. For the sake of intellectual rigour, I will add “barring currently inconceivable scientific breakthroughs in some unimaginable future.”)
But the point I would really underscore flows out of a story we tell in Superforecasting: The former US Treasury Secretary Robert Rubin said in an interview with us that when he briefed even sophisticated policy-makers and told them something had an 80% chance of happening, he “almost had to pound the table” to make them understand that that meant there was a 20% chance it would not happen. That’s because we naturally incline toward binary thinking: Something will happen or it won’t happen. Thinking in probabilistic terms is a struggle for most people.
That matters for a couple of reasons. One, when people say the experts were “wrong” because they gave Trump only a 20% or 10% chance of winning, we cannot be sure of that. A forecast of 20% is not proven wrong by the event happening; it would be shown to be badly off only if, across many such forecasts, events given a 20% chance happened far more often than 20% of the time. The smaller the stated probability, the more reason there is to suspect the forecast was off — but, again, there is no certainty. A probabilistic thinker gets this easily, but the people who confidently proclaim the forecasters wrong are committing what we call “the wrong-side-of-maybe fallacy.” So let us not only be humble and cautious about our ability to forecast. Let us also be humble and cautious in judging forecasts.
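(As a small illustration of that point, here is a minimal sketch, in Python with invented numbers, of how a probabilistic forecaster is properly judged: not by whether any single call landed on the right side of maybe, but by whether, across many forecasts, events given an 80% chance happened roughly 80% of the time. The forecasts and outcomes below are made up purely for illustration.)

```python
# Minimal sketch (invented numbers): judging probabilistic forecasts by
# calibration and Brier score rather than by any single outcome.
from collections import defaultdict

# Hypothetical track record: predicted probabilities and what actually happened.
forecasts = [0.8, 0.2, 0.9, 0.7, 0.1, 0.8, 0.3, 0.6, 0.2, 0.9]
outcomes  = [1,   0,   1,   1,   0,   0,   0,   1,   0,   1]   # 1 = event occurred

# Brier score: mean squared error between probability and outcome (lower is better).
brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.3f}")

# Calibration: among forecasts of roughly 80%, did events occur roughly 80% of the time?
# A single "20%" event occurring is not, by itself, evidence the forecaster was wrong.
buckets = defaultdict(list)
for p, o in zip(forecasts, outcomes):
    buckets[round(p, 1)].append(o)

for p in sorted(buckets):
    hits = buckets[p]
    print(f"forecast {p:.0%}: occurred {sum(hits)} of {len(hits)} times")
```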
The second reason why the human struggle to grasp probability matters is that it may have been a factor in the outcome.
Consider: In the last couple of days before the election, the decline in Clinton’s poll numbers had been arrested, and most analysts gave her between a 70% and 95% chance of winning. And yet the tone suggested more than that. Many, many observers noted the probabilities and then said words to the effect of “she has it in the bag.” In other words, they explicitly or implicitly acknowledged that the election was not a sure thing but proceeded to talk as if it were. You could see that in the shock after Trump won: So many people had treated a high probability as a certainty. Or, to put that another way, they treated a low probability as an impossibility. That is not a small mistake. As I reminded folks on Twitter shortly before the election, if you play Russian roulette there is only a one-in-six (roughly 16.7%) chance of dying, but that does not mean you should relax and pull the trigger.
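(To put rough numbers on that, here is a back-of-the-envelope sketch in Python. The prize and the dollar figure standing in for the catastrophic loss are invented; the only point is that a “small” probability attached to an enormous cost still dominates the calculation.)

```python
# Back-of-the-envelope arithmetic only; the stake and the "cost" of the
# catastrophic outcome are invented for illustration.
p_bullet = 1 / 6           # roughly a 16.7% chance per pull of the trigger
prize_if_safe = 1_000      # hypothetical prize offered for surviving one pull
cost_if_shot = 10_000_000  # crude stand-in for a catastrophic, irreversible loss

expected_value = (1 - p_bullet) * prize_if_safe - p_bullet * cost_if_shot
print(f"Chance of the bullet: {p_bullet:.1%}")                    # ~16.7%
print(f"Expected value of pulling the trigger: {expected_value:,.2f}")  # hugely negative
```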
We now know that Clinton lost because Democratic turnout was way down from the previous election. That was particularly true among black voters in some of the key states that swung the election. So here’s my hypothesis: Democratic supporters may have heard the probabilistic forecasts but there was no one “pounding the table” to make them understand that an 80% chance of a Clinton win means a 20% chance Trump would win. On the contrary. The tone of pundits assured them that 80% meant it was certain Clinton would win. So some didn’t bother to vote.
Was that enough to make a difference to the outcome? I don’t know. But given the margins involved, it seems to me quite plausible that the answer is yes.
A low probability is not a zero probability, and when the consequences of that low-probability event coming to pass are great, it should not be ignored: however many chambers a revolver has, it must be handled with care if even one contains a bullet.