Nate Silver Challenge: What Went Wrong?


As many of you have pointed out, my election projections were, well…wrong. So what happened? Before we start unraveling all this, let’s compare the actual results to my projections. I was wrong on four states that have been called: Florida, North Carolina, Pennsylvania, and Wisconsin. And, though it has not been called yet, Trump looks likely to win Michigan as well. Florida and North Carolina are not big surprises since they were tossups on my map, but the other three were a bit of a shock since polls showed Clinton with a consistent lead in all of them throughout the entire election cycle. And, of course, this shift of five states had a huge impact on the results in the electoral college and the winner of the election overall. So, to summarize:

  • I was wrong on five states.
  • I was way off on the electoral vote.
  • I missed the most important thing, the winner of the election.
Pretty bad, huh? Yes, I suppose it is, but I wasn’t the only one who was wrong. Pretty much every other data-driven election model had Clinton winning relatively easily. Here’s a rundown of some of those:
  • Moody’s Analytics had Clinton winning 332 electoral votes to Trump’s 206.
  • The New York Times had Clinton with an 85% chance of winning the election.
  • The Huffington Post estimated a 99% chance of a Clinton win.
  • The LA Times had Clinton winning 352 electoral votes to Trump’s 186.
  • Sam Wang, of the Princeton Election Consortium, predicted the likelihood of a Clinton win at over 99%.

And, of course, Nate Silver and FiveThirtyEight got it “wrong” as well (I’m putting “wrong” in quotes here because it’s not really that simple, and we’ll get to that shortly). His state-by-state projections matched mine exactly. So, though I missed the mark by quite a bit, I suppose I am in good company. Sure, there were people who accurately predicted the winner of the election, but most of those were partisan pundits rather than data-driven forecasters. So what caused all this?

Original Premise
Before we discuss the reasons why we were all so wrong, let’s revisit the basic premise of my “Nate Silver Challenge”. This challenge was based on two theories:
  1. It can’t be that difficult to predict the winner of each state.
  2. A simple model, using just a few key data points, would work just as well as a more complex one like the one used by Nate Silver.

Let’s start by addressing the first point. Clearly, I was wrong here as proven by the failure of my model as well as just about every other predictive model. Why was this the case? As one commenter noted on my final election post, this was a classic case of “Garbage in, Garbage Out”. The polls, which acted as the primary source for my and others’ models, were simply off. This was not so much an issue of bad models, but rather bad input data. When the inputs are incorrect, it does not matter how sophisticated your models are.

The accuracy of the second part of my theory is much less black-and-white. As we’ve established, the input data was fundamentally flawed in this election cycle, and that led to inaccuracies in just about every predictive model. But the fact stands that my approach was really no more inaccurate than anyone else’s, particularly Nate Silver’s. As I’ve noted, our projections were exactly the same going into election day. That being the case, I do think there is some plausibility to my theory that a simple model works just as well as a more complex one.

Flaw in My Argument
But there is one fundamental flaw in my argument, and it’s something I’ve been fully aware of the entire time I’ve been conducting this challenge. You see, Nate Silver’s model is probabilistic while mine was not, and that makes a big difference. He and his team at FiveThirtyEight have designed a predictive model that includes numerous variables: it weights polls based on their size and historical accuracy, adjusts for demographics, adjusts for trends, and accounts for a number of other factors. All of this is compiled into an adjusted polling average. From there, they run 10,000 to 20,000 simulations of the election in order to obtain a probability of winning for each of the candidates. The result is not a black-and-white prediction of a winner, but rather a probability of one result over another. These probabilities appear throughout Silver’s model: who will win each state, electoral vote counts (he did not simply tally electoral votes based on his projected state winners), and the overall election. On the weekend prior to the election, he had Trump with a 35% chance of winning. For this, he was roundly criticized, most notably by the Huffington Post’s Ryan Grim (remember that Huff Po had Clinton with a 99% chance of victory). Grim accused Silver of essentially adjusting polls to fit his own beliefs about the state of the race. Of course, he offered little evidence for this theory, and it seems relatively clear to me that Silver did nothing of the sort. Silver adamantly defended his model, noting repeatedly that the model is set in stone: you feed the data in and it outputs results; he does not adjust the inputs or the outputs.
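
To make the simulation idea concrete, here is a minimal sketch of the Monte Carlo approach described above. This is not FiveThirtyEight’s actual model; the state probabilities, the “safe” electoral vote count, and the assumption that states are independent are all placeholders for illustration (the real model, among many other things, accounts for correlated polling errors across states, which is a big part of why it gave Trump a meaningful chance).

```python
import random

# Hypothetical win probabilities for the Democratic candidate in a few swing
# states. These numbers are made up for illustration only.
swing_states = {
    "FL": (29, 0.55),  # (electoral votes, P(Dem win))
    "NC": (15, 0.50),
    "PA": (20, 0.75),
    "MI": (16, 0.75),
    "WI": (10, 0.80),
}

SAFE_DEM_EV = 200  # electoral votes assumed safely Democratic (placeholder)

def simulate_election():
    """Simulate one election by flipping a weighted coin for each swing state."""
    dem_ev = SAFE_DEM_EV
    for ev, p_dem in swing_states.values():
        if random.random() < p_dem:
            dem_ev += ev
    return dem_ev

def dem_win_probability(n_sims=20000):
    """Estimate the probability that the Democrat reaches 270 electoral votes."""
    wins = sum(1 for _ in range(n_sims) if simulate_election() >= 270)
    return wins / n_sims

print(f"Estimated Democratic win probability: {dem_win_probability():.1%}")
```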

In a sense, Nate Silver was the most accurate of anyone. No other probabilistic model had Trump with any reasonable chance of victory. Despite the problematic input data, his model still gave Trump a pretty good chance of victory. So, I think this is at least some vindication for Silver and his model (Grim has since apologized for his criticisms).

One of the problems with using probabilistic models to predict election results is that we have an all-or-nothing electoral system. The precision of probabilistic models doesn’t really matter that much when all people care about is who will win overall. But there are cases where precision can be a matter of life and death. Consider the case of meteorologists who use such models to predict the paths of hurricanes (Nate Silver actually talks about this in The Signal and the Noise). This does not have a simple binary answer like the winner of each state in an election; it’s not just about predicting whether a hurricane will make landfall, but specifically where it will make landfall and what path it will take from there. Accuracy in such a prediction is of critical importance, as even the slightest increase in the precision of such a model could potentially save lives. A model saying that a hurricane has a 35% chance of making landfall in your town (like Silver’s chances for a Trump victory) is much different from one giving it a 1% chance (like the Huffington Post’s chances for a Trump victory). In such a situation, my guess is that you’d much rather have one of Nate Silver’s models than a simplistic model like the one I created for the election (or just about anyone else’s model, for that matter).

The Future of Election Prediction
For the next four years, I’m assuming many people will be discussing why the polls were wrong, and adjustments will be made. But, after this election, will we ever be able to trust polls at all? Maybe not, but perhaps polls are not the best input data for election prediction models in the first place. At a minimum, we may need to look at other types of data. For instance, I’ve seen some people calling for the use of Big Data in election forecasts. So, instead of just asking people who they plan to vote for and how likely they are to vote, what if we looked at people’s social network activity and built algorithms to understand whether and how they will vote? This, of course, has major flaws as well, because there are many people with no social media presence whatsoever, and there are many others, like myself, who avoid posting their own political preferences online. But the point is that, perhaps, we need a broader set of inputs to our models, rather than relying so heavily on old-fashioned polls.

Or, perhaps, we just need an entirely new type of predictive model altogether. A number of months ago, I read about a model created by Allan J. Lichtman, a history professor at American University (some of you have also made note of this in comments on my latest post). His model has accurately predicted the results of presidential elections for the past 30 years. Against all odds, he predicted a Trump victory using a model he documented in his book, Predicting the Next President: The Keys to the White House. His model is based on a set of 13 true/false statements:
  1. After the midterm elections, the incumbent party holds more seats in the U.S. House of Representatives than it did after the previous midterm elections.  
  2. There is no serious contest for the incumbent-party nomination.
  3. The incumbent-party candidate is the sitting president.
  4. There is no significant third party or independent campaign.
  5. The economy is not in recession during the election campaign.
  6. Real per-capita economic growth during the term equals or exceeds mean growth during the previous two terms.
  7. The incumbent administration effects major changes in national policy.
  8. There is no sustained social unrest during the term.
  9. The incumbent administration is untainted by major scandal.
  10. The incumbent administration suffers no major failure in foreign or military affairs.
  11. The incumbent administration achieves a major success in foreign or military affairs.
  12. The incumbent-party candidate is charismatic or a national hero.
  13. The challenging-party candidate is not charismatic or a national hero.

If six or more of these indicators are false, then the challenging party will win the election. Otherwise, the incumbent party will win. What’s interesting about this model is that polls play no role whatsoever. There are, of course, many criticisms of this model (thank you to Meta S. Brown, who pointed out some of these in a comment on my pre-election post). In 2000, for example, his model predicted that Al Gore would win, and Mr. Lichtman claimed victory because Gore won the popular vote; yet he is also claiming victory this year, despite the fact that Clinton won the popular vote. Perhaps the biggest criticism is that these keys are largely subjective. Different people could very likely give different answers to each question, which would allow a person to inject his or her own biases and opinions into the model. In this sense, the model fails to be scientific, as the results cannot necessarily be independently reproduced. Interestingly enough, one of Professor Lichtman’s biggest critics is Nate Silver himself. His critique, which was written just prior to the 2012 election, discusses some of the above issues as well as a number of others; you can find it here. But, criticisms aside, Mr. Lichtman’s model does illustrate the point that there are a number of different ways to project elections, and not all of them use polls as their primary input.
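
For what it’s worth, the decision rule itself is trivial to express in code; the hard (and subjective) part is answering the 13 questions. Here’s a minimal sketch in Python, where the key names are shorthand for the statements above and the true/false answers are placeholders rather than Professor Lichtman’s actual 2016 calls.

```python
# Placeholder answers to the 13 keys; True means the statement holds in favor
# of the incumbent party. These values are illustrative, not Lichtman's calls.
keys = {
    "incumbent_midterm_gains": False,
    "no_primary_contest": True,
    "incumbent_is_sitting_president": False,
    "no_third_party": True,
    "no_recession": True,
    "strong_long_term_growth": False,
    "major_policy_change": True,
    "no_social_unrest": True,
    "no_scandal": True,
    "no_foreign_failure": True,
    "major_foreign_success": False,
    "charismatic_incumbent": False,
    "uncharismatic_challenger": False,
}

# The rule: six or more false keys means the challenging party wins.
false_count = sum(1 for holds in keys.values() if not holds)
winner = "challenging party" if false_count >= 6 else "incumbent party"
print(f"{false_count} keys are false -> predicted winner: {winner}")
```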

What Does this Mean for Predictive Analytics?
Should we just give up on predictive analytics altogether? No, but I do think that this should act as a cautionary tale. Predicting the future is hard. There are often an endless number of variables, numerous potential sources of data, and many different ways to bring it all together. And presidential elections may very well be one of the worst possible use cases for predictive models, since they happen only once every four years, making the models very difficult to test, validate, and tune.

But perhaps the biggest thing we should learn from this year’s failures is the criticality of accurate source data. If your source data is wrong, your model will be wrong. We can adjust for this uncertainty through the use of different methodologies and the addition of other data sets, but the best way to ensure accurate predictive models is to ensure accurate input data.

Let’s Give Nate a Break
So, even though I’ve been hard on Nate Silver for the past few months, I really admire what he is doing and I think we should give him a break. Though his model was not as accurate as it was in 2008 and 2012, it still outperformed every other data-driven model, despite the inaccuracies of the input data. In the same way that Tiger Woods attracted a whole new group of people to the sport of golf, Nate Silver and his team have played a big role in making data sexy. That’s obviously good for data geeks like myself, but it’s also good for all of us. By popularizing probabilistic predictive modeling, he has helped to show us how powerful such modeling can be, which inevitably leads to people employing these methods to understand and solve real-world problems, be it disease, terrorism, weather, poverty, or anything else you can imagine. And, of course, solving some of these big problems is of way more value than predicting an election.

Ken Flerlage, November 14, 2016

I’m taking a break from politics for a while (unless I find something so interesting that I just can’t resist). But, if you have enjoyed my posts, please check back, as I’ll still be writing about other topics including sports, science, and religion.


1 comment:

  1. It’s 2020 and you were wrong again, Nate. And I know why. The traditional polls haven't recalculated the arithmetic regarding turnout. Traditionally, a politician would get x% turnout. In other words, if a respondent says to a surveyor that they are voting for candidate A, the polls calculated the % chance of that voter actually turning out.

    Trump blew through that algorithm in 2016 and now in 2020. It has to do with the way he campaigns and the things he says. When a voter says he/she will vote for Trump, the % chance that they will turn out is much higher than the model traditionally predicts. You have to change your turnout model.

