- 98% to 99%;
- 98%;
- 95%;
- 84%;
- 71%.
What are these numbers? The odds that you will love this article? No – clearly that is 100%. These were the odds of Hillary Clinton winning the presidential election on November 8, 2016 according to the various forecasts of:
- Princeton Election Consortium (98 to 99%);
- The Huffington Post (98%);
- Princeton Election Consortium conservative forecast (95%);
- New York Times (84%);
- FiveThirtyEight (71%).
The old saying, “There are three kinds of lies: lies, damned lies, and statistics,” seems appropriate here. So this leaves me with two questions (there are many others, but time and space are limited): (1) What went wrong with all of these forecasts? and (2) Should business owners even attempt to rely on any type of forecast model (their own or third-party) given their inherent uncertainty?
We will not know the answer to the first question with certainty until more information is available; however, some theories have been put forth and are interesting to consider. For this article, I am going to rely on the theories posited by Nate Silver of www.fivethirtyeight.com. While the FiveThirtyEight forecast still gave Donald Trump only about a 30% chance of winning the presidency, its model was on the higher end of the probabilities assigned to him. In fact, FiveThirtyEight took a lot of criticism for keeping Mr. Trump’s odds so “high” (many critics felt they should be 10% or less).
- Small, systematic poll error – While Ms. Clinton was leading in a large number of national polls, her leads were small enough that even a modest error could have a large impact. As Mr. Silver states, there are diminishing marginal benefits to leading in each additional poll, especially if there is a systematic error running across all of the national polls (although the public may view leading in a large number of polls as a very strong indicator). If Ms. Clinton had received just a two-point bump in each state, the polls would have correctly predicted the results of 49 out of 50 states.
- Midwest Collapse – There was much talk before the election of Ms. Clinton’s firewall in the Midwest involving (among other states) Michigan, Pennsylvania, and Wisconsin. However, Mr. Trump broke through that firewall, paving the road for his victory. Had Ms. Clinton won each of those states, she would have received 278 electoral votes.
This is where a systematic error across a region may have had a major impact. FiveThirtyEight’s model considered that if an individual state’s polling had an error, this error may be correlated across states in the same geographical region. This assumption increased Mr. Trump’s odds because it allowed for scenarios where there were correlated errors across state polls, especially in the Midwest, which was so critical to this election.
- Large number of undecideds – The large number of undecided voters, as compared to past elections, created more uncertainty in the various polls. While it is still too early to know for sure, exit polls seemed to indicate that undecided voters generally voted in Mr. Trump’s favor.
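The first two explanations above can be illustrated with a toy Monte Carlo simulation. Everything here is a labeled assumption for the sketch — the state leads, the size of the polling error, and the model itself are illustrative, not FiveThirtyEight’s actual methodology:

```python
import random

# Toy simulation of how correlated polling errors change upset odds.
# All numbers (leads, error size) are illustrative assumptions.
LEADS = [1.0, 1.5, 2.0]   # hypothetical Clinton leads in three states, in points
SIGMA = 3.0               # standard deviation of the polling error, in points
TRIALS = 50_000

def sweep_probability(correlated):
    """Estimated probability that Trump carries all three states at once."""
    sweeps = 0
    for _ in range(TRIALS):
        shared = random.gauss(0, SIGMA)
        if correlated:
            # Systematic error: every state's polls miss in the same direction.
            errors = [shared] * len(LEADS)
        else:
            # Independent errors: each state's miss is drawn separately.
            errors = [random.gauss(0, SIGMA) for _ in LEADS]
        if all(lead + e < 0 for lead, e in zip(LEADS, errors)):
            sweeps += 1
    return sweeps / TRIALS

print(f"independent errors: {sweep_probability(False):.3f}")  # roughly 0.03
print(f"correlated errors:  {sweep_probability(True):.3f}")   # roughly 0.25
```

Allowing the errors to be correlated makes a regional sweep almost an order of magnitude more likely, which is why a model that permits correlated state-level misses assigns the underdog meaningfully better odds.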
Each of these explanations seems at least somewhat plausible (and there are many other theories as well). My general feeling is that the world is becoming increasingly complex and those complexities are often interrelated in ways that are difficult to anticipate (making forecasts inherently more uncertain), which dovetails into the second question: Given all of this uncertainty, do forecast models and, in particular, financial models still provide value?
I believe the answer is still a resounding yes, but it is important to set expectations regarding forecast models, their uses, and their reliability. The idea of any model is to represent a “simplified” version of expected performance. Obviously, no one can create a model that perfectly mimics all of the underlying variables that affect a “real-life” result. It is important to remember that the results of any model are, at best, an estimate, and any model that puts the odds of something at 99% should probably be taken with a grain of salt.
The goal of a model is to provide a tool to help manage your business or understand the impacts of key events, not to predict the future with complete certainty. Always recognize the uncertainty in any forecast by building a model that is flexible enough to adapt to changes in underlying assumptions. Further, given significant uncertainty, sensitivity analyses can be extremely valuable: knowing the impact of changes to key assumptions can help you be ready for the unexpected. By doing these things, you create a model that provides real value by giving you better visibility into your business, even if the model is not always “right”.
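A sensitivity analysis of the kind described above can be sketched in a few lines. The model here is a deliberately toy one — the growth-rate-and-margin formula and every input value are hypothetical assumptions chosen only to show the mechanics:

```python
# Minimal sensitivity-analysis sketch for a toy financial model.
# The model and all inputs (revenue, growth rate, margin) are hypothetical.

def projected_profit(revenue, growth_rate, margin):
    """Next year's profit under a simple growth-and-margin model."""
    return revenue * (1 + growth_rate) * margin

base = {"revenue": 1_000_000, "growth_rate": 0.05, "margin": 0.12}
baseline = projected_profit(**base)  # 1,000,000 * 1.05 * 0.12 = 126,000

# Vary each key assumption +/- 20% while holding the others fixed,
# and report how far profit moves from the baseline.
for key in ("growth_rate", "margin"):
    for bump in (-0.2, 0.2):
        scenario = dict(base, **{key: base[key] * (1 + bump)})
        delta = projected_profit(**scenario) - baseline
        print(f"{key} {bump:+.0%}: profit changes by {delta:+,.0f}")
```

Even this toy run makes the point: a 20% miss on the margin assumption moves profit far more than a 20% miss on the growth rate, telling you which assumption deserves the most scrutiny.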
Additionally, the “failure” of a model is an opportunity to learn and revise your model. The best forecasters are constantly learning from past mistakes.
Unfortunately, the topic of inaccurate forecast models is not new. For more information on the development and use of models, check out this Insight from 2010: https://www.schneiderdowns.com/our-thoughts-on/business-advisors/Financial_Models.