Sam Seitz

Predicting the future is incredibly difficult. Despite the numerous challenges, however, we are actually quite good at predicting certain things. For example, our models of demographic trends and disease spread have generally been extremely accurate. Moreover, even in areas where we cannot predict the future with much specificity, we can often still generate a range of probabilistic outcomes. In other words, we know enough to recognize the general trends that often explain certain situations. Unfortunately, despite huge amounts of research, theory development, and data aggregation, we are still not very good at predicting the future. This is particularly disappointing because one of the main justifications for studying political science, economics, and history is to learn important lessons from the past that can be repurposed to anticipate future events and create a better world. Clearly, if we want to get better at predicting, we need to rethink our current approach.

This inability to foresee future events has been demonstrated on many occasions throughout history. For example, almost nobody predicted the collapse of the Soviet Union. Even as the U.S.S.R. stagnated economically and pursued a more passive foreign policy, most experts assumed that the communist superpower would linger for many decades to come. More recently, the 2008 financial crisis revealed that many financial analysts are not all that great at predicting market trends. Very few foresaw the collapse of the housing market, believing that housing prices would always increase. In short, we are very good at predicting a future that does not differ too much from the present. But when major shocks to the system occur, nearly everyone misses them. This is as true for experts as it is for laypeople. Philip Tetlock has conducted path-breaking research on predictions, and he has found that experts are usually no better than non-experts at predicting the future. While specialists are much better at understanding the salient factors shaping potential future outcomes, they also tend to be far more rigid in their worldviews. In other words, while they know a lot, they are less willing to listen to other knowledgeable people’s opinions because they pigeonhole themselves into one model or interpretation.

Another major obstacle to effective forecasting is groupthink. Certain groups of people (the academy, the government, certain voting blocs, etc.) begin to adopt a shared set of assumptions about the world, and social pressures emerge within the group that disincline people from voicing new or alternative points of view. This kind of groupthink can be seen in both the economic and security spheres today. Economically, most experts have been incredibly Pollyannaish in their estimates of future economic growth. They have consistently overestimated the stimulative effects of policies like quantitative easing, and many financial institutions have had to repeatedly downgrade their predictions about future growth. As Daniel Drezner writes in his recent report for Brookings: “The Federal Reserve has persistently overestimated economic growth since the collapse of Lehman Brothers. Since the start of the Great Recession, the International Monetary Fund’s economic forecasters have had to continually revise downward their short-term projections for global economic growth. The failure rate has been so bad that the IMF devoted a chapter to the problem in its April 2015 World Economic Outlook. Its authors acknowledged that ‘repeated downward revisions to medium-term growth forecasts highlight the uncertainties surrounding prospects for the growth rate of potential output.’” If the economic community has been too optimistic, though, much of the foreign policy community has been too pessimistic. For example, many predicted that America’s relative power would decline far more than it actually has. The great rise of China and the BRICS at the expense of U.S. power has simply not occurred. Moreover, predictions of NATO’s demise have been flat-out wrong, and analysts arguing in 2014 that a war with China was all but inevitable now seem alarmist.

Groupthink and extrapolation of the present into the future are not the only major problems with forecasting, however. There is also the problem of conflicting models leading to inconclusive predictions. For example, in a recent Foreign Affairs piece, Timothy Frye argues that standard IR theory is unable to predict Russian behavior because Russia does not conform to standard theories about the evolution of political regimes. On the one hand, IR theory and comparative politics predict that populist regimes like Vladimir Putin’s tend to come to violent, abrupt ends and often do not successfully transition to liberal democracies. On the other hand, Russia’s GDP and level of education suggest that it should be far more liberal and democratic than it is today, and that it will therefore likely transition towards a more open, rules-based regime. Obviously, both views cannot simultaneously be correct. The problem is that Russia is an aberration from the norm, and generalizable theories and standard models are uniquely ill-equipped to offer predictions and insights about events that don’t conform to their core assumptions. Just look at how badly political scientists misunderstood Trump. This deficiency is frustrating because the events that least conform to the norm are often the ones that are most important to get right. Nobody cares if the standard economic models of oil pricing are off by a dollar or two, but people care a lot about whether or not the Putin regime in Russia is going to collapse in the near future.

So, what is the solution? To be honest, I’m not entirely sure. There is fascinating new research within IR that purports to be developing a much more rigorous and accurate way of forecasting future events using complex models and “Big Data.” If you are interested, I recommend reading this paper. Besides developing complex Bayesian models and game theoretic approaches, there are a number of simpler steps that can be taken as well. First, we need to stop extrapolating from the recent past to predict the future. The world is ever changing; instead of preparing for the last war, we need to look at the broad meta-trends that will shape the future. Second, we need to stop pretending that a single event is statistically significant. For example, just because there have been a number of recent high-profile terrorist attacks in the U.S. and Europe doesn’t mean that terrorism has become meaningfully more threatening in statistical terms (the toy sketch at the end of this post illustrates how little one event should move a well-calibrated estimate). Third, we need to be willing to listen to and incorporate new information. It’s not good enough to subscribe to one extremist political view, economic view, or social science school of thought. If you aren’t constantly evaluating new data and information that challenge your current assumptions, you are doing it wrong. Finally, we need to stop overusing historical analogies because they often lead us astray. Indeed, misapplying historical wisdom is almost certainly worse than being ignorant of history because it allows us to pretend that some current crisis is exactly like a past event. In our certainty, we forget to examine the unique peculiarities that define and differentiate current crises, instead erroneously assuming that they will play out exactly as past events did. Ultimately, the only thing I’m certain of is that we will never be certain about the future. However, by embracing innovative models and minimizing the cognitive biases that distort our view of the future, I do believe that we can develop more accurate guesses about events to come.
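To make the second and third points above a bit more concrete, here is a minimal toy sketch in Python of a single Bayesian update. The specific numbers (the prior and the two likelihoods) are entirely made up for illustration and are not drawn from any real dataset; the point is simply that one dramatic event should shift a well-calibrated probability estimate only as far as the evidence actually warrants.

```python
# Toy Bayesian update: how much should one dramatic event shift a forecast?
# Hypothesis H: "the underlying threat level has genuinely increased."
# All numbers below are hypothetical and chosen purely for illustration.

def bayes_update(prior, p_event_given_h, p_event_given_not_h):
    """Return P(H | event) via Bayes' rule."""
    numerator = p_event_given_h * prior
    evidence = numerator + p_event_given_not_h * (1 - prior)
    return numerator / evidence

prior = 0.05                # initial belief that the threat has genuinely risen
p_event_given_h = 0.30      # a high-profile attack is somewhat more likely if H is true
p_event_given_not_h = 0.20  # ...but such events also occur when nothing has changed

posterior = bayes_update(prior, p_event_given_h, p_event_given_not_h)
print(f"Prior: {prior:.2f} -> Posterior after one event: {posterior:.3f}")
# Prior: 0.05 -> Posterior after one event: 0.073
```

With these illustrative numbers, a 5 percent prior climbs only to about 7 percent after a single event, which is a long way from the “everything has changed” reaction that tends to dominate news cycles. The same mechanism cuts the other way, too: genuinely diagnostic new information should move our estimates, which is exactly why refusing to update on data that challenge our assumptions is so costly.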