The futility of trying to predict the future


We live in a world obsessed with prediction. In work, play and everything in between, we are constantly trying to surmise what will happen and when.

It’s become a game – a game with no losers. Those who make the “right” calls amplify their supposed brilliance and raise the value of their personal brands. And those who are wrong keep quiet, hoping they will be right next time. 

But who are we to judge? None of us are immune to this ritual. We have to observe it; it’s part of what makes us human and elevates us above other species. Our very survival depends on making successful predictions. Stepping out in front of a bus will kill me. Drinking water will keep me alive. These are simple, short-term predictions that keep us safe and well. We are good at making them. But we are remarkably bad at forecasting over the long term. For evidence of this, look no further than the weather.

Butterflies and thunderstorms.

According to the US National Oceanic and Atmospheric Administration, a 7-day forecast can accurately predict the weather about 80% of the time, and a 5-day forecast gets things right 90% of the time. However, a 10-day (or longer) forecast is only right about 50% of the time. This is intuitively correct – anyone who’s ever organised a wedding or booked a beach holiday knows the feeling of the long-term weather forecast slowly converging with the prevailing conditions (typically rain if, like us, you spend a lot of time in the UK). You may as well toss a coin.

With incredible volumes of data and computing power at our disposal, why can’t we forecast the weather beyond a handful of days? It’s simply that the world’s atmosphere is too chaotic to model successfully. We cannot escape the irrefutable law of nature that means minuscule changes have huge impacts on the development of a dynamic system. This “Butterfly Effect” was beautifully coined by mathematician and meteorologist Edward Lorenz in 1972, and in practice it means that a single model run on different occasions with very subtle differences in base conditions will spit out wildly different forecasts.
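Lorenz’s point is easy to make concrete. The sketch below (a minimal toy illustration, not a real forecasting model) integrates Lorenz’s own equations twice, with starting conditions that differ by one part in a hundred million; within a modest stretch of model time the two runs end up in completely different places.

```python
import math

def lorenz_step(state, dt=0.005, s=10.0, r=28.0, b=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations with the classic parameters."""
    x, y, z = state
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

run_a = (1.0, 1.0, 1.0)
run_b = (1.0, 1.0, 1.0 + 1e-8)   # "base conditions" perturbed by 0.00000001

max_sep = 0.0
for _ in range(10_000):          # 50 units of model time
    run_a, run_b = lorenz_step(run_a), lorenz_step(run_b)
    max_sep = max(max_sep, math.dist(run_a, run_b))

print(max_sep)  # far larger than the initial 1e-8 perturbation
```

The perturbation grows exponentially until it is as large as the system itself, which is exactly why refining the model or the measurements only buys a few extra days of useful forecast.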

Here we get to the root of the problem with forecasting in general. We have a measurement problem. Small imperfections can trigger massive changes in outputs. These are amplified the further out we try to predict. Indeed, researchers who study the theoretical limit of accuracy in weather forecasting believe that we’ll never be able to predict thunderstorms more than a couple of hours in advance. It’s impossible. 

The Butterfly Effect isn’t only relevant to meteorologists and wedding planners. It’s also an instructive way to think about markets. Just as perfect weather forecasting relies on perfect knowledge of the atmosphere and perfect models, so too does the forecasting of asset prices. But the world isn’t perfect, and markets certainly aren’t. So we are stuck with very short-term predictions that are relatively accurate, and long-term forecasts – however well researched – that are nothing more than speculation. As a wise man once noted, “An economist is an expert who will know tomorrow why the things he predicted yesterday didn't happen today.”

Pills, horseback riding and claret.

The predicament facing futurologists runs deeper than the quality of data and models at their disposal. To understand the slippery nature of prediction, we must look backwards to 1734, when a 23-year-old was grappling with the toughest period of his life. 

Renegade Scottish philosopher David Hume was desperately trying to distil his subversive thoughts on the nature of thinking and living, but he was shrouded in anxiety and depression. His physician prescribed “antihysteric pills”, horseback riding, and claret. This lethal-sounding concoction must have done the trick, because Hume came up with one of the greatest books in the history of philosophy – A Treatise of Human Nature.

Hume’s thinking on the problem of induction has ramifications for investors’ adherence to the cult of prediction. He argues that we tend to rely on past outcomes to predict our future experiences, even though there is no evidence that past and future occurrences are correlated. This inductive process might work most of the time, but Hume argues that it holds no rational justification and should therefore not be relied upon. He illustrates this point with the example of a six-sided die. If the die is rolled three times and the first two rolls result in a five, induction should convince us that the third roll will result in another five. But there is only a one in six chance of that happening. The previous two rolls have no legitimate effect on the outcome of the third roll. Hume goes on to argue that “assumption” and “custom” are the reasons that humans rely on the past when looking towards their future, and that these are incompatible with logic and rationality.
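The die example can be checked numerically. The sketch below (a hypothetical simulation, assuming a fair die) looks only at sequences where the first two rolls are both fives, and measures how often the third roll is also a five; the answer stays at roughly one in six, no matter what came before.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# Among sequences whose first two rolls are both fives,
# count how often the third roll is also a five.
conditioning_events = 0
third_is_five = 0
for _ in range(1_000_000):
    first, second = random.randint(1, 6), random.randint(1, 6)
    if first == 5 and second == 5:
        conditioning_events += 1
        if random.randint(1, 6) == 5:
            third_is_five += 1

ratio = third_is_five / conditioning_events
print(ratio)  # hovers around 1/6 (~0.167): the past rolls tell us nothing
```

Each roll is independent, so conditioning on the previous two changes nothing – which is Hume’s point about induction in miniature.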

In other words, “past performance is not an indicator of future results” isn't just a cut-and-paste line from a PPM – it’s actually true.

A serious undertaking. 

Hume was sceptical about our ability to predict the future. Centuries later, Nassim Nicholas Taleb has written extensively on this subject within the context of markets. He coined the term “Black Swan” to describe highly consequential events that appear improbable based on the available information and statistics, yet seem predictable in hindsight. Many people assume that Coronavirus falls into this category, but was a global pandemic really improbable considering the level of interconnectivity in our world and the lack of investment in public healthcare globally? Taleb argues that humans are good at generalising in some areas but not others – including markets.

Humans might be poor at predicting the future, but it seems some are better than others. Psychologist Philip Tetlock has dedicated his career to the psychology of prediction. His book Superforecasting: The Art and Science of Prediction (written with the help of journalist Dan Gardner) tells the story of The Good Judgment Project, in which Tetlock recruited more than 20,000 people to make predictions on some 500 questions, ranging from the likelihood of political protests in Russia to the path of the Nikkei index.

This was a serious undertaking, sponsored by IARPA, the research and innovation arm of the US intelligence community. Tetlock proved that some people are better at predicting than others, but he didn't stop there – he unpicked what they were doing differently from everyone else. Tetlock observed that the “superforecasters” of the book’s title were analytical and numerate, but also intellectually humble and self-critical. They were not ideological. In fact, they were quick to change their minds in response to new data, and they were receptive to different perspectives.

There is an uncanny link to Hume’s thinking here. His revolutionary science of the mind, based on scientific experimentation and observation, led him to conclude that there was no soul, no self, and no way of knowing for sure that we are right about anything:

“When I enter most intimately into what I call myself, I always stumble on some particular perception or other, of heat or cold, light or shade, love or hatred, pain or pleasure. I never can catch myself at any time without a perception, and never can observe anything but the perception.”

This is strong stuff. Hume is saying that when we scrutinise everything we think we know, the foundations of our world crumble. This is not to say that we should stop living our lives, going about our daily business and trying to make money in markets. Quite the opposite. If the metaphysical doesn't matter, if objective reality must elude us, we must learn to get on with our lives and appreciate the world for what it is. And by abandoning the idea that we are unique, gifted, destined for glory, we might just become humble enough to listen to others, learn from them, and grow into something much more powerful.

The path less travelled.

The futility of prediction has important implications for investors. And yet, we cannot shed the cult figure of the superstar stockpicker, the contrarian genius who can go on CNBC and tell the world why it is wrong and he is right. This approach is effective at shifting product and raising assets, and it’s not going away any time soon. Indeed, social media has created a new breed of hubristic braggers and blaggers. But the truth is, investors rarely prosper by making big contrarian calls – they prosper by exercising intense discipline over their trades. This isn’t headline-grabbing stuff. It doesn't sell fund units. But we passionately believe it is true. 

Like the participants in Tetlock’s study, we find it easy being humble, because we have a lot to be humble about. And we find it hard being ideological, because the world is so complicated and nuanced. It turns out the path less travelled is not even yet a path – the way is yet to be cleared. It is more a way of looking, and hopefully, seeing. By acknowledging the futility of predicting the future, we are no longer blinded by our own hubris – we are open to pursuing new opportunities.

Perhaps rather than trying to predict the future, we should seek to envisage the multiple paths that lead us there? Or, we could go one step further and do as suggested by the great computer pioneer Alan Kay, who believed, “The best way to predict the future is to invent it.” We are in a unique position with our business, one that enables us to invent our own future and play a role in creating the futures of our partners and clients. And for that, in these difficult and unpredictable times, we are thankful.

Investing · Edward Playfair