Quantum Mechanics, Many Worlds and Free Energy

Photo by Roman Mager on Unsplash

Just over a week ago, a podcast by Caltech professor Sean Carroll caught our attention: discussing the broad content of his book “Something Deeply Hidden” at a Talks at Google event, he touched on the Everettian school of thought in quantum mechanics - that there exist many worlds, and that the reality we perceive is simply one of those many possible outcomes. The video is definitely worth a watch, followed by a couple of hours of mind-bending puzzle solving.

Grappling with the notion of parallel universes quickly took everyone down the rabbit hole of quantum mechanics. To be clear, none of us have any professional background in quantum mechanics, physics or engineering, and we certainly don’t claim to have gained a complete understanding of the work of the legends - past and present - of the world of physics. Perhaps consolation can be found in the words of one of the great figures of quantum physics, Richard Feynman: “I think I can safely say that nobody understands quantum mechanics.”

Yet there was something profound about these notions. They resonated with us. 

Many worlds, Entropy and the Three-Body Problem

In some ways, the notion of “many worlds” - as much as it sounds like something straight out of science fiction - was more fascinating than bewildering to us. We strongly believe that there is no way for anyone to say with any degree of certainty what will happen in the future, but rather that there is a range of possible outcomes. And while the idea of every decision we make creating multiple copies of ourselves across the universe sounds strange - since no one actually feels like they’re splitting - it is perhaps better thought of as every decision we take bringing us onto a different possible node on a probability tree.

That probability tree, infinite as it seems, nonetheless represents a well-defined set of possible outcomes. Some are more probable than others, but all of them possible. Yet we, human as we are, generally don’t like thinking about life in probabilities. We prefer thinking in terms of black and white, yes and no, wrong and right.

Added to the mix is the idea of Entropy. Common definitions of the term tend to describe it as “disorder” in a system. A more accurate definition is the multiplicity of a system: the number of ways the items in the system can be arranged. The higher the entropy, the more permutations can be created, and the more complex the probabilities become in terms of the possible outcomes of any form of interaction.
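To make multiplicity concrete, here is a toy sketch of our own (in Python, with made-up numbers): count the number of ways a handful of identical units can be spread across a set of slots, and take the logarithm of that count as a simple stand-in for entropy. The point is only how quickly the count explodes as the system grows.

```python
# Toy sketch: entropy as multiplicity (illustrative only).
# We count the ways 20 identical "units" can be spread across n slots
# (the stars-and-bars count), and take S = ln(W) as a stand-in for entropy.
from math import comb, log

def multiplicity(n_slots: int, n_units: int) -> int:
    # Number of ways to distribute n_units identical units over n_slots:
    # C(n_units + n_slots - 1, n_slots - 1).
    return comb(n_units + n_slots - 1, n_slots - 1)

for n_slots in (3, 10, 50):
    W = multiplicity(n_slots, n_units=20)
    print(f"{n_slots:>3} slots: W = {W:,}  ->  S = ln W = {log(W):.1f}")
```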

Nowhere is systemic complexity better expressed than in the physics problem from which we derived our name: the Three-body problem. We introduced the idea of the name back when we first started the business, and the conclusion remains the same: the behaviour of complex systems comprising three or more bodies cannot be simplified into a generalised formula of y = f(x). Put differently, they can be calculated, but not consistently predicted, because such a system is effectively chaotic and acutely sensitive to its initial conditions. Simply, any small variation in how things start has a massive impact on the eventual outcome of things.
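As a purely illustrative sketch (our own toy, not a rigorous simulation): the snippet below integrates a planar three-body system twice, with the second run’s starting positions nudged by one part in a million, and measures how far apart the two runs end up. For a chaotic configuration, that microscopic nudge typically grows into a macroscopic difference - the sensitivity to initial conditions described above. The masses, positions and step sizes here are arbitrary.

```python
# Crude sketch of sensitivity to initial conditions in a planar
# three-body system (G = m = 1, simple velocity-Verlet integrator).
# Illustrative only -- not a production-grade simulation.
import numpy as np

def accelerations(pos):
    # pos: (3, 2) array of body positions; Newtonian gravity with G = m = 1.
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += r / np.linalg.norm(r) ** 3
    return acc

def simulate(pos, vel, dt=1e-3, steps=20_000):
    pos, vel = pos.astype(float), vel.astype(float)
    acc = accelerations(pos)
    for _ in range(steps):
        pos = pos + vel * dt + 0.5 * acc * dt ** 2
        new_acc = accelerations(pos)
        vel = vel + 0.5 * (acc + new_acc) * dt
        acc = new_acc
    return pos

pos0 = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]])
vel0 = np.array([[0.0, -0.5], [0.0, 0.5], [0.5, 0.0]])

pos_a = simulate(pos0, vel0)
pos_b = simulate(pos0 + 1e-6, vel0)   # nudge every starting coordinate by 1e-6
print("separation of final positions:", np.linalg.norm(pos_a - pos_b))
```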

A scientific approach to taking risk

If the threshold for complexity is three, under controlled laboratory conditions, imagine how complex and chaotic the real world is. Companies themselves comprise thousands of employees, and even more suppliers, clients and other stakeholders. Their stock (or tokens) is traded in open markets with a myriad of counterparties determining fund flows with differing trading objectives, themselves affected by even bigger-picture drivers like macroeconomics, politics, sentiment and (of late) Twitter.

Put all of these agents together and you get a market: an n-body problem (where n is now a massive number, much bigger than 3) that is chaotic and high in entropy, with a large number of possible outcomes which can be attained by an even larger number of possible pathways. The result is a supercomplex system, on which we would need to attempt a supercomplex derivation of a “formula”, the results of which are themselves likely to be unreliable.

And despite all of this, the classic tendency of an investor is to say: “Don’t worry, I’ve got this. This is how things will pan out - according to my investment thesis. The market is wrong, I’m right, and eventually they will come over to my way of looking at things.”

Sounds ridiculous? It is. And that’s why we never want to catch ourselves claiming to have cracked the secret of the universe and offering to share that secret formula for success. It doesn’t exist.

Ask not whether you are right, but whether you are wrong

Statisticians take a different approach to hypothesis testing, although it is perhaps this commonly-cited (but technically inaccurate) definition from Investopedia that underscores the fallacy of human cognition: 

Hypothesis testing is used to infer the result of a hypothesis performed on sample data from a larger population. The test tells the analyst whether or not his primary hypothesis is true. Statistical analysts test a hypothesis by measuring and examining a random sample of the population being analyzed.

In practice, a hypothesis test starts with a null hypothesis and checks whether the sample results are statistically different from what that hypothesis implies, at a given level of statistical significance. If the sample outcome is statistically significantly different from the null hypothesis, the null hypothesis is rejected.

And here’s the crux: if the sample outcome is not statistically significantly different, the conclusion is that the null hypothesis cannot be rejected.

Semantics, perhaps, but nonetheless critical: a statistician never states that the null hypothesis is “accepted”, only that it “cannot be rejected” based on the evidence. A test doesn’t tell us whether the primary hypothesis is true; it only tells us whether it is false, or whether it cannot be shown to be false.
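As a concrete sketch of that language (made-up numbers, using Python’s scipy.stats): a one-sample t-test either rejects the null hypothesis at the chosen significance level, or reports that it cannot be rejected - it never “accepts” it.

```python
# Sketch of "reject" vs "cannot be rejected" with a one-sample t-test.
# The data are simulated daily returns; all numbers are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=0.0005, scale=0.01, size=250)  # ~1 year of daily returns

# Null hypothesis: the true mean daily return is zero.
result = stats.ttest_1samp(sample, popmean=0.0)
alpha = 0.05

if result.pvalue < alpha:
    print(f"p = {result.pvalue:.3f} < {alpha}: reject the null hypothesis.")
else:
    print(f"p = {result.pvalue:.3f} >= {alpha}: the null hypothesis cannot be "
          "rejected -- which is not the same as accepting it.")
```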

Seeking to “fix” and “complete” things is a very human tendency, but this same tendency that leads us to declare that we “know” what the truth is (that we are “right”) ignores the fact that our world is inherently probabilistic, and that nothing can be known for sure. More likely than not, we end up constructing a reinforcing narrative to explain and rationalise why our opinion and our view is the correct one. This leaves us increasingly walled off from the possibility of being wrong.

When it comes to probability, less is more

Making an investment decision is very much like hypothesis testing: the null hypothesis can take the form of any investment case to be made, but at no point will we be able to say that we are right, until after the fact.

Testing these hypotheses, however, can turn out to be an extremely complex - and potentially costly - exercise. So the question is how, in a world of complex, interconnected probabilities and multiple pathways, we can improve the odds of making the right moves.

We think that the answer comes from the work of yet another established scientific giant located just down the road from us here in London: Dr. Karl Friston, and his work on the “free energy principle”.

One of the most influential neuroscientists around, with an h-index (a measure of the impact of a scientist’s published papers) almost double that of Albert Einstein, Dr. Friston has seen his work on the “free energy principle” find applications far beyond the scope of neuroscience, in Artificial Intelligence, biology and psychiatry. Again, we do not purport to be experts in the “free energy principle” - it is said that the only person who understands Karl Friston’s Free Energy Principle is Karl Friston himself - but the simplified definition of the principle makes perfect sense to us.

If “free energy” is broadly defined as “the difference between the states you expect to be in and the states your sensors tell you that you are in”, then the principle is that all complex systems do best to resist entropy and chaos by minimising free energy i.e. by minimising the risk of surprise.
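A loose numerical analogy - our own toy, and emphatically not Friston’s formal treatment: if we read “surprise” as the improbability a model assigns to what the sensors actually report, then an agent whose expectations track reality carries less average surprise than an overconfident one.

```python
# Toy analogy only: "surprise" as the negative log-probability a model
# assigns to what is actually observed, plus the KL divergence between
# expected and observed state frequencies. All numbers are made up.
import numpy as np

states = ["calm", "volatile", "crash"]
observed_freq = np.array([0.70, 0.25, 0.05])   # what the "sensors" report

model_a = np.array([0.65, 0.30, 0.05])         # expectations close to reality
model_b = np.array([0.98, 0.01, 0.01])         # overconfident expectations

def avg_surprise(model, observed):
    # Average surprise (cross-entropy): -sum p_obs * log p_model
    return -np.sum(observed * np.log(model))

def kl(observed, model):
    # KL divergence between observed frequencies and the model's expectations
    return np.sum(observed * np.log(observed / model))

for name, m in [("model_a", model_a), ("model_b", model_b)]:
    print(name,
          "avg surprise:", round(avg_surprise(m, observed_freq), 3),
          "KL(observed || model):", round(kl(observed_freq, m), 3))
```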

Adapted to our process, the idea is to harness the chaos of the system to the upside (where more entropy provides opportunities for outsized rewards), while limiting the probability of chaotic downside losses: first through a thematic approach to locating compelling, asymmetrically favourable opportunities, which itself excludes large swathes of the typical investment universe, and then through the use of stop-losses. Stop-losses serve to close out risk exposures below a certain level, effectively truncating the downside risk inherent in any open position - much like removing a portion of undesirable outcomes from a probability distribution (e.g. taking out the blue area of the curve):

[Chart: probability distribution of outcomes, with the downside (blue) area removed by the stop-loss]
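In lieu of the chart, a rough Monte Carlo sketch of the same idea (our own, with made-up parameters): simulate many return paths, close any position that falls through a stop level, and compare the left tail of the resulting distribution with and without the stop.

```python
# Rough Monte Carlo sketch of how a stop-loss truncates the downside of
# a return distribution. Parameters are made up; gaps and slippage ignored.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_days = 10_000, 60
daily = rng.normal(loc=0.001, scale=0.02, size=(n_paths, n_days))
paths = np.cumprod(1 + daily, axis=1)        # cumulative value of each position

stop_level = 0.90                            # exit if the position falls 10%

final_no_stop = paths[:, -1]
hit = (paths <= stop_level).any(axis=1)      # paths that touch the stop at any point
final_with_stop = np.where(hit, stop_level, final_no_stop)

for label, final in [("no stop", final_no_stop), ("with stop", final_with_stop)]:
    print(f"{label:>10}: worst = {final.min():.2f}, "
          f"5th pct = {np.percentile(final, 5):.2f}, mean = {final.mean():.3f}")
```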

Perhaps most importantly, the process is designed to be executed by the entire team, the aim being to mitigate our human biases and our occasional idiocy in coming up with justifications, for the sake of ego, to circumvent the process we ourselves put in place. The priority of the team is executing well on the process.

Narratives and science

Of course, it goes without saying that extrapolating principles governing the movement of subatomic particles to the behaviour of market prices is a bit of a logical leap. Yet to some degree it makes sense: after all, as individuals we are fundamentally made up of subatomic particles, organised into an immensely complex organism, each one of us a part of an infinitely chaotic and complex marketplace.

Just as Schrödinger’s equation defines the full set of possibilities and outcomes at the quantum level, we look at markets as having a massive set of possible outcomes. Unfortunately, we don’t have an equation that defines that full set of possibilities, but at the very least we are able to observe the impact of changes in narrative on the prices of instruments as they trade.

In the face of an ever more complex market environment, our approach must become increasingly scientific. Rather than allowing our human emotions and biases to become entangled with the narratives we inadvertently buy into, our focus has to be on constructing and executing a scientifically objective investment process - implemented by the entire team as a group with different functions - in which significant downside is excluded first, and only then is an attractive, asymmetric upside opportunity pursued.

Put differently, we don’t know how right we might turn out to be, but if we’re wrong, we will be able to quantify and control the downside. No single individual dominates the process, but everyone seeks to ensure the process is executed at its best.

Ending once more with the wisdom of Richard Feynman:

“You see, one thing is, I can live with doubt and uncertainty and not knowing. I think it's much more interesting to live not knowing than to have answers which might be wrong. I have approximate answers and possible beliefs and different degrees of certainty about different things, but I'm not absolutely sure of anything and there are many things I don't know anything about. [...] I don't have to know an answer, I don't feel frightened by not knowing things, by being lost in a mysterious universe without having any purpose, which is the way it really is so far as I can tell. It doesn't frighten me.”

In taking risk, as in science, our strategy is much less about ideas and beliefs than it is about process. This process, in our opinion, is the recipe for maximising the odds of surprising to the upside.