Zero fixed costs: a game changer
“Every company is trying to be a tech company.”
That was the joke making the rounds within the investment community a couple of years ago.
To some extent, the line was uttered with derision, implying that a “non-tech” company was masquerading as a tech company in order to get the market to pay a premium valuation for its stock or view its prospects more favourably.
Now “tech” is so prevalent that it’s perhaps not a long shot to say that every major company is a tech company: miners have tech for imaging and prospecting reserves; farmers have tech for managing irrigation; artisans have tech for planning distribution and marketing… the list goes on.
Yet as we’ve pointed out before when we drew the line between “fintech” and something more profound like Decentralised Finance (aka DeFi), the application of tech can happen to varying degrees. Sometimes tech can be applied to allow you to do something in the same way, but much more quickly and efficiently. That’s interesting, but inherently self-limiting in the business sense, since the only benefit would be lower costs, which are ultimately matched as the rest of the pack catches up.
What is more interesting is when technology fundamentally rewires the DNA of microeconomics, bringing about new business models, approaches and philosophies.
Anything as a service.
It may surprise you to know that the idea of “Software as a Service” is not a recent invention. SaaS has been a thing since the 1960s, involving “dumb” terminals (keyboards and monitors without CPUs) that were networked into a mainframe or mini-computer, allowing users to access software and data stored on the mainframe in what was effectively a “time sharing” system. This was the starting point of what eventually became Local Area Networks (LANs), then the Internet and everything else we are familiar with today.
While technological developments have been nothing short of astounding examples of human ingenuity, to focus on them would be to miss the bigger point: by sharing infrastructure across multiple users instead of buying computing power on a per-user basis, economies of scale were gained. At the firm level, it was one huge fixed cost shared across many individuals; at the individual level, what had been a fixed cost (one computer per person) became a marginal cost (a charge per use).
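The fixed-to-marginal shift described above can be sketched with a toy calculation. To be clear, every price and usage figure below is an invented assumption for illustration, not historical data:

```python
# Toy model of time-sharing economics.
# All prices and usage figures are invented assumptions, not historical data.

def per_user_cost_dedicated(machine_price: float) -> float:
    """One machine per user: the entire fixed cost lands on each individual."""
    return machine_price

def per_user_cost_shared(mainframe_price: float,
                         hours_used: float,
                         total_billable_hours: float) -> float:
    """Time-sharing: the mainframe's fixed cost is recovered across all billable
    hours, so each user pays in proportion to usage -- a marginal cost."""
    return mainframe_price * (hours_used / total_billable_hours)

# A user who owns a (hypothetical) 5,000-per-seat machine...
print(per_user_cost_dedicated(5_000))            # 5000
# ...versus one consuming 20 of 1,000 billable hours on a shared 100,000 mainframe.
print(per_user_cost_shared(100_000, 20, 1_000))  # 2000.0
```

Under these made-up numbers, a light user’s attributable cost falls from 5,000 to 2,000 – and the higher the shared machine’s utilisation, the cheaper each unit of use becomes.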
Although the overarching principle was extremely well-established, the concept of “sharing” never really expanded out of an individual organisation: while individuals within a firm could share infrastructure, it was extremely rare for companies to share significant infrastructure.
Unsynchronised developments.
To be sure, competition was probably a major reason why companies weren’t keen to share infrastructure. It was arguably a very different matter to share a building with a competitor than to share IT infrastructure (or is it?). But the other major reason was probably that there weren’t enough resources to share in the first place. As the cost of personal computers, popularised by IBM, plummeted in the late 1980s and early 1990s, it became easier to provide each employee with a desktop computer, which in turn ramped up the demands on internal infrastructure: server rooms, storage and computing capacity, IT support staff, telecommunications bandwidth – the list went on.
In contrast, wholesale network infrastructure took a much longer time to catch up. The first transatlantic cable – laid in 1858 – ran from Ireland to Newfoundland, making telegraph communication possible between England and Canada.
Transatlantic telephone cables went into service in 1956, and 32 years later the first fibre-optic cable connected Europe and America. TAT-8, constructed by a consortium of companies (including AT&T, France Telecom and British Telecom), was the 8th transatlantic communications cable and the first built with fibre optics, carrying 280Mbps of bandwidth between the US, UK and France. Consider that number for one moment, given that the average internet connection for a UK household is already c. 22Mbps.
From that encouraging start, global fibre-optic infrastructure has been built out in alternating waves of enthusiasm and despair.
The result was that for most of the 90s and early 2000s, connectivity and bandwidth remained expensive. Fortunately for all of us, the forecasts in the chart below (published in 2010, interestingly) have more or less come true:
The bad news is that if your business had its heyday in the 90s and early 2000s, the only way you could have secured growth back then was to have invested in a full suite of infrastructure: from data centres to full IT support departments. And these investments would have rightly been made to last, with the aim of securing fixed infrastructure for many years to come.
“Sharing” was squarely off the table. Unfortunately, as with everything, timing is (almost) everything.
Wrong time, can’t share.
Ask any CTO today if they would invest in setting up their own in-house data centre (or even a “server room”) and you’d probably be laughed out of the room. A quick Google search of “CTOs planning in-house networking infrastructure investment” yields results with one common word: outsourcing.
But what happens when you’ve spent hundreds of millions of dollars investing in staff, fixed infrastructure, data centres, server racks, servers and IT contracts? You stick with it and hope that the scale benefits from being a huge organisation still accord you unit costs that are competitive to the “retail” market.
In most cases, that hope would have been borne out. The early wave of companies providing shared IT infrastructure (particularly cloud and managed hosting services) like Rackspace didn’t enjoy much initial success, fighting against waves of internal opposition: How reliable would they be? Wouldn’t it be better to keep an eye on our own infrastructure rather than let someone else manage critical operations?
Then came Amazon. Jeff Bezos’ shareholder letters are a massive joy to read (perhaps much more so than Warren Buffett’s), but his 2010 shareholder letter is perhaps one of the classics (emphasis ours):
To our shareowners:
Random forests, naïve Bayesian estimators, RESTful services, gossip protocols, eventual consistency, data sharding, anti-entropy, Byzantine quorum, erasure coding, vector clocks… walk into certain Amazon meetings, and you may momentarily think you’ve stumbled into a computer science lecture.
Look inside a current textbook on software architecture, and you’ll find few patterns that we don’t apply at Amazon. We use high-performance transactions systems, complex rendering and object caching, workflow and queuing systems, business intelligence and data analytics, machine learning and pattern recognition, neural networks and probabilistic decision making, and a wide variety of other techniques. And while many of our systems are based on the latest in computer science research, this often hasn’t been sufficient: our architects and engineers have had to advance research in directions that no academic had yet taken. Many of the problems we face have no textbook solutions, and so we -- happily -- invent new approaches.
Our technologies are almost exclusively implemented as services: bits of logic that encapsulate the data they operate on and provide hardened interfaces as the only way to access their functionality. This approach reduces side effects and allows services to evolve at their own pace without impacting the other components of the overall system. Service-oriented architecture -- or SOA -- is the fundamental building abstraction for Amazon technologies.
By 2010, Amazon Web Services was already 5 years old. But it wasn’t until 2015 that Amazon started to report AWS as a standalone segment in its annual reports – and its maiden segment reporting appearance gave the world a glimpse of how profoundly it had changed the landscape for outsourced IT services: US$7.9bn of annual sales in 2015, at more than 20% operating margins, with that level of profitability achieved since 2013.
The key difference between Amazon and any other cloud services provider was that it was able to provide its services at marginal cost, rather than at an “average” cost, simply because its internal consumption of computing power was more than sufficient to justify the scale of its investments in AWS. Everything else was “a bonus” – Amazon was AWS’ single largest customer, just as it is also its largest customer in logistics, procurement, real estate…
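That marginal-versus-average distinction can be made concrete with a toy calculation; again, every number here is invented purely for illustration:

```python
# Toy model: all figures are invented for illustration, not Amazon's actual costs.
FIXED_COST = 1_000_000.0       # data centres, staff, contracts (per year)
MARGINAL_COST_PER_UNIT = 0.02  # power, bandwidth etc. per unit of compute sold

def average_cost_price(units_sold: float) -> float:
    """A standalone provider must recover its fixed costs from external
    customers, so its break-even price is the average cost per unit."""
    return FIXED_COST / units_sold + MARGINAL_COST_PER_UNIT

def marginal_cost_price() -> float:
    """If internal demand already justifies the fixed investment, each
    external unit can be priced near its marginal cost alone."""
    return MARGINAL_COST_PER_UNIT

print(average_cost_price(10_000_000))  # 0.12 per unit for the standalone provider
print(marginal_cost_price())           # 0.02 per unit when fixed costs are already covered
```

The standalone provider must charge six times more per unit just to break even – which is the structural advantage a customer-of-its-own-services firm enjoys.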
The widespread acceptance of AWS as an outsourced service provider, including by other major internet companies like Netflix and Intuit, as well as local and international government departments, was a major inflection point in the perception of outsourced services in almost every sector, not exclusively in IT.
Sharing was no longer taboo, and “Anything as a Service” was finally back – from Shopify providing hosting, payment and inventory management “as a service”, to the likes of Twilio providing marketing/communications/connectivity “as a service”, to Microsoft providing its office suite “as a service”, to even learning, with star teachers and accreditation delivering education “as a service”. Suddenly, no one needed to “own” anything: everything became a subscription. And it wasn’t just the product itself being shared – the infrastructure was too, with users free to build and create as they saw fit, a sort of real-world Minecraft.
Back in the day, we used to…
In almost every tertiary, service-based sector, the trends have been unstoppable: everything is going open-source, standards have been thrown open to public participation and scrutiny, and creativity is at an all-time high as the barriers to entry (high fixed costs) have evaporated and made way for the freedom to experiment. No longer are economies of scale the preserve of the biggest players in the industry.
There’s one glaring exception. Our industry.
Hamstrung by fears around regulation and the risk of trying something new (and attracting a regulatory rebuke), the world of Finance and Investment Management looks very much the same as it did in the 1990s. For sure, there have been changes: the days of Gordon Gekko are well and truly over, but at the heart of everything, the “technology” stack at the centre of most investment banks and investment managers remains a jazzed-up version of what it was decades ago. After all, there’s very little impetus to change. We have been trained to be risk-averse, a perverse overcompensation for the supposed “riskiness” of our “day jobs”. As we used to be told, “if it ain’t broke, don’t fix it.”
Sometimes, “the way things are done” may be the culmination of years of wisdom into a rule of thumb. Other times, it’s just a thinly veiled excuse to not change. And hence the gravity of inertia: the risk of obsolescence.
For our part, we started Three Body Capital with the full knowledge that our industry faces the risk of a massive disruptive dislocation. We’ve built our business from the ground up, from first principles: sceptically questioning every piece of “convention” we’re encouraged to follow, especially those that exist “because that’s the way it’s always been”. Being able to access the best technology in the world at marginal cost, without any long-term obligations, is a game changer that remains largely untapped by the industry, perhaps not entirely by choice.
We started our business with a blank slate. Some say it is a tough job building something up from scratch; we’d be the first to agree that it’s been an intense path so far. At the same time, a blank slate is the greatest opportunity – and the greatest catalyst – for creativity.
And boy, have we been creative: from the moment we sat down to design our very own logo (for the grand cost of zero), to being bold (or insane) enough to go for our performance-fee-only fund mandate, to piecing together the tech and interface for our friction-free private deals platform. We have spent our careers analysing companies and stocks for opportunities, and we have had the fantastic experience of applying everything we have learnt back into our own products across the business. In the coming months, we will reveal more product offerings which exploit all of the above.
Sometimes we’re so excited about what we have planned that we struggle to sleep!