Weekend Reading #363
This is the three-hundred-and-sixty-third weekly edition of our newsletter, Weekend Reading, sent out on Saturday 9th May 2026.
To receive a copy each week directly into your inbox, sign up here.
*****
What we're thinking.
Not much has changed this week as markets continue to rumble spectacularly on. With more and more signs that the war is nearing its close (in reality, or even just declared so), the everything rally is back. It's been a while since we've had everything go up, but we appear to be in that sweet spot right now. Are we at the cusp of the next commodity bull market? Is the US Dollar poised to resume its longer-term downtrend? These are critical questions. For us the price action is suggestive of both but not yet crystal clear. There are lots of places to make money at the moment, even as we bear in mind that on the other side of all this fun something dark lurks. And still the price action in the AI-related names is simply breathtaking: after being briefly usurped earlier in the week by Google, Nvidia has reclaimed its place at the top. It's all rather fun, right? Right?
What we're reading.
I have written at length in this newsletter about the development and capabilities of the Turkish defense industry. This week Turkey sprang a surprise, announcing that it has developed an intercontinental ballistic missile with a range of up to 6,000km. This development is very significant, as Turkey joins a short list of countries with such (declared) capabilities: the USA, Israel, China, Russia, France, the UK, India and North Korea. It also signals quite clearly that Turkey now sits at the top table in terms of defensive capability. For all the critics of Erdogan, he has quite deliberately presided over an extraordinary period of defense development. One can only hope that he doesn't plan on using his new toys. But that wouldn't be likely, would it?
This is an excellent piece on how AI is coming for our minds and those of our children (but not in the way you think). It is written by an investment manager, Tom Slater, at Baillie Gifford. The irony is that he clearly used AI to help write it! Nevertheless, it contains many great nuggets. The first is a link to a short paper written by a Harvard art history professor in 2013 entitled "The Power of Patience". In it the professor, Jennifer Roberts, explains her belief that her students are in a rush, and that the world is incentivizing them to rush even more, before they are able to absorb the inputs necessary to make a good decision. This was in 2013! Her antidote? As part of her course, students are required to write a paper on a piece of art of their choosing. The first thing she requires them to do is go to the museum or gallery where the painting or piece is situated and sit for a minimum of three hours staring at it. As the students sit, forced to observe and be patient, more and more details emerge from the painting, giving deeper and deeper insights into the work of art. Now can you imagine this exercise today, 13 years (and multiple iterations of social media and short-form video) later!
Dean Radin is the chief scientist at the Institute of Noetic Sciences and the author of Real Magic, a book I read in a few days this week. It is the latest in his bibliography of books seeking to understand the more mysterious realms of our minds and abilities, and a further investigation into one of my pet topics, the nature of consciousness and everything associated with it. In this book he surveys the history of scientific studies investigating whether some of these phenomena (precognition, remote viewing, telepathy, healing etc.) are real. His conclusion is that 40 years of studies and meta-studies show more than enough evidence that they are. But the concept that really got me thinking is his comment that hundreds of years ago we believed many things to be magic because we didn't have a scientific explanation of how they worked. We just didn't have the knowledge; the science hadn't been discovered yet. How many things today is the scientific establishment shooting down because we can't explain them? Properly set-up studies show that these phenomena do exist; we just haven't worked out how yet. We suspect it has to do with quantum physics and the concept of entanglement, and maybe in decades to come (even with the help of AI) we will look back on the current primitive version of ourselves with amusement.
On another note, today, as I write, saw the release of the first batch of UAP files from the US government. I'm sure over the next week somebody will unpack the highlights for me, as I have no intention of digging into the massive file release! DC
One of our big misgivings about relying on LLMs is that while they’re really good for big-picture concepts and summaries of reasonably sized context (e.g. summarise five pages of prose into 500 words), they are extremely unreliable when it comes to critical detail – where precision is required, it’s VERY hard to say for sure that what your LLM of choice tells you is accurate. Which means when it comes to life-and-death situations, the amount of double-checking of LLM output that’s needed is probably high enough to warrant an actual professional human. But how bad are these hallucinations, aka how much of what an LLM tells you is “just made up”? It turns out, according to this paper – which introduces a benchmark called “HalluHard”, measuring hallucinations across four high-stakes domains (legal cases, research questions, medical guidelines and coding) in multi-turn, citation-required conversations – that it’s bad: even the “best” configuration, Opus 4.5 with search, produces about 30% hallucinated claims. Let that sink in for a second – even with web search, nearly a third of what you’re told is made up. Worse, better reasoning doesn’t eliminate hallucinations in multi-turn, citation-heavy tasks either, i.e. building a better reasoner doesn’t make the model more accurate. Rather, it’s model capacity (i.e. how many parameters it has) and position in the conversation (i.e. how deep the conversation goes, and how early a hallucinated claim occurs) that determine how bad the errors become. Put differently: high-volume work where accuracy matters less – no issues. Low-volume work where high accuracy is required – reliability becomes a big issue. Seems we have quite some way to go. EL
What we're watching.
Another nugget from Slater's piece above was a link to a segment of a 2024 conversation between the neuroscientist and podcaster Andrew Huberman and the elite endurance athlete and popular influencer David Goggins. In it, Huberman discusses his learnings from research into a part of the human brain called the anterior mid-cingulate cortex (aMCC). Apparently, this part of the brain is directly linked to someone's ability to do things they don't want to do: exercising a lot when they don't feel like it, studying something boring, etc. And the research shows that this part of the brain grows the more someone does these types of things; it is huge in athletes, for example. It is also linked to living longer. And, as with exercise, the moment one stops doing hard things, it stops growing and indeed can shrink again. There are all sorts of implications from this, especially for those getting older. The first thing that pops into my mind is the concept of retirement. Studies have shown that retirement, for men especially, can be a death knell. Now maybe we know why. Stop striving, stop pushing, stop breaking boundaries, and literally your brain shrinks (well, that may be hyperbole, but you get the drift).
Another highlight this week was Chamath Palihapitiya's conversation with Joe Rogan. Whatever one may think of Chamath and the Golden Age of Grift (kudos to The Shrub for this phrase), he is an extremely bright mind. His observations on AI, on politics and on most of what was discussed are sharp and relevant, and at nearly three hours it was time well spent. One specific highlight was Chamath's reference to a study performed by a chap named Curt Richter, a professor at Johns Hopkins, in the 1950s. The study involved rats (naturally). A rat was dropped into a bowl of water, and the researchers observed how long it survived before it drowned. The answer, on average, was 4 to 4½ minutes before the rats gave up trying to swim and drowned. Then they did it again (with new rats, I presume), but after around 3½ minutes of struggling the researchers rescued the rat, dried it, fed it etc., and then put it back in the water. After this intervention the rats in this group lasted 60 to 80 HOURS before they drowned. It turns out that just believing they would be rescued allowed them to survive way beyond their initially limited capabilities. And who says hope is not a strategy!
And finally in the watching section, this conversation between Adam Taggart and a chap called Michael Oliver was worth watching. Oliver is a technical analyst, but he uses momentum rather than price as his indicator. According to him, momentum has rolled over even though price is making new highs. He advocates precious metals as the place to be, as he expects this to be the last hurrah in equities and still the early stages of a commodity bull market. At the risk of being flippant, it sure seems like we have momentum in the market as we bubble upwards. But then again, it’s worth bearing in mind arguments such as these, as we all know how quickly things can change. It's all very well riding a bubble up, and I really do enjoy it, but it's important to remember to sell before it comes down. DC