AI Part 6: Investment - Foundation Models

Foundation model companies such as OpenAI and Anthropic have something of a problem. While they are producing amazing new technologies, and look like a solid investment compared to "AI pins", they possess a number of crucial weaknesses that undermine their attractiveness.
The most important is that they have no "moat", which is to say no built-in protection against competitors. At first glance, the impressive level of technology they are building would seem to insulate them from competitors, but this advantage is somewhat illusory.
Consider SpaceX. Whatever one might think of its founder, it is impossible to regard its achievements as anything short of revolutionary.
Based on its reusable-rocket technology, first demonstrated in 2015, SpaceX lowered the cost of putting objects into space by a factor of 20, from $54,500/kg to $2,720/kg, and is on track to lower it dramatically again, to $100-$200/kg. If you want to change the world, make something cheaper; and it rarely gets better than 99.7% cheaper.
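A quick sketch of the arithmetic behind those figures, using the prices above and assuming the midpoint of the projected $100-$200/kg range:

```python
# Launch-cost figures cited above, in USD per kg to orbit.
shuttle_era = 54_500   # pre-SpaceX baseline
falcon_9 = 2_720       # SpaceX reusable-rocket pricing
projected = 150        # assumed midpoint of the $100-$200/kg projection

def savings(old: float, new: float) -> tuple[float, float]:
    """Return (reduction factor, percent cheaper) for a price drop."""
    return old / new, (1 - new / old) * 100

factor, pct = savings(shuttle_era, falcon_9)
print(f"Falcon 9:  {factor:.0f}x cheaper ({pct:.1f}% reduction)")  # ~20x, ~95.0%

factor, pct = savings(shuttle_era, projected)
print(f"Projected: {factor:.0f}x cheaper ({pct:.1f}% reduction)")  # ~363x, ~99.7%
```

The oft-quoted "20x" describes the step already taken; the "99.7% cheaper" describes the projected target relative to the pre-SpaceX baseline.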
A decade on, they have no competitors, not even state-level competitors like China or India, and not for lack of demand. The market for launching payloads is robust, with a long line of companies, universities and countries looking to expand their operations and reduce their costs. So why is SpaceX so singular? Why can no one compete?
It is because what SpaceX is doing is extremely hard. Very few organizations possess the resources, alignment, and culture necessary to build a norm-shattering project on the one hand while also possessing the disciplined, risk-averse engineering capability on the other, to say nothing of a leadership willing to let them do it.
AI, on the other hand, is not so much hard as resource-intensive. That does provide a barrier to entry for smaller players, but it is nowhere near as formidable a challenge as what SpaceX does, and the market is not short of capital looking for yield. That is why this market has so many fast followers, such as Meta, Mistral, DeepSeek, Alibaba and xAI, as well as regional startups like Sakana AI in Japan.
To be clear, moving the state of the art forward with original innovations is very difficult, but replicating what has been published in academic papers is much less so, and fine-tuning an AI model on company data will rapidly become commodity tech.
First-Mover Advantage
Looking at OpenAI, the current leader, it's hard not to be reminded of MySpace.
MySpace was the proverbial 800-pound gorilla in the nascent world of social networking: first-mover advantage, dominant market share and huge resources. Yet Facebook came along with a better strategy, much better execution and a sprinkle of innovation, and took it all away.
Anthropic, the number two in this space, literally exists because of internal division and perceived mismanagement at OpenAI. The list of crucial departures, all around the same time, is eye-opening:
- Ilya Sutskever (co-founder/chief scientist)
- John Schulman (co-founder)
- Mira Murati (CTO)
- Bob McGrew (Head of Research)
- Jan Leike (AI Safety co-lead)
- 25+ other senior staff
In that light, it is hard not to notice the emerging consensus that OpenAI is no longer keeping pace with its rivals' models, such as Google's Gemini 3.0 and Anthropic's Claude Opus 4.5. It is not clear whether OpenAI still has the talent necessary to maintain its position, let alone service the astonishing $1.4 trillion in commitments it has made to suppliers.
HSBC estimates that Anthropic has captured a plurality of enterprise market share, at 40%. I attribute this to several factors, most notably AI safety and a continued commitment to fundamental research.
AI Safety
OpenAI and xAI have both brushed off AI safety, but even if one is not concerned with movie plot threats such as AI causing the end of humanity, or social justice concerns like inherent bias, most organizations care a great deal about well-behaved agents.
Such organizations are liable for the errors their AI agents make, such as the fake discount an AI chatbot promised an Air Canada customer. The company's defense, that the chatbot is a "separate legal entity that is responsible for its own actions", somehow did not hold up in court.
A member of the tribunal noted that Air Canada is liable for everything on their website, including everything their AI chatbot says; a precedent that should give any well-managed company pause.
Badly behaved LLMs represent a practical risk (damaged reputation, unhappy customers, loss of profit, etc), so it should come as no surprise that most enterprises are focused on managing that risk as a prerequisite to broadly adopting the technology.
It is unlikely that State Farm, a US insurance company, is keen to use xAI's Grok as a customer service agent, given its very public episodes of antisemitic remarks, injecting conspiracy theories into unrelated conversations and calling itself "MechaHitler".
Fundamental Research
Research is inherently unpredictable in terms of time, money and outcomes, and its practitioners' culture operates on a long-term horizon with little regard for bottom-line thinking. Such properties often make it difficult for business leaders to divine its value, particularly if they are working quarter-to-quarter.
Smart companies of a certain scale, such as Apple, Microsoft, Google or NVIDIA, maintain a research division because the fruit of that investment is what will pay the bills in 20 years, whether as new products, licensing or just staying ahead of the competition.
For example, when VR started to draw market interest around 2015, Microsoft was able to get an initial product out of its research division, which had already been speculatively working on the technology for years.
Conversely, Carly Fiorina, during her time as Hewlett-Packard's CEO, directed HP Labs to abandon fundamental research and refocus all its efforts on work that would improve the bottom line within 6-18 months.
The immediate result was a revolt, followed by the inevitable collapse of the labs as a premier driver of American innovation. Twenty-five years later, HP has devolved into a small collection of companies selling mostly unremarkable, commodity products: a long way from the likes of Apple or NVIDIA, who have kept up their innovation at a frantic pace, rain or shine.
HP is also the company that bought Humane, maker of the aforementioned AI pin, for roughly $116M: a clumsy attempt to be relevant in the current era.
The future's top AI foundation model companies, in the face of capable free and commodity competition, will be the ones who can offer capabilities their competitors cannot, by doing the difficult, unpredictable, expensive work of moving the state of the art forward.
Companies like Anthropic, Google and, to a lesser extent, Alibaba (maker of the Qwen models), with their robust focus on research, are best poised to pull away from the pack and maintain their edge.
An edge that OpenAI has, like HP of old, thrown away in a strategic blunder.