We’re at a unique moment for AI companies building their own foundation models.
First, there’s a whole generation of industry veterans who made their names at major tech companies and are now going solo. You also have legendary researchers with deep experience but vague commercial aspirations. There’s a clear possibility that at least some of these new labs will become OpenAI-sized giants, but there’s also room to do interesting research without worrying about commercialization.
The end result? It’s hard to know who’s really trying to make money.
To make that easier, I propose a sliding scale for foundation model companies. It’s a five-level scale that doesn’t care whether you actually make money, only whether you’re trying to. The idea is to measure ambition, not success.
Think of it in these terms:
- Level 5: We earn millions of dollars every day, thank you very much.
- Level 4: We have a detailed, multi-stage plan to become the richest people on Earth.
- Level 3: We have many promising product ideas, which will be revealed in the near future.
- Level 2: We have a rough outline of a plan.
- Level 1: True wealth is when you love yourself.
The big names are all at Level 5: OpenAI, Anthropic, Gemini, and the rest. The scale gets more interesting with the newer generation of labs, launched with big dreams but ambitions that can be harder to read.
Importantly, the people running these labs can generally pick whichever level they want. There’s so much money in AI these days that no one is going to grill them over a business plan; even if the lab is just a research project, investors will count themselves lucky to be involved. And if you’re not motivated to become a billionaire, you’ll probably live a happier life at Level 2 than at Level 5.
The problem arises because it’s not always clear where AI labs sit on the scale, and much of the current AI industry drama stems from that confusion. Much of the angst over OpenAI’s conversion from a nonprofit comes from the lab spending years at Level 1, then jumping to Level 5 almost overnight. On the other hand, you might argue that Meta’s fundamental AI research has been stuck at Level 2 when what the company wants is Level 4.
With that in mind, here’s a quick overview of four of the most prominent new AI labs, and how they measure up.
Humans&
Humans& made big AI news this week, and it was part of the inspiration for this whole scale. The founders have an interesting vision for next-generation AI models, with scaling laws giving way to an emphasis on communication and coordination tools.
But for all the glowing press, Humans& has been coy about how that will translate into real, monetizable products. It’s clear the team wants to build a product; they just won’t commit to anything specific. The most they’ll say is that they will build some kind of AI workplace tool, replacing products like Slack, Jira, and Google Docs while also redefining how those tools work at a basic level. Workplace software for the post-software workplace!
It’s my job to figure out what this stuff means, and I’m still confused about that last part. But it’s just specific enough that I think we can put them at Level 3.
Thinking Machines Lab
This one is hard to rate! In general, when OpenAI’s former CTO and a project lead for ChatGPT raises a $2 billion seed round, you have to assume there’s a pretty specific roadmap. Mira Murati doesn’t strike me as someone who jumps in without a plan, so until recently I felt good placing TML at Level 4.
But then came the news of two weeks ago. The departure of CTO and co-founder Barret Zoph grabbed plenty of headlines, in part because of the unusual circumstances surrounding it. But at least five other employees left alongside Zoph, many of them voicing concerns about the company’s direction. Just one year in, nearly half of the executives on TML’s founding team no longer work there. One way to read those events is that they thought they had a solid plan for becoming a world-class AI lab, only to discover the plan wasn’t what they thought it was. Or in terms of the scale: they wanted a Level 4 lab, but found out they were at Level 2 or 3.
It’s still not enough evidence to justify a downgrade, but it’s close.
World Labs
Fei-Fei Li is one of the most respected names in AI research, best known for creating the ImageNet challenge that pioneered contemporary deep learning techniques. She currently holds a Sequoia-endowed chair at Stanford, where she directs two different AI labs. I won’t bore you with all her honors and academic appointments, but suffice it to say that if she wanted, she could spend the rest of her life collecting awards and being told how great she is. Her book is also quite good!
So in 2024, when Li announced she had raised $230 million for a spatial AI company called World Labs, you might have assumed it would operate at Level 2 or lower.
But that was over a year ago, which is a long time in the AI world. Since then, World Labs has delivered two full world-generating models and commercial products built on top of them. At the same time, we’ve seen real signs of demand for world models from the video game and special effects industries, and no major lab has built anything that can compete. The result looks very much like a Level 4 company, perhaps one graduating to Level 5 soon.
Safe Superintelligence (SSI)
Founded by former OpenAI chief scientist Ilya Sutskever, Safe Superintelligence (or SSI) looks like a classic example of a Level 1 startup. Sutskever has been adamant about keeping SSI insulated from commercial pressure, to the point of rejecting an acquisition attempt from Meta. There’s no product cycle and, apart from the superintelligent foundation model still baking in the lab, there doesn’t seem to be a product at all. With this pitch, he has raised $3 billion! Sutskever has always been more interested in the science of AI than the business, and every indication is that this project is genuinely scientific at heart.
That said, the AI world moves fast, and it would be foolish to count SSI out of the commercial realm entirely. In a recent appearance on Dwarkesh Patel’s podcast, Sutskever gave two reasons SSI might pivot: “if the timeline becomes long, which is possible,” or because “there is a lot of value in the best and most powerful AI that has an impact on the world.” In other words, if the research goes very well or very badly, we could see SSI jump up a few levels in a hurry.

