Last updated Nov 29, 2025

In conversation with Sam Altman

Published: Fri, 10 May 2024 19:47:00 +0000
Chamath @ 01:05:46 · Inconclusive
Tags: ai, tech
OpenAI will become and remain one of roughly four dominant, most important companies in the AI sector over the coming years.
I think that these guys are going to be one of the four major companies that matter in this whole space.
Explanation

As of November 30, 2025, OpenAI clearly is one of the leading AI companies in the world by most conventional metrics (valuation, revenue, user base, and ecosystem impact). Estimates put OpenAI’s annualized revenue around $10B by mid‑2025, outpacing other model vendors, and analyses of the “largest AI companies in 2025” describe OpenAI as a central, market‑defining player used by hundreds of millions of people and deeply embedded in enterprise and developer workflows. (epoch.ai)

However, Chamath’s prediction had two parts: (1) that OpenAI would be one of the dominant companies, and (2) that there would be roughly four such major companies that “matter in this whole space” over the coming years. Part (1) is broadly consistent with today’s landscape: OpenAI is typically grouped with Alphabet/Google (Gemini/DeepMind), Microsoft (Copilot/Azure AI), Nvidia (AI chips/platforms), Anthropic (Claude), Meta (Llama), Apple, and sometimes Amazon as top‑tier AI players. Part (2) is shakier: the market clearly involves more than four major actors today, and competitive dynamics are still evolving rapidly. (linkedin.com)

New, heavily funded entrants (e.g., Jeff Bezos’s Project Prometheus and Mira Murati’s Thinking Machines Lab) are also still emerging and hiring from OpenAI and other leaders, underscoring that the eventual long‑term concentration of power is not yet settled. (en.wikipedia.org) Given that only about 1.5 years have passed since the May 2024 podcast and Chamath’s phrase “over the coming years” implies a multi‑year horizon, it is too early to determine whether OpenAI will remain one of roughly four dominant companies or whether the field will consolidate differently.

So while current evidence is directionally favorable to OpenAI being a top‑tier AI company, the specific prediction about being one of “four major companies that matter” over the coming years cannot yet be definitively judged, hence the result is inconclusive.

Chamath @ 01:05:46 · Inconclusive
Tags: ai, tech, economy
Over time, frontier AI foundation models from leading providers will converge to roughly similar, very high capability levels, and most of the economic value and monetization will shift to the surrounding "scaffolding" (infrastructure, tools, and application layers) rather than the core models themselves.
the models will roughly all be the same, but there's going to be a lot of scaffolding around these models that actually allow you to build these apps... I think the right way to think about this now is the models will basically be all really good. And then it's all this other stuff that you'll have to pay for.
Explanation

By late 2025, parts of Chamath’s thesis are playing out, but the overall claim is still too early to call.

1. Capability convergence of frontier models
Multiple independent benchmarks now show OpenAI, Anthropic, Google, Meta and leading Chinese/open‑source models clustered at the top with relatively small gaps, and different models winning on different tasks:

  • Academic and industry evaluations of GPT‑4.1 / GPT‑4o, Claude 3/4, and Gemini 1.5/2.5 find that while one model may edge out others on a given benchmark (e.g., calculus problem‑solving, abuse detection), all are “very strong” and broadly competitive, with performance differences often in single‑digit percentage points. (arxiv.org)
  • Community and meta-rankings (e.g., Kagi’s May 2025 LLM ranking) place GPT‑4.x, Claude 4, Gemini 2.x, Llama 3.1/4 and Qwen 3 in the same top tier rather than one clear runaway winner. (reddit.com)
  • Newer releases (e.g., Claude Opus 4.5, Gemini 3 Pro, GPT‑5-class models) keep leapfrogging each other by a few benchmark points, reinforcing the picture of a tightly packed frontier rather than a single dominant, uniquely capable model. (itpro.com)
    This is consistent with Chamath’s qualitative point that “the models will basically be all really good,” though they are not literally identical and still have distinct strengths; the sketch below illustrates the leaderboard pattern these rankings describe.
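To make the “tightly packed frontier” picture concrete, here is a minimal sketch that computes the per‑benchmark winner and top‑to‑bottom score spread for a set of models. All model names and scores below are invented placeholders, not real benchmark results; the point is only the shape of the pattern the rankings above describe (different winners per task, single‑digit spreads).

```python
# Hypothetical leaderboard: model names and scores are invented placeholders,
# NOT real benchmark results. The pattern to notice: different models win
# different tasks, and the top-to-bottom spread per task stays in single digits.
scores = {
    "math_benchmark":   {"ModelA": 92.1, "ModelB": 90.4, "ModelC": 88.9, "ModelD": 91.3},
    "coding_benchmark": {"ModelA": 84.7, "ModelB": 87.2, "ModelC": 85.0, "ModelD": 83.9},
    "qa_benchmark":     {"ModelA": 89.5, "ModelB": 88.1, "ModelC": 90.6, "ModelD": 89.9},
}

for bench, by_model in scores.items():
    winner = max(by_model, key=by_model.get)                   # best model on this task
    spread = max(by_model.values()) - min(by_model.values())   # top-to-bottom gap
    print(f"{bench}: winner={winner}, spread={spread:.1f} points")
# Output shows three different winners with spreads of ~2-3 points each,
# i.e. a tightly packed top tier rather than one runaway leader.
```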

2. Economic value shifting to scaffolding vs. core models
Evidence for the stronger part of the claim—most monetization moving from core models to surrounding infrastructure, tools and applications—is mixed and not yet decisive:

  • Core model providers remain extremely valuable and highly monetized in their own right. OpenAI is estimated at about $13B annualized revenue in 2025, largely from ChatGPT subscriptions and API usage. Anthropic targets roughly $9B by end‑2025 and $20–26B in 2026, with ~80% of revenue from enterprise products built directly on its models. (reuters.com) These are very large, direct model businesses, not yet commoditized utilities.
  • At the same time, there is rapid growth in the “scaffolding” layer:
    • Developer tools like Cursor (Anysphere) have reached ~$500M ARR by 2025, selling an AI coding experience that can swap between multiple backend models. (en.wikipedia.org)
    • GitHub Copilot and Google’s Antigravity IDE act as multi‑model orchestration layers where models (GPT‑4.x, Claude, Gemini) are somewhat interchangeable components inside higher‑value workflows. (en.wikipedia.org)
    • Public markets and investors increasingly highlight enterprise software and infrastructure companies (Salesforce, ServiceNow, Snowflake, Microsoft, Meta) as the primary long‑term AI winners, where models are embedded as features inside much larger platforms. (kiplinger.com)
      This supports the direction of Chamath’s view—that a lot of value is accruing to application, tooling, and infrastructure layers—but we do not yet have clear quantitative evidence that most of the economic value has shifted away from model providers themselves. Frontier model training and API provision remain capital‑intensive, heavily moated, high‑revenue businesses, even though the leading labs reportedly remain unprofitable as they pour cash into training and compute.

3. Time horizon
The prediction was explicitly framed as an “over time” structural shift. As of November 30, 2025—about 18 months after the podcast—frontier models appear to be converging in capability, and scaffolding layers are clearly emerging as powerful, high‑revenue businesses. However, core model vendors still capture a substantial share of generative‑AI‑specific revenue, and we lack solid data showing a durable, majority share of economic value has moved to scaffolding.

Because the long‑term competitive and profit dynamics of models vs. scaffolding are still unfolding and the claim is inherently multi‑year, the fairest assessment today is “inconclusive (too early)” rather than clearly right or wrong.

Tags: ai, tech, venture
Companies that successfully build the non-model "scaffolding" (infrastructure, tools, and application layer) around large language models will be able to build very successful, large businesses in the AI era.
Whoever builds all this other stuff is going to be in a position to build a really good business.
Explanation

By late 2025, multiple companies focused on “scaffolding” around large language models—rather than on training base models themselves—have in fact become very large, commercially successful businesses, matching Chamath’s thesis.

Infrastructure / tooling layer

  • Scale AI provides data-labeling, RLHF, and deployment tooling used by major foundation model labs. Meta’s 2025 deal valued Scale at about $29B, with expectations of $2B in 2025 revenue, making it a large and clearly successful infrastructure business built around (not as) LLMs. (ft.com)
  • Pinecone, a vector database that serves as long‑term memory for LLM apps, raised a $100M Series B at a $750M valuation and is described as a “critical component” of the generative‑AI stack. (pinecone.io)
  • Weights & Biases, a widely used AI developer platform for training and monitoring models, was acquired by Nvidia‑backed cloud provider CoreWeave in a deal reported around $1.7B, to deepen CoreWeave’s AI developer tooling ahead of an IPO targeted at $35B+ valuation. (reuters.com)
  • LangChain, an open‑source framework for building AI agents and applications, raised $125M at a $1.25B valuation in 2025, reflecting substantial commercial value from pure “agentic” scaffolding software. (techcrunch.com)

Application / developer-product layer built on top of models

  • Anysphere (Cursor), an AI-assisted IDE built on top of foundation models rather than training its own, reached about $500M in ARR and a $9.9B valuation by mid‑2025. (en.wikipedia.org)
  • Cognition AI (Devin), an autonomous coding agent relying on underlying model infrastructure, hit a $10.2B valuation with ARR growing from $1M (Sep 2024) to $73M (Jun 2025). (techcrunch.com)
  • Lovable, an AI software‑engineering platform that builds apps from natural‑language prompts using existing models, surpassed $100M ARR just eight months after launch and raised $200M at a $1.8B valuation in July 2025. (en.wikipedia.org)
  • Perplexity AI, an LLM‑powered search and browser company, relies on models from others but provides the surrounding product and UX. By 2025 it was valued around $18–20B, with ARR approaching $200M, and signed a $400M search integration deal with Snap, underscoring the scale possible at the app layer. (cnbc.com)

Infrastructure around AI compute

  • Crusoe, which builds AI‑specific data centers (compute “plumbing” for model and app providers), raised $1.4B at a valuation above $10B and is constructing multi‑gigawatt campuses for OpenAI and others—another example of non‑model infrastructure becoming a very large business. (ft.com)

Across these examples, companies that supply the non‑model scaffolding—vector databases, data/ML tooling, agent frameworks, AI‑enhanced IDEs, and AI data‑center infrastructure—have already achieved multi‑billion‑dollar valuations and, in many cases, hundreds of millions to billions in annual revenue. That is exactly the kind of “really good business” Chamath described. While the long‑term landscape can still change, the evidence by November 2025 shows that his prediction has, in substance, come true.

Tags: markets
Apple will primarily use its very large excess cash to repurchase its own stock rather than to make large new investments or acquisitions, continuing a heavy buyback strategy in the coming years.
It's an incredible business that has so much money with nothing to do. They're probably just going to buy back the stock. Just a total waste.
Explanation

Evidence since May 2024 shows Apple has indeed continued to use the bulk of its excess cash for large-scale share repurchases, while avoiding major acquisitions and only modestly increasing capital investment.

Key points:

  • Massive, continuing buyback authorizations. In May 2024 Apple’s board approved a record $110 billion repurchase authorization, and in May 2025 it added a further $100 billion program, the two largest buyback authorizations in U.S. history and clear signals that buybacks remain central to capital allocation. (macmegasite.com)
  • Actual repurchases dominate cash use. Apple repurchased about $70 billion of stock in the first nine months of fiscal 2024 and roughly $104.2 billion for full-year 2024. (sec.gov) In the first half of 2025 alone, Apple bought back another $48.3 billion of stock ($23.3B in Q1, $25.0B in Q2), and data on net equity repurchased show roughly $90–95 billion of buybacks for 2024 and a similar magnitude in 2025. (sec.gov) These amounts are on the same order as Apple’s annual free cash flow (about $109 billion in FY 2024), meaning most free cash is being returned to shareholders rather than redirected into new ventures; a rough ratio check appears in the sketch after this list. (bullfincher.io)
  • Capex and other investments are much smaller. Apple’s capital expenditures were about $9.45 billion in FY 2024 and ~$12.7 billion in FY 2025—an increase, but still less than one‑seventh of what the company spends on buybacks in a typical recent year. (financecharts.com) Even significant real-estate purchases (around $500 million in Bay Area properties in one week of June 2025) are small relative to annual repurchase totals. (sfchronicle.com)
  • No shift to large transformative acquisitions. Apple’s 2024–2025 deals remain small, such as buying AI-focused startups like DarwinAI, Mayday Labs, the Pixelmator Team, and game studio RAC7; its largest acquisition is still the $3 billion Beats deal from 2014. (en.wikipedia.org) Tim Cook reiterated in July 2025 that while Apple is “very open” to AI-related M&A of any size, the companies acquired so far in 2025 are “small in nature,” underscoring the absence of large-scale takeovers. (cnbc.com)
  • Even big new initiatives haven’t eclipsed buybacks. In February 2025 Apple announced a headline-grabbing plan to spend more than $500 billion in the U.S. over four years, including a new AI server factory in Texas, doubling its U.S. Advanced Manufacturing Fund, and various R&D and supplier commitments. (apple.com) However, this figure bundles operating spending, supplier contracts, content production, and pre-existing investment patterns; it has not coincided with a reduction in the scale of Apple’s repurchase programs. A contemporaneous analysis notes that Apple still leads the S&P 500 in cumulative buybacks (~$709 billion total, ~$90.7 billion in the most recent year) and is criticized for prioritizing shareholder returns over aggressive AI capex, confirming that its capital-return bias remains intact. (marketwatch.com)
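To make the scale comparison above concrete, here is a back‑of‑the‑envelope sketch using the rounded FY 2024 figures cited in this list (buybacks ≈ $104.2B, free cash flow ≈ $109B, capex ≈ $9.45B). These are approximate published estimates, so treat the ratios as illustrative arithmetic rather than precise accounting.

```python
# Rough FY 2024 capital-allocation ratios for Apple, using the rounded
# figures cited above (all in $ billions; approximate, not audited values).
buybacks = 104.2        # FY 2024 share repurchases
free_cash_flow = 109.0  # approximate FY 2024 free cash flow
capex = 9.45            # FY 2024 capital expenditures

print(f"Buybacks / free cash flow: {buybacks / free_cash_flow:.0%}")  # ~96%
print(f"Buybacks / capex:          {buybacks / capex:.1f}x")          # ~11x
# Roughly 96% of free cash flow went to repurchases, and buybacks ran about
# 11x capex -- consistent with capex being "less than one-seventh" of buybacks.
```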

Taken together, the post–May 2024 record shows Apple continuing extremely large stock buybacks that absorb most of its excess cash, while capex and acquisitions, though present and growing in AI and manufacturing, remain comparatively modest. That matches Chamath’s prediction that Apple would primarily use its large cash pile to repurchase its own stock rather than to pursue major new investments or acquisitions, at least over the ~18 months since the podcast aired.

Tags: ai, health
AlphaFold 3–style AI systems will usher in a new era of drug development and biological design, leading to profound, large-scale impacts on human health and the biopharma industry over the next couple of decades.
it's going to usher in a new era of drug development, design for human health. So all in all, I'd say it's a pretty astounding day... the capabilities and the impact are going to be like, I don't know, I know I say this sort of stuff a lot, but it's going to be pretty profound.
Explanation

The prediction is framed on a multi-decade horizon ("a new era" with "pretty profound" impacts on drug development and human health), but we are only about 1.5 years past the May 2024 release of AlphaFold 3, far too early to evaluate long‑run, large‑scale health and industry impacts.

Evidence so far is directionally supportive but preliminary:

  • AlphaFold‑based methods are being actively refined for drug design; for example, the AF2RAVE approach explicitly combines AlphaFold with physics-based simulations to improve drug-relevant protein conformations, with researchers describing this as enabling faster drug discovery and potentially personalized medicine, though they themselves characterize it as an early-stage step toward that vision. (cmns.umd.edu)
  • AI-driven drug discovery companies (including those building on AlphaFold-style models) have AI‑designed drug candidates entering or approaching human clinical trials, which is a major milestone but still only the beginning of the clinical pipeline. (aitechsuite.com)
  • Market analyses project rapid growth of AI in drug discovery through 2030 and describe it as transformational for timelines and costs, indicating strong industry uptake but not yet demonstrating realized, population-scale health outcomes. (stocks.observer-reporter.com)

However, demonstrating “profound, large-scale impacts on human health and the biopharma industry” requires:

  • Successful completion of Phase II/III trials for multiple AI-designed drugs
  • Regulatory approvals and widespread clinical adoption
  • Clear, measurable improvements in morbidity, mortality, or cost at scale across many indications

Those outcomes typically take 10–15 years or more from early discovery to broad clinical impact, and as of late 2025 the field is still in the early validation phase. Current data show promise but cannot yet confirm whether the long-run, transformative impact Friedberg predicted will actually materialize.

Because the forecast explicitly concerns the next couple of decades and we are only a short way into that period, with crucial clinical and economic results still pending, the prediction cannot be judged as right or wrong at this time.

Tags: ai, health, science
In the future, AI systems like AlphaFold 3 will enable in-silico search over chemical space to design combinations of molecules that can safely reprogram and "de-age" human cells (Yamanaka-factor-style approaches), leading to powerful longevity or rejuvenation therapies.
We can now simulate that. So with this system, one of the things that this AlphaFold three can do is predict what molecules will bind and promote certain sequences of DNA, which is exactly what we try and do with the Yamanaka factor based expression systems and find ones that won't trigger off target expression. So meaning we can now go through the search space in software of creating a combination of molecules that theoretically could unlock this fountain of youth to de-age all the cells in the body and introduce an extraordinary kind of health benefit.
Explanation

As of November 30, 2025, this prediction is about a long‑term future capability and clinical impact, and there is not yet enough time or evidence to say it is clearly right or clearly wrong.

What has happened so far

  1. AlphaFold 3 and related models can model complex interactions, but mainly in in‑silico and preclinical contexts.

    • DeepMind/Isomorphic Labs’ AlphaFold 3 (2024) extends structure prediction to protein–protein, protein–DNA/RNA, and protein–ligand complexes, improving the ability to model binding interactions across chemical space.
    • However, these are prediction tools, not validated end‑to‑end drug‑discovery pipelines that guarantee safe, specific reprogramming of human cells.
  2. In‑silico search over chemical and biological space is real and rapidly improving.

    • Many groups and companies now use generative and predictive models (including structure predictors like AlphaFold, diffusion models, and reinforcement learning) to design small molecules, biologics, and RNA constructs. Some are applied to aging-related pathways or senolytics in preclinical work.
    • These tools support exploration of candidates but do not yet reliably deliver clinically proven longevity or rejuvenation therapies.
  3. Yamanaka‑factor / partial reprogramming for rejuvenation is still preclinical.

    • There is active research using OSKM and related factors for cellular rejuvenation and partial reprogramming in mice and cell culture, and several startups (e.g., Altos Labs, NewLimit, etc.) are pursuing this area.
    • As of 2025, there is no approved human therapy that safely “de‑ages all the cells in the body” using Yamanaka‑style reprogramming, and no widely accepted clinical protocol derived from an AlphaFold‑like in‑silico combinatorial search.

Why the prediction is “inconclusive” rather than wrong

  • The claim is essentially that in the future, systems like AlphaFold 3 will let us computationally search over combinations of molecules that can safely reprogram cells and unlock powerful rejuvenation/longevity therapies.
  • This is a directional, long‑horizon forecast about what will be possible, not about something that should already have occurred by 2025.
  • Current evidence shows early enabling steps (powerful structure prediction, AI‑assisted design, active partial‑reprogramming research), but not the end state (validated, safe, clinically deployed rejuvenation therapies discovered this way).

Because the timeframe is open‑ended and the core claim concerns capabilities and therapies that plausibly lie decades away, it is too early to judge it as right or wrong. Hence the status is “inconclusive (too early)”.