By the end of the two‑quarter window (roughly through Q4 2023), the first part of Chamath’s scenario did happen, but the core outcome he was betting on did not.
1. Did AMD, Meta, Google, Microsoft, and Amazon announce substantive AI hardware/infra in that window?
Yes:
- AMD detailed its Instinct MI300 series (MI300X/MI300A) as data‑center AI accelerators for generative‑AI training and inference at its June 13, 2023 "Data Center and AI Technology Premiere," and formally launched them at the Dec. 6, 2023 "Advancing AI" event with major partners like Microsoft, Meta, Oracle, Dell, HPE, and others adopting them. (amd.com)
- Meta publicly described its in‑house Meta Training and Inference Accelerator (MTIA) chip family and a next‑gen, AI‑optimized data‑center design in May 2023, explicitly as custom AI accelerators and infrastructure for its workloads. (about.fb.com)
- Google announced Cloud TPU v5e, a purpose‑built AI accelerator for both training and inference, at Google Cloud Next ’23 on Aug. 29, 2023, describing it as its most cost‑efficient, scalable TPU to date, and followed up with the higher‑end TPU v5p in Dec. 2023. (cloud.google.com)
- Microsoft unveiled its own Azure Maia 100 AI accelerator (and Cobalt 100 CPU) at Microsoft Ignite in mid‑November 2023, explicitly aiming to reduce dependence on third‑party GPUs for Azure AI workloads. (arstechnica.com)
- Amazon/AWS used re:Invent 2023 (late Nov. 2023) to announce Trainium2 (second‑gen AI training chip) and Graviton4, positioning Trainium2 as a high‑performance, lower‑cost alternative for training large foundation models and LLMs, and specifically framing these chips as part of a strategy to reduce reliance on Nvidia GPUs. (infoworld.com)
On the narrow question of announcements, his expectation was accurate: all five players did roll out or materially advance substantive AI hardware/infrastructure efforts within ~two quarters of Q2 2023.
2. Did this create “meaningful vendor diversity” so forecasts were no longer effectively 100% Nvidia?
This is where his bet failed. Despite those announcements, Nvidia’s grip on AI data‑center compute remained overwhelming, and market forecasts continued to treat Nvidia as the overwhelmingly dominant provider:
- TechInsights data (summarized in DCD) shows Nvidia had 98% of data‑center GPU shipments in 2023, underscoring that essentially all deployed accelerator GPUs that year were still Nvidia’s. (datacenterdynamics.com)
- SiliconAnalysts likewise estimates Nvidia had about 98% of data‑center GPU revenue in 2023, and even forward‑looking projections still give Nvidia ~87% of AI accelerator revenues in 2027, i.e., forecasts remain overwhelmingly Nvidia‑centric despite AMD, custom TPUs, and cloud chips. (siliconanalysts.com)
- A mid‑2024 CNBC overview notes Nvidia’s AI accelerators still controlled an estimated 70–95% of the AI chip market, even as AMD and custom cloud chips were ramping; even the low end of that range leaves Nvidia dominant, and the high end implies near‑monopoly. (cnbc.com)
- Benzinga/TechInsights reporting in 2024 puts Nvidia at 92% of the data‑center GPU market, with Meta simultaneously planning massive deployments of Nvidia H100‑class GPUs, illustrating that even large hyperscalers that are designing custom chips still rely primarily on Nvidia for training capacity. (benzinga.com)
- Analysts and market research in 2024–2025 consistently describe Nvidia as holding roughly 80–90%+ of the AI accelerator/GPU market for data centers, with AMD and custom cloud ASICs making up only a small single‑digit or low‑teens share. (barrons.com)
In other words, the announcements did not, within the predicted timeframe, translate into AI compute forecasts in which Nvidia was just one of several roughly comparable vendors. Instead, forecasts continued to treat Nvidia as a near‑monopoly, with the others as emerging but still marginal alternatives.
Chamath’s bet was not merely that announcements would occur; it was that they would prevent Nvidia from “running away with this” and force forecasts away from being effectively 100% Nvidia‑centric. The first part happened, but the second clearly did not, so the overall prediction must be scored as wrong.