From the podcast: "He made this huge bet on this 100,000 GPU cluster. People thought, wow, that's a lot. Is it going to bear fruit? Then he said, no, actually, I'm scaling it up to 250,000. Then he said, it's going to scale up to a million."
As of November 30, 2025, xAI’s Colossus supercomputer (used to train Grok) has not yet reached anything close to 1,000,000 GPUs in operation, but there are multiple public plans and ongoing build‑outs aiming for that scale.
- Current scale (well below 1M GPUs): Recent coverage summarizing Elon Musk’s statements indicates that xAI currently operates on the order of ~200,000–230,000 GPUs (H100/H200 plus roughly 30,000 Blackwell/GB200 units) for Grok training, far below 1,000,000. (techradar.com) Critical analyses likewise argue that, due to power limits at the Memphis site, only a fraction of the installed GPUs can be powered simultaneously, so effective capacity is likely even lower in practice. (bluemoonofshanghai.com)
- Documented plans to reach ~1M GPUs: Both before and after the podcast, Musk and xAI partners publicly described plans to scale Colossus to around or above one million GPUs. Tom’s Hardware, citing the Financial Times, reports that xAI intends to expand Colossus from over 100,000 H100s to more than one million GPUs, which would make it one of the most powerful supercomputers in the world. (tomshardware.com) Supermicro has also publicly stated that it is supporting xAI’s “10‑fold expansion … with over 1 million GPUs” at the Memphis facility. (tomshardware.com) Local economic‑development announcements and follow‑on reporting describe an expansion that will “incorporate a minimum of one million GPUs,” with project completion targeted for 2026. (eonmsk.com)
- No evidence of cancellation, but no 1M‑GPU cluster yet: Energy‑infrastructure reports and power‑plant acquisitions explicitly frame the one‑million‑GPU configuration as a future goal, discussing the power needs and construction work required for such a cluster rather than an already‑achieved reality. (finance.yahoo.com) None of the major technical or business sources surveyed claims that xAI has actually deployed and brought online ~1,000,000 GPUs as of late 2025.
Because Chamath’s prediction is that xAI’s Grok cluster will be scaled “up to a million” over time, and the available evidence shows:
- substantial progress beyond the initial 100k/250k plans, yet
- the system is still far short of one million GPUs, and
- the publicly stated timeline to reach ~1M extends into 2026 and beyond,
it is too early to say whether the prediction will ultimately prove true or false. The outcome depends on whether xAI actually completes the planned scale‑up in the coming years. Therefore, the appropriate classification for now is “inconclusive (too early).”