Last updated Nov 29, 2025
Chamath @ 00:31:31
Verdict: Inconclusive
Tags: ai, tech
In the coming years (by around 2028), most major AI models and platforms will be open‑sourced or have open equivalents, and this trend will extend to AI hardware, with open reference designs for AI servers/chip systems becoming widely available and used, analogous to Facebook’s Open Compute in Web2.
I think it makes logical sense that you can expect the same things to happen in the AI world. The AI models and the AI platforms and all of that stuff will first get open source because it's a data integrity security issue, and then the hardware will get an open source as well, because you just want simple reference designs you can use and plug and play.
Explanation

It’s too early to judge this prediction decisively: Chamath’s time horizon was “in the coming years” / “by around 2028,” and as of 2025‑11‑30 that deadline has not arrived.

Where things stand so far (partial evidence):

  • Open(-weight) frontier models are significant but not dominant. Major open or permissively licensed model families exist (e.g., Meta’s Llama 2/3, Mistral models, various open diffusion/image models), and are widely used in industry and research. However, several of the most influential and capable systems remain closed or tightly controlled API products (e.g., OpenAI’s GPT‑4/4o, Anthropic’s Claude models, Google’s Gemini), indicating that “most major AI models and platforms” are not currently open.
  • Open ecosystems and foundations are growing. Efforts like the Linux Foundation’s AI & Data initiatives, the Open Source Initiative’s definitions around “open” AI, and community‑led model repositories (e.g., Hugging Face) show a strong and accelerating trend toward open or semi‑open models, tooling, and platforms. But this is still in flux and does not yet settle the 2028 end‑state.
  • AI hardware reference designs do exist, but are not yet the dominant paradigm. There are OCP‑style open hardware efforts for compute and AI: the Open Compute Project (originally driven by Facebook/Meta) publishes open server, rack, and data‑center designs and now includes AI/accelerator server specs; several vendors contribute reference designs for GPU and accelerator servers there. Similarly, RISC‑V and some accelerator projects publish open ISA and board/system designs. However, the most widely used AI training hardware stacks (e.g., NVIDIA HGX/DGX systems, certain cloud‑provider‑proprietary AI boxes) are still largely proprietary and vertically integrated, so we cannot yet say that open reference designs are “widely available and used” in the way Web2 commodity servers eventually were.

Why the verdict is inconclusive

  • The prediction is specifically about the situation by ~2028, and we have more than two years to go.
  • Current evidence points in the direction Chamath described: rapid growth of open models and of OCP‑style AI hardware designs. But the core quantitative claims — that most major models/platforms will have open equivalents, and that open hardware reference designs will be as widely used as Web2’s Open Compute designs — cannot yet be confirmed or refuted.

Because the key deadline has not arrived and the landscape is still evolving rapidly, the only defensible status as of 2025‑11‑30 is “inconclusive (too early).”