Last updated Nov 29, 2025
aitech
Within the next several years, foundational OpenAI-like capabilities (large language models, etc.) will become widely available from multiple vendors and will be at least partially commoditized, reducing durable advantage from the core models themselves.
you said that these open AI capabilities are eventually going to become commoditized or certainly much more widely available... there'll be multiple players that offer them.
View on YouTube
Explanation

As of late 2025, the prediction that OpenAI‑like foundational capabilities would be widely available from multiple vendors and at least partially commoditized is essentially correct.

1. Multiple vendors with broadly similar foundational capabilities

Since early 2023, many large‑scale, general‑purpose LLMs comparable in kind to OpenAI’s GPT family have been released and productized by different organizations:

  • Anthropic: Claude 3 (Opus, Sonnet, Haiku) and later Claude 3.5 models provide GPT‑4‑class reasoning and are widely available via API and in third‑party products.
  • Google: Gemini (Ultra, Pro, Flash) and its successors are positioned as general‑purpose foundation models integrated across Google Cloud and consumer products (Search, Workspace, Android, etc.).
  • Meta: LLaMA 2 and LLaMA 3 are open‑weight models explicitly released as general foundation models for commercial and research use, widely adopted and fine‑tuned by many companies.
  • Mistral and other startups: Mistral’s Mixtral and later models, plus many other open and closed LLMs, provide competitive capabilities and are accessible via standard APIs.

The net effect is that core LLM capabilities (chat, coding assistance, summarization, Q&A, etc.) are no longer exclusive to a single vendor; they are available from several big tech companies and multiple startups, plus open‑weight models that anyone can deploy.

2. Evidence of partial commoditization

In the context of the quote, “commoditized” means that the base capability (a strong general‑purpose LLM) is available from many suppliers, with decreasing differentiation and with competition driven partly by price, latency, and deployment form factor rather than by unique raw capability.

By 2024–2025, the market shows exactly these features:

  • Price competition and similar SKUs: Major providers all expose comparable tiers (fast/cheap vs. large/expensive models) and frequently cut prices or introduce cheaper distilled models. This is classic commoditization behavior.
  • Standardized interfaces: Many providers follow very similar JSON‑based chat‑completions APIs, and tools like LangChain, LlamaIndex, and various “multi‑provider” SDKs treat models from OpenAI, Anthropic, Google, and open‑weight checkpoints as largely swappable components (see the sketch after this list).
  • Open‑weight alternatives: LLaMA 2/3, Mistral, and other open‑weight models allow enterprises to self‑host competitive models on their own infrastructure or edge devices, further eroding any exclusive moat around simply having a capable base LLM.
  • Differentiation shifting up the stack: Most competitive AI products now differentiate with fine‑tuning, retrieval‑augmented generation, proprietary data, UX, integration, and domain‑specific tooling, not just which base model they use. This is strong evidence that the foundational model itself is treated more like a commodity input in many contexts.
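
To make the “standardized interfaces” and self‑hosting points concrete, here is a minimal sketch, assuming each target endpoint exposes an OpenAI‑compatible chat‑completions API (as many hosted providers and local servers such as vLLM or Ollama do). The base URLs, environment‑variable names, and model identifiers are illustrative placeholders rather than specific vendor endpoints; the point is that the application code does not change when the underlying model provider does.

```python
# Minimal sketch: the same chat-completions call routed to different providers.
# Assumes each endpoint is OpenAI-compatible; base URLs, env var names, and
# model names below are illustrative placeholders, not real vendor endpoints.
import os
from openai import OpenAI

PROVIDERS = {
    # Hosted frontier-model vendor (the default OpenAI endpoint).
    "openai": dict(base_url="https://api.openai.com/v1",
                   api_key=os.environ.get("OPENAI_API_KEY", ""),
                   model="gpt-4o-mini"),
    # Hypothetical OpenAI-compatible endpoint from another vendor.
    "other_vendor": dict(base_url="https://api.example-llm-vendor.com/v1",
                         api_key=os.environ.get("OTHER_VENDOR_API_KEY", ""),
                         model="vendor-large-model"),
    # Self-hosted open-weight model served locally (e.g., via vLLM or Ollama,
    # both of which can expose OpenAI-compatible endpoints).
    "self_hosted": dict(base_url="http://localhost:8000/v1",
                        api_key="not-needed-for-local",
                        model="llama-3-8b-instruct"),
}

def ask(provider: str, prompt: str) -> str:
    """Send the same prompt through whichever provider is selected."""
    cfg = PROVIDERS[provider]
    client = OpenAI(base_url=cfg["base_url"], api_key=cfg["api_key"])
    response = client.chat.completions.create(
        model=cfg["model"],
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Swapping vendors is a one-line change; the application code is identical.
    print(ask("openai", "Summarize why base LLMs are becoming commodity inputs."))
```

This is exactly the dynamic described above: the base model becomes a swappable input, and differentiation moves up the stack to data, retrieval, and product integration.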

At the same time, the market is not fully commoditized in a strict economic sense: frontier models (the very top‑end proprietary models) are still concentrated in a few labs and can have meaningful capability gaps. But the prediction only claimed that these capabilities were “eventually going to become commoditized or certainly much more widely available… there’ll be multiple players that offer them,” and that the durable advantage from the core models themselves would be reduced, not eliminated.

Given:

  • The existence of many broadly similar LLM providers.
  • Active price and performance competition.
  • Open‑weight models narrowing the gap and enabling self‑hosting.
  • Strategic focus shifting away from raw model access toward data, integration, and product.

…it is reasonable to conclude that the prediction has effectively come true in the “partially commoditized, widely available from multiple players” sense that Sacks described.