Last updated Nov 29, 2025
OpenAI’s custom GPT platform and associated developer network effects will create a self-reinforcing ecosystem around ChatGPT that open-source model ecosystems will find difficult to match, leading to a durable advantage for OpenAI in capabilities and available applications.
So you have a classic developer network effect where you've got OpenAI aggregating hundreds of millions of consumers because they perceive that ChatGPT is the best. Then you've got developers wanting to reach that audience. So they build custom GPTs on the OpenAI platform. That actually gives ChatGPT more capability. And that's something that open source can't easily catch up with.
Explanation

The prediction was that OpenAI’s custom GPT platform and its developer network effects would create a self‑reinforcing ecosystem that open‑source models would find hard to match, giving OpenAI a durable advantage in capabilities and applications.

1. OpenAI did build a large custom‑GPT ecosystem
OpenAI’s GPT Store launched in January 2024; by early 2024 more than 3 million custom GPTs had been created, with roughly 150–160k public GPTs in the store and hundreds to thousands of new GPTs added daily. Academic work (GPTZoo) cataloged over 700k GPT instances shortly after launch, indicating substantial real‑world experimentation and usage around the custom‑GPT platform. (openai.com)
This supports the first part of the claim: OpenAI did generate a sizable developer ecosystem and a broad catalog of applications on top of ChatGPT.

2. But open‑source ecosystems have grown larger and highly competitive
By late 2024, Hugging Face already hosted over 1 million AI models; by August 2025, an empirical study found ~1.86 million models on the platform, documenting extensive “family trees” of fine‑tuned derivatives. (arstechnica.com)
Hugging Face reports massive enterprise usage and positions itself explicitly as a way for businesses to rapidly find and deploy open‑source models, showing that open ecosystems are heavily used in production, not just research. (nutanix.com)
A joint MIT–Hugging Face study cited by the Financial Times found that by 2025 China alone accounted for a larger share of global open‑model downloads than the U.S., underscoring that open‑weight ecosystems are vibrant, global, and scaling quickly. (ft.com)

3. Open models now match or surpass OpenAI on key capabilities
Multiple open or open‑weight models rival or beat OpenAI’s frontier models on important benchmarks:

  • DeepSeek‑R1, an open‑weight reasoning model, scores higher than GPT‑4o on several standardized tests: it achieves ~90.8% on MMLU and 97.3% on the MATH‑500 benchmark, beating GPT‑4o’s 88.7% MMLU and 76.6% MATH scores. (edenai.co)
  • A July 2025 survey of live benchmarks ranks DeepSeek‑R1 as the top open‑source model and 4th overall on Chatbot Arena, with extremely strong reasoning and coding metrics (e.g., ~99% on MATH‑500 and >90% on AIME‑style tasks), explicitly noting that it matches or outperforms OpenAI’s o‑series on several reasoning benchmarks while being far cheaper. (champaignmagazine.com)
  • Meta’s Llama 4 Maverick is reported to offer performance comparable to GPT‑4o on coding and reasoning, and Meta is rolling out Llama across major consumer surfaces like WhatsApp, Messenger and Instagram—making these open models widely accessible to end‑users and developers. (theverge.com)
  • Alibaba’s Qwen3 family (Apache‑2.0 open‑license) and related QwQ / Qwen2.5 models are released as open weights; Alibaba claims Qwen2.5‑Max outperforms GPT‑4o and leading open competitors on multiple benchmarks, and Qwen3 models provide reasoning and multimodal capabilities similar in spirit to OpenAI’s o‑series and GPT‑4o, with downloadable weights for developers. (en.wikipedia.org)
  • Open‑source reasoning models like AM‑Thinking‑v1, built on open Qwen2.5‑32B, report surpassing DeepSeek‑R1 itself on AIME and LiveCodeBench, further pushing the open frontier. (arxiv.org)

Collectively, these results show that by mid‑to‑late 2025, open‑weight models are not lagging far behind ChatGPT; in several high‑value domains (math, coding, reasoning) they match or exceed OpenAI’s best generally available models.
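The headline comparison in the first bullet can be checked with simple arithmetic. A minimal sketch, using only the scores quoted above (as reported by the cited sources, not independently verified):

```python
# Reported benchmark scores (%) quoted in section 3; values are as
# claimed by the cited sources, not from independent evaluation.
scores = {
    "MMLU":     {"DeepSeek-R1": 90.8, "GPT-4o": 88.7},
    "MATH-500": {"DeepSeek-R1": 97.3, "GPT-4o": 76.6},
}

# A positive delta means the open-weight model leads on that benchmark.
deltas = {
    bench: round(pair["DeepSeek-R1"] - pair["GPT-4o"], 1)
    for bench, pair in scores.items()
}

for bench, delta in deltas.items():
    print(f"{bench}: DeepSeek-R1 leads GPT-4o by {delta:+.1f} points")
```

The gap is modest on broad knowledge (+2.1 points on MMLU) but large on the math benchmark (+20.7 points), which is consistent with the pattern the survey results describe: open reasoning models pulling ahead specifically on math and coding tasks.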

4. Open ecosystems for applications rival custom GPTs
Open‑source platforms provide their own powerful “developer network effects”:

  • Hugging Face’s Hub and Spaces support millions of models and thousands of hosted apps, with tooling (Transformers, Text‑Generation‑Inference, Spaces, etc.) designed to make deploying and sharing LLM‑based applications as easy as using a closed API. (arstechnica.com)
  • Large ecosystems have formed around open models and agent frameworks (e.g., DeepSeek R1, Llama, Qwen) for fine‑tuned assistants, local deployments, and enterprise‑specific agents, often favored because they avoid vendor lock‑in and reduce inference cost—key considerations for businesses choosing platforms. (leanware.co)
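One mechanism behind the lock‑in point above: most open‑model servers (vLLM, Ollama, llama.cpp's server, and others) expose an OpenAI‑compatible `/v1/chat/completions` endpoint, so moving a workload between a hosted API and a self‑hosted open model is largely a matter of changing a base URL and model name. A minimal sketch of the shared request shape (the base URLs and model names here are illustrative placeholders, not specific deployments):

```python
def chat_request(base_url: str, model: str, user_msg: str) -> dict:
    """Build an OpenAI-style chat-completions request.

    The same payload shape is accepted by OpenAI's API and by
    self-hosted open-model servers that expose a compatible
    /v1/chat/completions endpoint.
    """
    return {
        "url": f"{base_url}/v1/chat/completions",
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": user_msg}],
        },
    }

# Swapping providers changes only the base URL and model name;
# the message payload is identical.
hosted = chat_request("https://api.openai.com", "gpt-4o", "Hi")
local = chat_request("http://localhost:8000", "deepseek-r1", "Hi")

print(local["url"])  # http://localhost:8000/v1/chat/completions
```

Because the client-side contract is interchangeable, the switching cost that a proprietary app store would normally impose is much lower here, which weakens the lock-in half of the original prediction.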

Meanwhile, although OpenAI’s GPT Store has hundreds of thousands of GPTs and significant usage, its growth plateaued relatively quickly and it remains tied to ChatGPT’s paid tiers and OpenAI’s policies, limiting its reach compared with globally downloadable open‑weight models that can be deployed anywhere. (originality.ai)

5. Net assessment vs. the original claim
Nearly two years after the prediction (Feb 2024 → late 2025):

  • Yes, OpenAI retains one of the strongest proprietary consumer platforms, and custom GPTs meaningfully extend ChatGPT’s capabilities.
  • But open‑source ecosystems have not been left behind. On benchmarks and in real‑world deployments, open‑weight models like DeepSeek‑R1, Llama 4, Qwen3 and successors have caught up to or surpassed OpenAI’s widely available models in several capability areas, while the open model/application ecosystem (Hugging Face and others) is at least comparable in scale and diversity to the GPT Store.

Open‑source ecosystems have matched and, in some niches, exceeded OpenAI’s capabilities and application diversity. That outcome contradicts the claim that open source would find it “difficult to catch up” and that OpenAI would enjoy a clear, durable moat from custom‑GPT network effects, so the prediction is best judged as wrong.