"I think we're not going to slow down. I actually think it's going the other way. I think things are going to speed up."
Evidence since March 31, 2023 indicates that AI development and deployment have accelerated, not slowed, despite frequent calls for pauses, regulation, or moratoria.
Key observations (all relative to the 2022–early 2023 baseline when ChatGPT first appeared):
- Major model releases accelerated in cadence and scale
- OpenAI moved from GPT‑4 (March 2023) to multimodal and tool‑using capabilities (Vision, Code Interpreter, function calling) and then to GPT‑4o/4.1‑class models with strong edge and mobile integrations, with steadily improving cost, speed, and quality.
- Anthropic progressed rapidly from early Claude versions to Claude 2, 2.1, and 3‑series models (Opus, Sonnet, Haiku), with each generation showing substantial capability gains and heavier enterprise adoption.
- Google advanced from PaLM/LaMDA era systems to PaLM 2, Gemini‑class (multi‑modal) models, and tight integration across Search, Workspace, Android, etc.
- Meta went from research‑only large language models to LLaMA 1 (initially research‑gated, then widely circulated after a leak), followed by open‑weight Llama 2 and 3, markedly increasing model quality while catalyzing a large open‑source ecosystem.
The frequency and magnitude of major model releases and capability jumps since early 2023 are notably higher than in 2020–2022, and the models are deployed into many more products and workflows than before.
- Broad deployment into consumer and enterprise products
- General‑purpose AI assistants (e.g., integrated chatbots and copilots) are now embedded in operating systems, search engines, productivity suites, developer tools, CRM/ERP platforms, and design tools. This includes system‑level or first‑party “copilots” from multiple big tech companies and widespread third‑party integrations.
- Enterprise adoption has expanded rapidly, with many large firms rolling out internal copilots, code assistants, customer‑service bots, and document‑analysis tools, often powered by frontier APIs or strong open‑source models.
- On the consumer side, AI features (image generation, summarization, translation, smart replies, etc.) are now standard in messaging, productivity, creative tools, and smartphones.
- Capital, headcount, and infrastructure growth
- Capital flows into AI have grown dramatically since early 2023: multi‑billion‑dollar strategic investments in frontier‑model companies, massive GPU/accelerator buildouts at hyperscalers, and large private rounds for AI startups across sectors (foundation models, agents, vertical applications, infrastructure).
- Hyperscalers have raced to secure GPUs and build custom AI accelerators, and global AI compute capacity has grown sharply year‑over‑year—an essential signal that development capacity is expanding, not pausing.
- Regulatory and "pause" efforts have not produced a slowdown in core development
- In March 2023, an open letter from the Future of Life Institute called for a 6‑month pause on training systems more powerful than GPT‑4. Despite significant publicity, there is no evidence that major labs paused or reduced the pace of R&D; instead, they continued training and deploying more advanced models.
- The U.S. and EU have advanced regulatory efforts (e.g., the EU AI Act negotiations, the U.S. AI Executive Order, voluntary safety commitments by major labs), but these have not imposed blanket moratoria or substantial slowdowns in frontier‑model training or deployment across the industry. Instead, the pattern has been: continue or accelerate development while adding governance, safety, and reporting layers.
- Open‑source and academic activity exploded
- Since LLaMA 1’s initial leak and the later official Llama releases, the open‑source community has produced a rapid cascade of model variants, fine‑tunes, and new architectures, including lightweight, on‑device‑capable models and specialized models for code, vision, audio, and agents.
- Tools, libraries, and frameworks for building AI applications (RLHF pipelines, orchestration frameworks, evaluation tools, agents, vector databases, etc.) have multiplied, further reducing friction to deploying AI.
Overall assessment vs. the prediction
- The prediction was: “Over the coming years, the pace of AI innovation and deployment will not slow in response to calls for a pause; instead, development activity and progress in AI will accelerate relative to the 2022–early 2023 baseline.”
- Between March 31, 2023 and November 30, 2025, we observe:
- No industry‑wide slowdown or moratorium attributable to pause letters or similar advocacy.
- A clear increase in the rate of major model releases, infrastructure build‑out, funding, and commercial deployment compared with 2020–2022.
- Regulatory efforts that largely coexist with, rather than significantly slow, AI R&D.
Given the available evidence as of November 30, 2025, the prediction that AI would not slow down but instead speed up in the face of pause calls is substantially borne out.